00:00:00.001 Started by upstream project "autotest-per-patch" build number 122884
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.057 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.058 The recommended git tool is: git
00:00:00.058 using credential 00000000-0000-0000-0000-000000000002
00:00:00.059 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.086 Fetching changes from the remote Git repository
00:00:00.090 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.127 Using shallow fetch with depth 1
00:00:00.127 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.127 > git --version # timeout=10
00:00:00.156 > git --version # 'git version 2.39.2'
00:00:00.156 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.157 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.157 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.670 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.681 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.692 Checking out Revision c7986954d8037b9c61764d44ed2af24625b251c6 (FETCH_HEAD)
00:00:03.692 > git config core.sparsecheckout # timeout=10
00:00:03.706 > git read-tree -mu HEAD # timeout=10
00:00:03.720 > git checkout -f c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=5
00:00:03.739 Commit message: "inventory/dev: add missing long names"
00:00:03.739 > git rev-list --no-walk c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=10
00:00:03.826 [Pipeline] Start of Pipeline
00:00:03.840 [Pipeline] library
00:00:03.841 Loading library shm_lib@master
00:00:03.841 Library shm_lib@master is cached. Copying from home.
00:00:03.857 [Pipeline] node
00:00:03.868 Running on WFP43 in /var/jenkins/workspace/nvmf-phy-autotest
00:00:03.870 [Pipeline] {
00:00:03.879 [Pipeline] catchError
00:00:03.880 [Pipeline] {
00:00:03.889 [Pipeline] wrap
00:00:03.897 [Pipeline] {
00:00:03.905 [Pipeline] stage
00:00:03.906 [Pipeline] { (Prologue)
00:00:04.072 [Pipeline] sh
00:00:04.356 + logger -p user.info -t JENKINS-CI
00:00:04.373 [Pipeline] echo
00:00:04.375 Node: WFP43
00:00:04.383 [Pipeline] sh
00:00:04.682 [Pipeline] setCustomBuildProperty
00:00:04.692 [Pipeline] echo
00:00:04.693 Cleanup processes
00:00:04.698 [Pipeline] sh
00:00:04.981 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:04.981 2781282 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:04.994 [Pipeline] sh
00:00:05.276 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:05.276 ++ grep -v 'sudo pgrep'
00:00:05.276 ++ awk '{print $1}'
00:00:05.276 + sudo kill -9
00:00:05.276 + true
00:00:05.289 [Pipeline] cleanWs
00:00:05.299 [WS-CLEANUP] Deleting project workspace...
00:00:05.299 [WS-CLEANUP] Deferred wipeout is used...
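The "Cleanup processes" step above relies on a common pgrep/kill shell idiom. A minimal sketch of that idiom (the workspace path is taken from the log; the variable names are illustrative, not the pipeline's actual script):

    # Find leftovers from a previous run; grep -v drops the pgrep command
    # itself from the list, awk keeps only the PID column.
    pids=$(sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk \
           | grep -v 'sudo pgrep' | awk '{print $1}')
    # With no stale PIDs, 'kill -9' fails; tolerating that is why the trace
    # shows '+ true' immediately after '+ sudo kill -9'.
    sudo kill -9 $pids || true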
00:00:05.305 [WS-CLEANUP] done
00:00:05.309 [Pipeline] setCustomBuildProperty
00:00:05.321 [Pipeline] sh
00:00:05.602 + sudo git config --global --replace-all safe.directory '*'
00:00:05.654 [Pipeline] nodesByLabel
00:00:05.655 Found a total of 1 nodes with the 'sorcerer' label
00:00:05.662 [Pipeline] httpRequest
00:00:05.666 HttpMethod: GET
00:00:05.667 URL: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz
00:00:05.671 Sending request to url: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz
00:00:05.679 Response Code: HTTP/1.1 200 OK
00:00:05.679 Success: Status code 200 is in the accepted range: 200,404
00:00:05.680 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz
00:00:06.635 [Pipeline] sh
00:00:06.913 + tar --no-same-owner -xf jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz
00:00:06.928 [Pipeline] httpRequest
00:00:06.932 HttpMethod: GET
00:00:06.932 URL: http://10.211.164.101/packages/spdk_913aa023f115d5d922e02f03575f45b620a37a2f.tar.gz
00:00:06.933 Sending request to url: http://10.211.164.101/packages/spdk_913aa023f115d5d922e02f03575f45b620a37a2f.tar.gz
00:00:06.936 Response Code: HTTP/1.1 200 OK
00:00:06.936 Success: Status code 200 is in the accepted range: 200,404
00:00:06.937 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_913aa023f115d5d922e02f03575f45b620a37a2f.tar.gz
00:00:25.343 [Pipeline] sh
00:00:25.633 + tar --no-same-owner -xf spdk_913aa023f115d5d922e02f03575f45b620a37a2f.tar.gz
00:00:28.171 [Pipeline] sh
00:00:28.447 + git -C spdk log --oneline -n5
00:00:28.447 913aa023f test/accel: DIF verify and generate copy accel functional tests refactor
00:00:28.447 0008c8571 test/accel: DIF verify copy accel functional tests
00:00:28.447 ea11a8089 examples/accel: DIF verify copy accel perf tests
00:00:28.447 4b43b7c22 lib/accel: DIF verify copy accel SW implementation
00:00:28.447 79f76d9f0 ut/raid: fix getting process thread
00:00:28.458 [Pipeline] }
00:00:28.475 [Pipeline] // stage
00:00:28.483 [Pipeline] stage
00:00:28.485 [Pipeline] { (Prepare)
00:00:28.500 [Pipeline] writeFile
00:00:28.517 [Pipeline] sh
00:00:28.790 + logger -p user.info -t JENKINS-CI
00:00:28.803 [Pipeline] sh
00:00:29.083 + logger -p user.info -t JENKINS-CI
00:00:29.095 [Pipeline] sh
00:00:29.375 + cat autorun-spdk.conf
00:00:29.375 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:29.375 SPDK_TEST_NVMF=1
00:00:29.375 SPDK_TEST_NVME_CLI=1
00:00:29.375 SPDK_TEST_NVMF_NICS=mlx5
00:00:29.375 SPDK_RUN_UBSAN=1
00:00:29.375 NET_TYPE=phy
00:00:29.382 RUN_NIGHTLY=0
00:00:29.388 [Pipeline] readFile
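autorun-spdk.conf, printed above, is plain KEY=VALUE shell, which is why the runner can simply source it. A minimal sketch of how such a file is consumed downstream (the conditional is illustrative, not SPDK's actual code):

    source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
    # Test scripts then branch on the sourced variables, e.g.:
    if [[ "$SPDK_TEST_NVMF" -eq 1 && "$SPDK_TEST_NVMF_NICS" == "mlx5" ]]; then
        echo "running NVMe-oF tests against mlx5 NICs"
    fi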
00:00:29.411 [Pipeline] withEnv
00:00:29.413 [Pipeline] {
00:00:29.429 [Pipeline] sh
00:00:29.713 + set -ex
00:00:29.713 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]]
00:00:29.713 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:00:29.713 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:29.713 ++ SPDK_TEST_NVMF=1
00:00:29.713 ++ SPDK_TEST_NVME_CLI=1
00:00:29.713 ++ SPDK_TEST_NVMF_NICS=mlx5
00:00:29.713 ++ SPDK_RUN_UBSAN=1
00:00:29.713 ++ NET_TYPE=phy
00:00:29.713 ++ RUN_NIGHTLY=0
00:00:29.713 + case $SPDK_TEST_NVMF_NICS in
00:00:29.713 + DRIVERS=mlx5_ib
00:00:29.713 + [[ -n mlx5_ib ]]
00:00:29.713 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:29.713 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:36.285 rmmod: ERROR: Module irdma is not currently loaded
00:00:36.285 rmmod: ERROR: Module i40iw is not currently loaded
00:00:36.285 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:36.285 + true
00:00:36.285 + for D in $DRIVERS
00:00:36.285 + sudo modprobe mlx5_ib
00:00:36.285 + exit 0
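The rmmod errors above are expected noise: the script unloads every RDMA driver it knows about, ignoring failures, then loads only the driver selected by SPDK_TEST_NVMF_NICS. A sketch reconstructed from the trace (the case table is abbreviated to the branch taken in this run):

    case $SPDK_TEST_NVMF_NICS in
        mlx5) DRIVERS=mlx5_ib ;;
    esac
    # Modules that are not loaded make rmmod fail; swallowing that matches
    # the '+ true' seen in the trace above.
    sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
    for D in $DRIVERS; do
        sudo modprobe $D
    done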
00:00:36.294 [Pipeline] }
00:00:36.314 [Pipeline] // withEnv
00:00:36.319 [Pipeline] }
00:00:36.337 [Pipeline] // stage
00:00:36.347 [Pipeline] catchError
00:00:36.349 [Pipeline] {
00:00:36.365 [Pipeline] timeout
00:00:36.365 Timeout set to expire in 40 min
00:00:36.366 [Pipeline] {
00:00:36.381 [Pipeline] stage
00:00:36.382 [Pipeline] { (Tests)
00:00:36.398 [Pipeline] sh
00:00:36.683 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest
00:00:36.683 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest
00:00:36.683 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest
00:00:36.683 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]]
00:00:36.683 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:36.683 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output
00:00:36.683 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]]
00:00:36.683 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:00:36.683 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output
00:00:36.683 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:00:36.683 + cd /var/jenkins/workspace/nvmf-phy-autotest
00:00:36.683 + source /etc/os-release
00:00:36.683 ++ NAME='Fedora Linux'
00:00:36.683 ++ VERSION='38 (Cloud Edition)'
00:00:36.683 ++ ID=fedora
00:00:36.683 ++ VERSION_ID=38
00:00:36.683 ++ VERSION_CODENAME=
00:00:36.683 ++ PLATFORM_ID=platform:f38
00:00:36.683 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:36.683 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:36.683 ++ LOGO=fedora-logo-icon
00:00:36.683 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:36.683 ++ HOME_URL=https://fedoraproject.org/
00:00:36.683 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:36.683 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:36.683 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:36.683 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:36.683 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:36.683 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:36.683 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:36.683 ++ SUPPORT_END=2024-05-14
00:00:36.683 ++ VARIANT='Cloud Edition'
00:00:36.683 ++ VARIANT_ID=cloud
00:00:36.683 + uname -a
00:00:36.683 Linux spdk-wfp-43 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:36.683 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status
00:00:38.589 Hugepages
00:00:38.589 node hugesize free / total
00:00:38.589 node0 1048576kB 0 / 0
00:00:38.848 node0 2048kB 0 / 0
00:00:38.848 node1 1048576kB 0 / 0
00:00:38.848 node1 2048kB 0 / 0
00:00:38.848
00:00:38.848 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:38.848 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:00:38.848 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:00:38.848 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:00:38.848 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:00:38.848 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:00:38.848 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:00:38.848 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:00:38.848 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:00:38.848 NVMe 0000:5f:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:00:38.848 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:00:38.848 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:00:38.848 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:00:38.848 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:00:38.848 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:00:38.848 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:00:38.848 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:00:38.848 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:00:38.848 + rm -f /tmp/spdk-ld-path
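The table above comes from setup.sh status: hugepage pools per NUMA node, then one row per PCI function with its bound kernel driver. Equivalent one-off spot checks, should the table need verifying by hand (illustrative sysfs commands, not what setup.sh itself runs):

    # Free 2 MiB hugepages per NUMA node:
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages
    # Driver currently bound to the NVMe drive at BDF 0000:5f:00.0:
    basename "$(readlink /sys/bus/pci/devices/0000:5f:00.0/driver)"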
00:00:38.848 + source autorun-spdk.conf
00:00:38.848 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:38.848 ++ SPDK_TEST_NVMF=1
00:00:38.848 ++ SPDK_TEST_NVME_CLI=1
00:00:38.848 ++ SPDK_TEST_NVMF_NICS=mlx5
00:00:38.848 ++ SPDK_RUN_UBSAN=1
00:00:38.848 ++ NET_TYPE=phy
00:00:38.848 ++ RUN_NIGHTLY=0
00:00:38.848 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:38.848 + [[ -n '' ]]
00:00:38.848 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:38.848 + for M in /var/spdk/build-*-manifest.txt
00:00:38.848 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:38.848 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:00:38.848 + for M in /var/spdk/build-*-manifest.txt
00:00:38.848 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:38.848 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:00:38.849 ++ uname
00:00:38.849 + [[ Linux == \L\i\n\u\x ]]
00:00:38.849 + sudo dmesg -T
00:00:39.108 + sudo dmesg --clear
00:00:39.108 + dmesg_pid=2782644
00:00:39.108 + [[ Fedora Linux == FreeBSD ]]
00:00:39.108 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:39.108 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:39.108 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:39.108 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:39.108 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:39.108 + [[ -x /usr/src/fio-static/fio ]]
00:00:39.108 + export FIO_BIN=/usr/src/fio-static/fio
00:00:39.108 + sudo dmesg -Tw
00:00:39.108 + FIO_BIN=/usr/src/fio-static/fio
00:00:39.108 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:39.108 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:39.108 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:39.108 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:39.108 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:39.108 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:39.108 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:39.108 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:39.108 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:00:39.108 Test configuration:
00:00:39.108 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:39.108 SPDK_TEST_NVMF=1
00:00:39.108 SPDK_TEST_NVME_CLI=1
00:00:39.108 SPDK_TEST_NVMF_NICS=mlx5
00:00:39.108 SPDK_RUN_UBSAN=1
00:00:39.108 NET_TYPE=phy
00:00:39.108 RUN_NIGHTLY=0
11:24:09 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
11:24:09 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
11:24:09 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
11:24:09 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
11:24:09 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
11:24:09 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
11:24:09 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
11:24:09 -- paths/export.sh@5 -- $ export PATH
11:24:09 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
11:24:09 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
11:24:09 -- common/autobuild_common.sh@437 -- $ date +%s
11:24:09 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715765049.XXXXXX
11:24:09 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715765049.0O2z1D
11:24:09 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
11:24:09 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']'
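The per-run scratch area is named from the epoch so concurrent builds cannot collide, as the mktemp trace above shows; the same pattern in isolation (illustrative):

    ts=$(date +%s)                            # 1715765049 in this run
    SPDK_WORKSPACE=$(mktemp -dt "spdk_${ts}.XXXXXX")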
11:24:09 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/'
11:24:09 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp'
11:24:09 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
11:24:09 -- common/autobuild_common.sh@453 -- $ get_config_params
11:24:09 -- common/autotest_common.sh@395 -- $ xtrace_disable
11:24:09 -- common/autotest_common.sh@10 -- $ set +x
11:24:09 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
11:24:09 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
11:24:09 -- pm/common@17 -- $ local monitor
11:24:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
11:24:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
11:24:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
11:24:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
11:24:09 -- pm/common@25 -- $ sleep 1
11:24:09 -- pm/common@21 -- $ date +%s
11:24:09 -- pm/common@21 -- $ date +%s
11:24:09 -- pm/common@21 -- $ date +%s
11:24:09 -- pm/common@21 -- $ date +%s
11:24:09 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715765049
11:24:09 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715765049
11:24:09 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715765049
11:24:09 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715765049
00:00:39.109 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715765049_collect-vmstat.pm.log
00:00:39.109 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715765049_collect-cpu-load.pm.log
00:00:39.109 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715765049_collect-cpu-temp.pm.log
00:00:39.109 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715765049_collect-bmc-pm.bmc.pm.log
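start_monitor_resources launches one collector per resource, all sharing the epoch-derived log prefix, so a run's CPU-load, CPU-temp, vmstat and BMC power logs line up by name. A sketch of the pattern (backgrounding is assumed; the -d/-l/-p flags are copied from the trace):

    out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power
    prefix=monitor.autobuild.sh.$(date +%s)
    for mon in collect-cpu-load collect-cpu-temp collect-vmstat; do
        ./scripts/perf/pm/$mon -d "$out" -l -p "$prefix" &
    done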
00:00:40.045 11:24:10 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
11:24:10 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
11:24:10 -- spdk/autobuild.sh@12 -- $ umask 022
11:24:10 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
11:24:10 -- spdk/autobuild.sh@16 -- $ date -u
00:00:40.045 Wed May 15 09:24:10 AM UTC 2024
11:24:10 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:40.045 v24.05-pre-561-g913aa023f
11:24:10 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
11:24:10 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
11:24:10 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
11:24:10 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']'
11:24:10 -- common/autotest_common.sh@1103 -- $ xtrace_disable
11:24:10 -- common/autotest_common.sh@10 -- $ set +x
00:00:40.304 ************************************
00:00:40.305 START TEST ubsan
00:00:40.305 ************************************
11:24:10 -- common/autotest_common.sh@1121 -- $ echo 'using ubsan'
00:00:40.305 using ubsan
00:00:40.305
00:00:40.305 real 0m0.000s
00:00:40.305 user 0m0.000s
00:00:40.305 sys 0m0.000s
11:24:10 -- common/autotest_common.sh@1122 -- $ xtrace_disable
11:24:10 -- common/autotest_common.sh@10 -- $ set +x
00:00:40.305 ************************************
00:00:40.305 END TEST ubsan
00:00:40.305 ************************************
11:24:10 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
11:24:10 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
11:24:10 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
11:24:10 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
11:24:10 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
11:24:10 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
11:24:10 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
11:24:10 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
11:24:10 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared
00:00:40.305 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk
00:00:40.305 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build
00:00:40.870 Using 'verbs' RDMA provider
00:00:53.649 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:05.862 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:06.431 Creating mk/config.mk...done.
00:01:06.431 Creating mk/cc.flags.mk...done.
00:01:06.431 Type 'make' to build.
11:24:36 -- spdk/autobuild.sh@69 -- $ run_test make make -j72
11:24:36 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']'
11:24:36 -- common/autotest_common.sh@1103 -- $ xtrace_disable
11:24:36 -- common/autotest_common.sh@10 -- $ set +x
00:01:06.431 ************************************
00:01:06.431 START TEST make
00:01:06.431 ************************************
11:24:36 -- common/autotest_common.sh@1121 -- $ make -j72
00:01:06.690 make[1]: Nothing to be done for 'all'.
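run_test, used above for both the ubsan check and the top-level make, is a banner-and-timer wrapper: the START/END TEST lines and the real/user/sys triple come from it. An illustrative reimplementation (not SPDK's actual function):

    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"                 # prints the real/user/sys lines seen above
        echo "END TEST $name"
    }
    run_test ubsan echo 'using ubsan'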
00:01:16.698 The Meson build system
00:01:16.698 Version: 1.3.1
00:01:16.698 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk
00:01:16.698 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp
00:01:16.698 Build type: native build
00:01:16.698 Program cat found: YES (/usr/bin/cat)
00:01:16.698 Project name: DPDK
00:01:16.698 Project version: 23.11.0
00:01:16.699 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:16.699 C linker for the host machine: cc ld.bfd 2.39-16
00:01:16.699 Host machine cpu family: x86_64
00:01:16.699 Host machine cpu: x86_64
00:01:16.699 Message: ## Building in Developer Mode ##
00:01:16.699 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:16.699 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:16.699 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:16.699 Program python3 found: YES (/usr/bin/python3)
00:01:16.699 Program cat found: YES (/usr/bin/cat)
00:01:16.699 Compiler for C supports arguments -march=native: YES
00:01:16.699 Checking for size of "void *" : 8
00:01:16.699 Checking for size of "void *" : 8 (cached)
00:01:16.699 Library m found: YES
00:01:16.699 Library numa found: YES
00:01:16.699 Has header "numaif.h" : YES
00:01:16.699 Library fdt found: NO
00:01:16.699 Library execinfo found: NO
00:01:16.699 Has header "execinfo.h" : YES
00:01:16.699 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:16.699 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:16.699 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:16.699 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:16.699 Run-time dependency openssl found: YES 3.0.9
00:01:16.699 Run-time dependency libpcap found: YES 1.10.4
00:01:16.699 Has header "pcap.h" with dependency libpcap: YES
00:01:16.699 Compiler for C supports arguments -Wcast-qual: YES
00:01:16.699 Compiler for C supports arguments -Wdeprecated: YES
00:01:16.699 Compiler for C supports arguments -Wformat: YES
00:01:16.699 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:16.699 Compiler for C supports arguments -Wformat-security: NO
00:01:16.699 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:16.699 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:16.699 Compiler for C supports arguments -Wnested-externs: YES
00:01:16.699 Compiler for C supports arguments -Wold-style-definition: YES
00:01:16.699 Compiler for C supports arguments -Wpointer-arith: YES
00:01:16.699 Compiler for C supports arguments -Wsign-compare: YES
00:01:16.699 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:16.699 Compiler for C supports arguments -Wundef: YES
00:01:16.699 Compiler for C supports arguments -Wwrite-strings: YES
00:01:16.699 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:16.699 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:16.699 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:16.699 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:16.699 Program objdump found: YES (/usr/bin/objdump)
00:01:16.699 Compiler for C supports arguments -mavx512f: YES
00:01:16.699 Checking if "AVX512 checking" compiles: YES
00:01:16.699 Fetching value of define "__SSE4_2__" : 1
00:01:16.699 Fetching value of define "__AES__" : 1
00:01:16.699 Fetching value of define "__AVX__" : 1
00:01:16.699 Fetching value of define "__AVX2__" : 1
00:01:16.699 Fetching value of define "__AVX512BW__" : 1
00:01:16.699 Fetching value of define "__AVX512CD__" : 1
00:01:16.699 Fetching value of define "__AVX512DQ__" : 1
00:01:16.699 Fetching value of define "__AVX512F__" : 1
00:01:16.699 Fetching value of define "__AVX512VL__" : 1
00:01:16.699 Fetching value of define "__PCLMUL__" : 1
00:01:16.699 Fetching value of define "__RDRND__" : 1
00:01:16.699 Fetching value of define "__RDSEED__" : 1
00:01:16.699 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:16.699 Fetching value of define "__znver1__" : (undefined)
00:01:16.699 Fetching value of define "__znver2__" : (undefined)
00:01:16.699 Fetching value of define "__znver3__" : (undefined)
00:01:16.699 Fetching value of define "__znver4__" : (undefined)
00:01:16.699 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:16.699 Message: lib/log: Defining dependency "log"
00:01:16.699 Message: lib/kvargs: Defining dependency "kvargs"
00:01:16.699 Message: lib/telemetry: Defining dependency "telemetry"
00:01:16.699 Checking for function "getentropy" : NO
00:01:16.699 Message: lib/eal: Defining dependency "eal"
00:01:16.699 Message: lib/ring: Defining dependency "ring"
00:01:16.699 Message: lib/rcu: Defining dependency "rcu"
00:01:16.699 Message: lib/mempool: Defining dependency "mempool"
00:01:16.699 Message: lib/mbuf: Defining dependency "mbuf"
00:01:16.699 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:16.699 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:16.699 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:16.699 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:16.699 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:16.699 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:16.699 Compiler for C supports arguments -mpclmul: YES
00:01:16.699 Compiler for C supports arguments -maes: YES
00:01:16.699 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:16.699 Compiler for C supports arguments -mavx512bw: YES
00:01:16.699 Compiler for C supports arguments -mavx512dq: YES
00:01:16.699 Compiler for C supports arguments -mavx512vl: YES
00:01:16.699 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:16.699 Compiler for C supports arguments -mavx2: YES
00:01:16.699 Compiler for C supports arguments -mavx: YES
00:01:16.699 Message: lib/net: Defining dependency "net"
00:01:16.699 Message: lib/meter: Defining dependency "meter"
00:01:16.699 Message: lib/ethdev: Defining dependency "ethdev"
00:01:16.699 Message: lib/pci: Defining dependency "pci"
00:01:16.699 Message: lib/cmdline: Defining dependency "cmdline"
00:01:16.699 Message: lib/hash: Defining dependency "hash"
00:01:16.699 Message: lib/timer: Defining dependency "timer"
00:01:16.699 Message: lib/compressdev: Defining dependency "compressdev"
00:01:16.699 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:16.699 Message: lib/dmadev: Defining dependency "dmadev"
00:01:16.699 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:16.699 Message: lib/power: Defining dependency "power"
00:01:16.699 Message: lib/reorder: Defining dependency "reorder"
00:01:16.699 Message: lib/security: Defining dependency "security"
00:01:16.699 Has header "linux/userfaultfd.h" : YES
00:01:16.699 Has header "linux/vduse.h" : YES
00:01:16.699 Message: lib/vhost: Defining dependency "vhost"
00:01:16.699 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:16.699 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:16.699 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:16.699 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:16.699 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:16.699 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:16.699 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:16.699 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:16.699 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:16.699 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:16.699 Program doxygen found: YES (/usr/bin/doxygen)
00:01:16.699 Configuring doxy-api-html.conf using configuration
00:01:16.699 Configuring doxy-api-man.conf using configuration
00:01:16.699 Program mandb found: YES (/usr/bin/mandb)
00:01:16.699 Program sphinx-build found: NO
00:01:16.699 Configuring rte_build_config.h using configuration
00:01:16.699 Message:
00:01:16.699 =================
00:01:16.699 Applications Enabled
00:01:16.699 =================
00:01:16.699
00:01:16.699 apps:
00:01:16.699
00:01:16.699
00:01:16.699 Message:
00:01:16.699 =================
00:01:16.699 Libraries Enabled
00:01:16.699 =================
00:01:16.699
00:01:16.699 libs:
00:01:16.699 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:16.699 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:16.699 cryptodev, dmadev, power, reorder, security, vhost,
00:01:16.699
00:01:16.699 Message:
00:01:16.699 ===============
00:01:16.699 Drivers Enabled
00:01:16.699 ===============
00:01:16.699
00:01:16.699 common:
00:01:16.699
00:01:16.699 bus:
00:01:16.699 pci, vdev,
00:01:16.699 mempool:
00:01:16.699 ring,
00:01:16.699 dma:
00:01:16.699
00:01:16.699 net:
00:01:16.699
00:01:16.699 crypto:
00:01:16.699
00:01:16.699 compress:
00:01:16.699
00:01:16.699 vdpa:
00:01:16.699
00:01:16.699
00:01:16.699 Message:
00:01:16.699 =================
00:01:16.699 Content Skipped
00:01:16.699 =================
00:01:16.699
00:01:16.699 apps:
00:01:16.699 dumpcap: explicitly disabled via build config
00:01:16.699 graph: explicitly disabled via build config
00:01:16.699 pdump: explicitly disabled via build config
00:01:16.699 proc-info: explicitly disabled via build config
00:01:16.699 test-acl: explicitly disabled via build config
00:01:16.699 test-bbdev: explicitly disabled via build config
00:01:16.699 test-cmdline: explicitly disabled via build config
00:01:16.699 test-compress-perf: explicitly disabled via build config
00:01:16.699 test-crypto-perf: explicitly disabled via build config
00:01:16.699 test-dma-perf: explicitly disabled via build config
00:01:16.699 test-eventdev: explicitly disabled via build config
00:01:16.699 test-fib: explicitly disabled via build config
00:01:16.699 test-flow-perf: explicitly disabled via build config
00:01:16.699 test-gpudev: explicitly disabled via build config
00:01:16.699 test-mldev: explicitly disabled via build config
00:01:16.699 test-pipeline: explicitly disabled via build config
00:01:16.699 test-pmd: explicitly disabled via build config
00:01:16.699 test-regex: explicitly disabled via build config
00:01:16.699 test-sad: explicitly disabled via build config
00:01:16.699 test-security-perf: explicitly disabled via build config
00:01:16.699
00:01:16.699 libs:
00:01:16.699 metrics: explicitly disabled via build config
00:01:16.700 acl: explicitly disabled via build config
00:01:16.700 bbdev: explicitly disabled via build config
00:01:16.700 bitratestats: explicitly disabled via build config
00:01:16.700 bpf: explicitly disabled via build config
00:01:16.700 cfgfile: explicitly disabled via build config
00:01:16.700 distributor: explicitly disabled via build config
00:01:16.700 efd: explicitly disabled via build config
00:01:16.700 eventdev: explicitly disabled via build config
00:01:16.700 dispatcher: explicitly disabled via build config
00:01:16.700 gpudev: explicitly disabled via build config
00:01:16.700 gro: explicitly disabled via build config
00:01:16.700 gso: explicitly disabled via build config
00:01:16.700 ip_frag: explicitly disabled via build config
00:01:16.700 jobstats: explicitly disabled via build config
00:01:16.700 latencystats: explicitly disabled via build config
00:01:16.700 lpm: explicitly disabled via build config
00:01:16.700 member: explicitly disabled via build config
00:01:16.700 pcapng: explicitly disabled via build config
00:01:16.700 rawdev: explicitly disabled via build config
00:01:16.700 regexdev: explicitly disabled via build config
00:01:16.700 mldev: explicitly disabled via build config
00:01:16.700 rib: explicitly disabled via build config
00:01:16.700 sched: explicitly disabled via build config
00:01:16.700 stack: explicitly disabled via build config
00:01:16.700 ipsec: explicitly disabled via build config
00:01:16.700 pdcp: explicitly disabled via build config
00:01:16.700 fib: explicitly disabled via build config
00:01:16.700 port: explicitly disabled via build config
00:01:16.700 pdump: explicitly disabled via build config
00:01:16.700 table: explicitly disabled via build config
00:01:16.700 pipeline: explicitly disabled via build config
00:01:16.700 graph: explicitly disabled via build config
00:01:16.700 node: explicitly disabled via build config
00:01:16.700
00:01:16.700 drivers:
00:01:16.700 common/cpt: not in enabled drivers build config
00:01:16.700 common/dpaax: not in enabled drivers build config
00:01:16.700 common/iavf: not in enabled drivers build config
00:01:16.700 common/idpf: not in enabled drivers build config
00:01:16.700 common/mvep: not in enabled drivers build config
00:01:16.700 common/octeontx: not in enabled drivers build config
00:01:16.700 bus/auxiliary: not in enabled drivers build config
00:01:16.700 bus/cdx: not in enabled drivers build config
00:01:16.700 bus/dpaa: not in enabled drivers build config
00:01:16.700 bus/fslmc: not in enabled drivers build config
00:01:16.700 bus/ifpga: not in enabled drivers build config
00:01:16.700 bus/platform: not in enabled drivers build config
00:01:16.700 bus/vmbus: not in enabled drivers build config
00:01:16.700 common/cnxk: not in enabled drivers build config
00:01:16.700 common/mlx5: not in enabled drivers build config
00:01:16.700 common/nfp: not in enabled drivers build config
00:01:16.700 common/qat: not in enabled drivers build config
00:01:16.700 common/sfc_efx: not in enabled drivers build config
00:01:16.700 mempool/bucket: not in enabled drivers build config
00:01:16.700 mempool/cnxk: not in enabled drivers build config
00:01:16.700 mempool/dpaa: not in enabled drivers build config
00:01:16.700 mempool/dpaa2: not in enabled drivers build config
00:01:16.700 mempool/octeontx: not in enabled drivers build config
00:01:16.700 mempool/stack: not in enabled drivers build config
00:01:16.700 dma/cnxk: not in enabled drivers build config
00:01:16.700 dma/dpaa: not in enabled drivers build config
00:01:16.700 dma/dpaa2: not in enabled drivers build config
00:01:16.700 dma/hisilicon: not in enabled drivers build config
00:01:16.700 dma/idxd: not in enabled drivers build config
00:01:16.700 dma/ioat: not in enabled drivers build config
00:01:16.700 dma/skeleton: not in enabled drivers build config
00:01:16.700 net/af_packet: not in enabled drivers build config
00:01:16.700 net/af_xdp: not in enabled drivers build config
00:01:16.700 net/ark: not in enabled drivers build config
00:01:16.700 net/atlantic: not in enabled drivers build config
00:01:16.700 net/avp: not in enabled drivers build config
00:01:16.700 net/axgbe: not in enabled drivers build config
00:01:16.700 net/bnx2x: not in enabled drivers build config
00:01:16.700 net/bnxt: not in enabled drivers build config
00:01:16.700 net/bonding: not in enabled drivers build config
00:01:16.700 net/cnxk: not in enabled drivers build config
00:01:16.700 net/cpfl: not in enabled drivers build config
00:01:16.700 net/cxgbe: not in enabled drivers build config
00:01:16.700 net/dpaa: not in enabled drivers build config
00:01:16.700 net/dpaa2: not in enabled drivers build config
00:01:16.700 net/e1000: not in enabled drivers build config
00:01:16.700 net/ena: not in enabled drivers build config
00:01:16.700 net/enetc: not in enabled drivers build config
00:01:16.700 net/enetfec: not in enabled drivers build config
00:01:16.700 net/enic: not in enabled drivers build config
00:01:16.700 net/failsafe: not in enabled drivers build config
00:01:16.700 net/fm10k: not in enabled drivers build config
00:01:16.700 net/gve: not in enabled drivers build config
00:01:16.700 net/hinic: not in enabled drivers build config
00:01:16.700 net/hns3: not in enabled drivers build config
00:01:16.700 net/i40e: not in enabled drivers build config
00:01:16.700 net/iavf: not in enabled drivers build config
00:01:16.700 net/ice: not in enabled drivers build config
00:01:16.700 net/idpf: not in enabled drivers build config
00:01:16.700 net/igc: not in enabled drivers build config
00:01:16.700 net/ionic: not in enabled drivers build config
00:01:16.700 net/ipn3ke: not in enabled drivers build config
00:01:16.700 net/ixgbe: not in enabled drivers build config
00:01:16.700 net/mana: not in enabled drivers build config
00:01:16.700 net/memif: not in enabled drivers build config
00:01:16.700 net/mlx4: not in enabled drivers build config
00:01:16.700 net/mlx5: not in enabled drivers build config
00:01:16.700 net/mvneta: not in enabled drivers build config
00:01:16.700 net/mvpp2: not in enabled drivers build config
00:01:16.700 net/netvsc: not in enabled drivers build config
00:01:16.700 net/nfb: not in enabled drivers build config
00:01:16.700 net/nfp: not in enabled drivers build config
00:01:16.700 net/ngbe: not in enabled drivers build config
00:01:16.700 net/null: not in enabled drivers build config
00:01:16.700 net/octeontx: not in enabled drivers build config
00:01:16.700 net/octeon_ep: not in enabled drivers build config
00:01:16.700 net/pcap: not in enabled drivers build config
00:01:16.700 net/pfe: not in enabled drivers build config
00:01:16.700 net/qede: not in enabled drivers build config
00:01:16.700 net/ring: not in enabled drivers build config
00:01:16.700 net/sfc: not in enabled drivers build config
00:01:16.700 net/softnic: not in enabled drivers build config
00:01:16.700 net/tap: not in enabled drivers build config
00:01:16.700 net/thunderx: not in enabled drivers build config
00:01:16.700 net/txgbe: not in enabled drivers build config
00:01:16.700 net/vdev_netvsc: not in enabled drivers build config
00:01:16.700 net/vhost: not in enabled drivers build config
00:01:16.700 net/virtio: not in enabled drivers build config
00:01:16.700 net/vmxnet3: not in enabled drivers build config
00:01:16.700 raw/*: missing internal dependency, "rawdev"
00:01:16.700 crypto/armv8: not in enabled drivers build config
00:01:16.700 crypto/bcmfs: not in enabled drivers build config
00:01:16.700 crypto/caam_jr: not in enabled drivers build config
00:01:16.700 crypto/ccp: not in enabled drivers build config
00:01:16.700 crypto/cnxk: not in enabled drivers build config
00:01:16.700 crypto/dpaa_sec: not in enabled drivers build config
00:01:16.700 crypto/dpaa2_sec: not in enabled drivers build config
00:01:16.700 crypto/ipsec_mb: not in enabled drivers build config
00:01:16.700 crypto/mlx5: not in enabled drivers build config
00:01:16.700 crypto/mvsam: not in enabled drivers build config
00:01:16.700 crypto/nitrox: not in enabled drivers build config
00:01:16.700 crypto/null: not in enabled drivers build config
00:01:16.700 crypto/octeontx: not in enabled drivers build config
00:01:16.700 crypto/openssl: not in enabled drivers build config
00:01:16.700 crypto/scheduler: not in enabled drivers build config
00:01:16.700 crypto/uadk: not in enabled drivers build config
00:01:16.700 crypto/virtio: not in enabled drivers build config
00:01:16.700 compress/isal: not in enabled drivers build config
00:01:16.700 compress/mlx5: not in enabled drivers build config
00:01:16.700 compress/octeontx: not in enabled drivers build config
00:01:16.700 compress/zlib: not in enabled drivers build config
00:01:16.700 regex/*: missing internal dependency, "regexdev"
00:01:16.700 ml/*: missing internal dependency, "mldev"
00:01:16.700 vdpa/ifc: not in enabled drivers build config
00:01:16.700 vdpa/mlx5: not in enabled drivers build config
00:01:16.700 vdpa/nfp: not in enabled drivers build config
00:01:16.700 vdpa/sfc: not in enabled drivers build config
00:01:16.700 event/*: missing internal dependency, "eventdev"
00:01:16.700 baseband/*: missing internal dependency, "bbdev"
00:01:16.700 gpu/*: missing internal dependency, "gpudev"
00:01:16.700
00:01:16.700
00:01:16.700 Build targets in project: 85
00:01:16.700
00:01:16.700 DPDK 23.11.0
00:01:16.700
00:01:16.700 User defined options
00:01:16.700 buildtype : debug
00:01:16.700 default_library : shared
00:01:16.700 libdir : lib
00:01:16.700 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build
00:01:16.700 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:16.700 c_link_args :
00:01:16.700 cpu_instruction_set: native
00:01:16.700 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib
00:01:16.700 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib
00:01:16.700 enable_docs : false
00:01:16.700 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:16.700 enable_kmods : false
00:01:16.700 tests : false
00:01:16.700
00:01:16.700 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
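The configuration summary above corresponds to a DPDK meson setup roughly like the following; the option values are copied from the "User defined options" block, while the invocation itself is illustrative, since SPDK's configure drives it internally (the long disable lists are elided here, see above):

    meson setup build-tmp \
        --buildtype=debug --default-library=shared --libdir=lib \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Ddisable_apps='...' -Ddisable_libs='...' \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Dtests=false
    ninja -C build-tmp    # the step the [n/265] lines below report on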
`/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp' 00:01:16.700 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:16.700 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:16.700 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:16.700 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:16.700 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:16.701 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:16.701 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:16.701 [8/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:16.701 [9/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:16.701 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:16.701 [11/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:16.701 [12/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:16.701 [13/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:16.701 [14/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:16.701 [15/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:16.701 [16/265] Linking static target lib/librte_log.a 00:01:16.701 [17/265] Linking static target lib/librte_kvargs.a 00:01:16.701 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:16.701 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:16.701 [20/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:16.701 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:16.701 [22/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:16.701 [23/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:16.701 [24/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:16.701 [25/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:16.701 [26/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:16.701 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:16.701 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:16.701 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:16.701 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:16.701 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:16.701 [32/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:16.701 [33/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:16.701 [34/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:16.701 [35/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:16.701 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:16.701 [37/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:16.701 [38/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:16.701 [39/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:16.701 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 
00:01:16.701 [41/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:16.701 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:16.701 [43/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:16.701 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:16.701 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:16.701 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:16.701 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:16.701 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:16.701 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:16.701 [50/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:16.701 [51/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:16.701 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:16.701 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:16.701 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:16.701 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:16.701 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:16.701 [57/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:16.701 [58/265] Linking static target lib/librte_telemetry.a 00:01:16.701 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:16.701 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:16.701 [61/265] Linking static target lib/librte_ring.a 00:01:16.701 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:16.701 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:16.701 [64/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:16.701 [65/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:16.701 [66/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:16.701 [67/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:16.701 [68/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:16.701 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:16.701 [70/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:16.701 [71/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:16.701 [72/265] Linking static target lib/librte_pci.a 00:01:16.701 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:16.701 [74/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:16.701 [75/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:16.701 [76/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:16.701 [77/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:16.701 [78/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:16.701 [79/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:16.701 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:16.701 [81/265] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:16.701 [82/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:16.701 [83/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:16.701 [84/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:16.701 [85/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:16.701 [86/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:16.701 [87/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:16.701 [88/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:16.701 [89/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:16.701 [90/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:16.701 [91/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:16.701 [92/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.701 [93/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:16.701 [94/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:16.701 [95/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:16.701 [96/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:16.701 [97/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:16.701 [98/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:16.701 [99/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:16.701 [100/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:16.701 [101/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:16.701 [102/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:16.701 [103/265] Linking static target lib/librte_mempool.a 00:01:16.701 [104/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:16.701 [105/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:16.701 [106/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:16.701 [107/265] Linking static target lib/librte_meter.a 00:01:16.701 [108/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:16.701 [109/265] Linking static target lib/librte_rcu.a 00:01:16.701 [110/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:16.701 [111/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:16.701 [112/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:16.701 [113/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:16.701 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:16.701 [115/265] Linking static target lib/librte_net.a 00:01:16.701 [116/265] Linking static target lib/librte_eal.a 00:01:16.701 [117/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:16.701 [118/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:16.701 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:16.701 [120/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.701 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:16.701 [122/265] Generating lib/log.sym_chk 
with a custom command (wrapped by meson to capture output) 00:01:16.701 [123/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.960 [124/265] Linking target lib/librte_log.so.24.0 00:01:16.960 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:16.960 [126/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.960 [127/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:16.960 [128/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:16.960 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:16.960 [130/265] Linking static target lib/librte_mbuf.a 00:01:16.960 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:16.960 [132/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:16.960 [133/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.960 [134/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.960 [135/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:16.960 [136/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.960 [137/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:16.960 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:16.960 [139/265] Linking static target lib/librte_cmdline.a 00:01:16.960 [140/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:16.960 [141/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:16.960 [142/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:16.960 [143/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:16.960 [144/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:16.960 [145/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:16.960 [146/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:16.960 [147/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:16.960 [148/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:16.960 [149/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:16.960 [150/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:16.960 [151/265] Linking static target lib/librte_timer.a 00:01:16.960 [152/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:16.960 [153/265] Linking static target lib/librte_dmadev.a 00:01:16.960 [154/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:16.960 [155/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:16.960 [156/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:16.960 [157/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:16.960 [158/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:16.960 [159/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:16.960 [160/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:16.960 [161/265] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:16.960 [162/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:16.960 [163/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:16.960 [164/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:16.960 [165/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:16.960 [166/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:16.960 [167/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:16.960 [168/265] Linking target lib/librte_telemetry.so.24.0 00:01:16.960 [169/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:16.960 [170/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:16.960 [171/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:16.960 [172/265] Linking target lib/librte_kvargs.so.24.0 00:01:16.960 [173/265] Linking static target lib/librte_compressdev.a 00:01:16.960 [174/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:16.960 [175/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:17.219 [176/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:17.219 [177/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:17.219 [178/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:17.219 [179/265] Linking static target lib/librte_power.a 00:01:17.219 [180/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:17.219 [181/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:17.219 [182/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:17.219 [183/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:17.219 [184/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:17.219 [185/265] Linking static target lib/librte_reorder.a 00:01:17.219 [186/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:17.219 [187/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:17.219 [188/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:17.219 [189/265] Linking static target lib/librte_security.a 00:01:17.219 [190/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:17.219 [191/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:17.219 [192/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:17.219 [193/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:17.219 [194/265] Linking static target lib/librte_hash.a 00:01:17.219 [195/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:17.219 [196/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:17.219 [197/265] Linking static target drivers/librte_bus_vdev.a 00:01:17.219 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:17.219 [199/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:17.219 [200/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:17.219 [201/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 
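The [N/265] counters above are ninja progress ticks over the meson-configured DPDK tree that SPDK carries under dpdk/; "Linking static target" and "Generating lib/X.sym_chk" are meson's archive and symbol-check steps. A minimal sketch of reproducing this stage by hand, assuming a configured checkout (the ninja line matches the backend command this log prints further down; the meson setup step itself is not shown in this log):

    # configure once, then build with the same -j width as this job (72 threads)
    meson setup spdk/dpdk/build-tmp spdk/dpdk
    ninja -C spdk/dpdk/build-tmp -j 72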
00:01:17.219 [202/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:17.219 [203/265] Linking static target drivers/librte_bus_pci.a 00:01:17.219 [204/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:17.477 [205/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:17.477 [206/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:17.477 [207/265] Linking static target drivers/librte_mempool_ring.a 00:01:17.477 [208/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:17.477 [209/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.477 [210/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.477 [211/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:17.477 [212/265] Linking static target lib/librte_cryptodev.a 00:01:17.736 [213/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.736 [214/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.736 [215/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.736 [216/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.736 [217/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.736 [218/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:17.995 [219/265] Linking static target lib/librte_ethdev.a 00:01:17.995 [220/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:17.995 [221/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.995 [222/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.995 [223/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.253 [224/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.820 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:18.820 [226/265] Linking static target lib/librte_vhost.a 00:01:19.755 [227/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.126 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.671 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.603 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.861 [231/265] Linking target lib/librte_eal.so.24.0 00:01:28.861 [232/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:28.861 [233/265] Linking target lib/librte_timer.so.24.0 00:01:28.861 [234/265] Linking target lib/librte_ring.so.24.0 00:01:28.861 [235/265] Linking target lib/librte_meter.so.24.0 00:01:28.861 [236/265] Linking target lib/librte_pci.so.24.0 00:01:28.861 [237/265] Linking target lib/librte_dmadev.so.24.0 00:01:28.861 [238/265] Linking target drivers/librte_bus_vdev.so.24.0 00:01:29.119 [239/265] Generating symbol file 
lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:29.119 [240/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:29.119 [241/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:29.119 [242/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:29.119 [243/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:29.119 [244/265] Linking target lib/librte_rcu.so.24.0 00:01:29.119 [245/265] Linking target lib/librte_mempool.so.24.0 00:01:29.119 [246/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:29.378 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:29.378 [248/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:29.378 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:29.378 [250/265] Linking target lib/librte_mbuf.so.24.0 00:01:29.378 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:29.636 [252/265] Linking target lib/librte_reorder.so.24.0 00:01:29.636 [253/265] Linking target lib/librte_net.so.24.0 00:01:29.636 [254/265] Linking target lib/librte_compressdev.so.24.0 00:01:29.636 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:01:29.636 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:29.636 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:29.636 [258/265] Linking target lib/librte_hash.so.24.0 00:01:29.636 [259/265] Linking target lib/librte_security.so.24.0 00:01:29.636 [260/265] Linking target lib/librte_cmdline.so.24.0 00:01:29.636 [261/265] Linking target lib/librte_ethdev.so.24.0 00:01:29.893 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:29.893 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:29.893 [264/265] Linking target lib/librte_power.so.24.0 00:01:29.893 [265/265] Linking target lib/librte_vhost.so.24.0 00:01:29.893 INFO: autodetecting backend as ninja 00:01:29.893 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 72 00:01:31.267 CC lib/log/log.o 00:01:31.267 CC lib/log/log_flags.o 00:01:31.267 CC lib/log/log_deprecated.o 00:01:31.267 CC lib/ut/ut.o 00:01:31.267 CC lib/ut_mock/mock.o 00:01:31.267 LIB libspdk_log.a 00:01:31.267 LIB libspdk_ut.a 00:01:31.267 LIB libspdk_ut_mock.a 00:01:31.267 SO libspdk_ut.so.2.0 00:01:31.267 SO libspdk_log.so.7.0 00:01:31.267 SO libspdk_ut_mock.so.6.0 00:01:31.267 SYMLINK libspdk_ut.so 00:01:31.267 SYMLINK libspdk_log.so 00:01:31.267 SYMLINK libspdk_ut_mock.so 00:01:31.525 CC lib/dma/dma.o 00:01:31.525 CXX lib/trace_parser/trace.o 00:01:31.525 CC lib/ioat/ioat.o 00:01:31.525 CC lib/util/base64.o 00:01:31.525 CC lib/util/bit_array.o 00:01:31.783 CC lib/util/crc32.o 00:01:31.783 CC lib/util/cpuset.o 00:01:31.783 CC lib/util/crc16.o 00:01:31.783 CC lib/util/crc32_ieee.o 00:01:31.783 CC lib/util/crc32c.o 00:01:31.783 CC lib/util/crc64.o 00:01:31.783 CC lib/util/dif.o 00:01:31.783 CC lib/util/fd.o 00:01:31.783 CC lib/util/file.o 00:01:31.783 CC lib/util/hexlify.o 00:01:31.783 CC lib/util/iov.o 00:01:31.783 CC lib/util/strerror_tls.o 00:01:31.783 CC lib/util/math.o 00:01:31.783 CC lib/util/pipe.o 00:01:31.783 CC lib/util/string.o 00:01:31.783 CC lib/util/fd_group.o 
00:01:31.783 CC lib/util/uuid.o 00:01:31.783 CC lib/util/xor.o 00:01:31.783 CC lib/util/zipf.o 00:01:31.783 CC lib/vfio_user/host/vfio_user_pci.o 00:01:31.783 CC lib/vfio_user/host/vfio_user.o 00:01:31.783 LIB libspdk_dma.a 00:01:31.783 SO libspdk_dma.so.4.0 00:01:31.783 SYMLINK libspdk_dma.so 00:01:31.783 LIB libspdk_ioat.a 00:01:32.040 SO libspdk_ioat.so.7.0 00:01:32.040 SYMLINK libspdk_ioat.so 00:01:32.040 LIB libspdk_vfio_user.a 00:01:32.040 SO libspdk_vfio_user.so.5.0 00:01:32.040 LIB libspdk_util.a 00:01:32.040 SYMLINK libspdk_vfio_user.so 00:01:32.040 SO libspdk_util.so.9.0 00:01:32.298 SYMLINK libspdk_util.so 00:01:32.298 LIB libspdk_trace_parser.a 00:01:32.298 SO libspdk_trace_parser.so.5.0 00:01:32.556 SYMLINK libspdk_trace_parser.so 00:01:32.556 CC lib/conf/conf.o 00:01:32.556 CC lib/json/json_parse.o 00:01:32.556 CC lib/json/json_write.o 00:01:32.556 CC lib/json/json_util.o 00:01:32.556 CC lib/idxd/idxd.o 00:01:32.556 CC lib/idxd/idxd_user.o 00:01:32.556 CC lib/rdma/common.o 00:01:32.556 CC lib/rdma/rdma_verbs.o 00:01:32.556 CC lib/vmd/led.o 00:01:32.556 CC lib/vmd/vmd.o 00:01:32.556 CC lib/env_dpdk/env.o 00:01:32.556 CC lib/env_dpdk/memory.o 00:01:32.556 CC lib/env_dpdk/pci.o 00:01:32.556 CC lib/env_dpdk/init.o 00:01:32.556 CC lib/env_dpdk/pci_ioat.o 00:01:32.556 CC lib/env_dpdk/threads.o 00:01:32.556 CC lib/env_dpdk/pci_virtio.o 00:01:32.556 CC lib/env_dpdk/pci_idxd.o 00:01:32.556 CC lib/env_dpdk/pci_vmd.o 00:01:32.556 CC lib/env_dpdk/pci_event.o 00:01:32.556 CC lib/env_dpdk/sigbus_handler.o 00:01:32.556 CC lib/env_dpdk/pci_dpdk.o 00:01:32.556 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:32.556 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:32.814 LIB libspdk_conf.a 00:01:32.814 SO libspdk_conf.so.6.0 00:01:32.814 LIB libspdk_rdma.a 00:01:32.814 LIB libspdk_json.a 00:01:33.072 SO libspdk_rdma.so.6.0 00:01:33.072 SO libspdk_json.so.6.0 00:01:33.072 SYMLINK libspdk_conf.so 00:01:33.072 SYMLINK libspdk_rdma.so 00:01:33.072 SYMLINK libspdk_json.so 00:01:33.072 LIB libspdk_idxd.a 00:01:33.072 SO libspdk_idxd.so.12.0 00:01:33.072 SYMLINK libspdk_idxd.so 00:01:33.072 LIB libspdk_vmd.a 00:01:33.330 SO libspdk_vmd.so.6.0 00:01:33.330 SYMLINK libspdk_vmd.so 00:01:33.330 CC lib/jsonrpc/jsonrpc_server.o 00:01:33.330 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:33.330 CC lib/jsonrpc/jsonrpc_client.o 00:01:33.330 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:33.588 LIB libspdk_jsonrpc.a 00:01:33.588 SO libspdk_jsonrpc.so.6.0 00:01:33.588 SYMLINK libspdk_jsonrpc.so 00:01:33.847 LIB libspdk_env_dpdk.a 00:01:33.847 SO libspdk_env_dpdk.so.14.0 00:01:33.847 SYMLINK libspdk_env_dpdk.so 00:01:33.847 CC lib/rpc/rpc.o 00:01:34.107 LIB libspdk_rpc.a 00:01:34.107 SO libspdk_rpc.so.6.0 00:01:34.366 SYMLINK libspdk_rpc.so 00:01:34.624 CC lib/notify/notify_rpc.o 00:01:34.624 CC lib/notify/notify.o 00:01:34.624 CC lib/trace/trace.o 00:01:34.624 CC lib/trace/trace_rpc.o 00:01:34.624 CC lib/trace/trace_flags.o 00:01:34.624 CC lib/keyring/keyring.o 00:01:34.624 CC lib/keyring/keyring_rpc.o 00:01:34.882 LIB libspdk_notify.a 00:01:34.882 SO libspdk_notify.so.6.0 00:01:34.882 LIB libspdk_keyring.a 00:01:34.882 LIB libspdk_trace.a 00:01:34.882 SO libspdk_keyring.so.1.0 00:01:34.882 SYMLINK libspdk_notify.so 00:01:34.882 SO libspdk_trace.so.10.0 00:01:34.882 SYMLINK libspdk_keyring.so 00:01:34.882 SYMLINK libspdk_trace.so 00:01:35.139 CC lib/sock/sock_rpc.o 00:01:35.139 CC lib/sock/sock.o 00:01:35.397 CC lib/thread/thread.o 00:01:35.397 CC lib/thread/iobuf.o 00:01:35.655 LIB libspdk_sock.a 00:01:35.655 SO libspdk_sock.so.9.0 
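Each "LIB libspdk_X.a" above is paired with "SO libspdk_X.so.N.M" and "SYMLINK libspdk_X.so": a static archive, a versioned shared object, and an unversioned symlink for link-time resolution. A generic sketch of that layout, illustrative only (SPDK's actual Makefile rules and link flags are not visible in this log):

    # hypothetical, reusing libspdk_log's objects from the CC lines above
    gcc -shared -fPIC -Wl,-soname,libspdk_log.so.7 \
        -o libspdk_log.so.7.0 log.o log_flags.o log_deprecated.o
    ln -sf libspdk_log.so.7.0 libspdk_log.so   # what the SYMLINK step produces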
00:01:35.655 SYMLINK libspdk_sock.so 00:01:35.913 CC lib/nvme/nvme_ctrlr.o 00:01:35.913 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:35.913 CC lib/nvme/nvme_fabric.o 00:01:35.913 CC lib/nvme/nvme_ns_cmd.o 00:01:35.913 CC lib/nvme/nvme_ns.o 00:01:35.913 CC lib/nvme/nvme_pcie_common.o 00:01:35.913 CC lib/nvme/nvme_pcie.o 00:01:35.913 CC lib/nvme/nvme_qpair.o 00:01:35.913 CC lib/nvme/nvme.o 00:01:35.913 CC lib/nvme/nvme_transport.o 00:01:35.913 CC lib/nvme/nvme_quirks.o 00:01:35.913 CC lib/nvme/nvme_discovery.o 00:01:35.913 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:35.913 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:35.913 CC lib/nvme/nvme_tcp.o 00:01:35.913 CC lib/nvme/nvme_opal.o 00:01:35.913 CC lib/nvme/nvme_io_msg.o 00:01:35.913 CC lib/nvme/nvme_poll_group.o 00:01:35.913 CC lib/nvme/nvme_zns.o 00:01:35.913 CC lib/nvme/nvme_stubs.o 00:01:35.913 CC lib/nvme/nvme_auth.o 00:01:35.913 CC lib/nvme/nvme_cuse.o 00:01:35.913 CC lib/nvme/nvme_rdma.o 00:01:36.480 LIB libspdk_thread.a 00:01:36.480 SO libspdk_thread.so.10.0 00:01:36.480 SYMLINK libspdk_thread.so 00:01:36.737 CC lib/virtio/virtio.o 00:01:36.737 CC lib/virtio/virtio_vhost_user.o 00:01:36.737 CC lib/virtio/virtio_vfio_user.o 00:01:36.737 CC lib/virtio/virtio_pci.o 00:01:36.737 CC lib/blob/blobstore.o 00:01:36.737 CC lib/blob/blob_bs_dev.o 00:01:36.737 CC lib/blob/request.o 00:01:36.737 CC lib/blob/zeroes.o 00:01:36.737 CC lib/accel/accel.o 00:01:36.737 CC lib/accel/accel_rpc.o 00:01:36.737 CC lib/accel/accel_sw.o 00:01:36.737 CC lib/init/json_config.o 00:01:36.737 CC lib/init/rpc.o 00:01:36.737 CC lib/init/subsystem.o 00:01:36.737 CC lib/init/subsystem_rpc.o 00:01:36.995 LIB libspdk_init.a 00:01:36.995 LIB libspdk_virtio.a 00:01:36.995 SO libspdk_init.so.5.0 00:01:36.995 SO libspdk_virtio.so.7.0 00:01:37.254 SYMLINK libspdk_init.so 00:01:37.254 SYMLINK libspdk_virtio.so 00:01:37.513 CC lib/event/app.o 00:01:37.513 CC lib/event/reactor.o 00:01:37.513 CC lib/event/app_rpc.o 00:01:37.513 CC lib/event/log_rpc.o 00:01:37.513 CC lib/event/scheduler_static.o 00:01:37.513 LIB libspdk_accel.a 00:01:37.513 SO libspdk_accel.so.15.0 00:01:37.772 LIB libspdk_nvme.a 00:01:37.772 SYMLINK libspdk_accel.so 00:01:37.772 SO libspdk_nvme.so.13.0 00:01:37.772 LIB libspdk_event.a 00:01:37.772 SO libspdk_event.so.13.0 00:01:38.030 SYMLINK libspdk_event.so 00:01:38.030 CC lib/bdev/bdev.o 00:01:38.030 CC lib/bdev/bdev_zone.o 00:01:38.030 CC lib/bdev/bdev_rpc.o 00:01:38.030 CC lib/bdev/scsi_nvme.o 00:01:38.030 CC lib/bdev/part.o 00:01:38.030 SYMLINK libspdk_nvme.so 00:01:38.967 LIB libspdk_blob.a 00:01:38.967 SO libspdk_blob.so.11.0 00:01:38.967 SYMLINK libspdk_blob.so 00:01:39.227 CC lib/blobfs/blobfs.o 00:01:39.227 CC lib/blobfs/tree.o 00:01:39.227 CC lib/lvol/lvol.o 00:01:39.798 LIB libspdk_bdev.a 00:01:39.798 LIB libspdk_blobfs.a 00:01:40.057 SO libspdk_blobfs.so.10.0 00:01:40.057 SO libspdk_bdev.so.15.0 00:01:40.057 LIB libspdk_lvol.a 00:01:40.057 SYMLINK libspdk_blobfs.so 00:01:40.057 SO libspdk_lvol.so.10.0 00:01:40.057 SYMLINK libspdk_bdev.so 00:01:40.057 SYMLINK libspdk_lvol.so 00:01:40.321 CC lib/scsi/dev.o 00:01:40.321 CC lib/scsi/port.o 00:01:40.321 CC lib/scsi/lun.o 00:01:40.321 CC lib/nvmf/ctrlr.o 00:01:40.321 CC lib/scsi/scsi.o 00:01:40.321 CC lib/nvmf/ctrlr_discovery.o 00:01:40.321 CC lib/scsi/scsi_bdev.o 00:01:40.321 CC lib/scsi/scsi_rpc.o 00:01:40.321 CC lib/nvmf/ctrlr_bdev.o 00:01:40.321 CC lib/scsi/scsi_pr.o 00:01:40.321 CC lib/nvmf/subsystem.o 00:01:40.321 CC lib/scsi/task.o 00:01:40.321 CC lib/nvmf/nvmf.o 00:01:40.321 CC lib/nvmf/nvmf_rpc.o 00:01:40.321 CC 
lib/nvmf/transport.o 00:01:40.321 CC lib/nvmf/tcp.o 00:01:40.321 CC lib/nvmf/stubs.o 00:01:40.321 CC lib/nbd/nbd.o 00:01:40.321 CC lib/nvmf/rdma.o 00:01:40.321 CC lib/nbd/nbd_rpc.o 00:01:40.321 CC lib/nvmf/auth.o 00:01:40.321 CC lib/ftl/ftl_core.o 00:01:40.321 CC lib/ftl/ftl_init.o 00:01:40.321 CC lib/ftl/ftl_layout.o 00:01:40.321 CC lib/ftl/ftl_debug.o 00:01:40.321 CC lib/ftl/ftl_io.o 00:01:40.321 CC lib/ftl/ftl_sb.o 00:01:40.321 CC lib/ftl/ftl_l2p.o 00:01:40.321 CC lib/ftl/ftl_l2p_flat.o 00:01:40.321 CC lib/ftl/ftl_band_ops.o 00:01:40.321 CC lib/ftl/ftl_nv_cache.o 00:01:40.321 CC lib/ftl/ftl_band.o 00:01:40.321 CC lib/ublk/ublk.o 00:01:40.321 CC lib/ublk/ublk_rpc.o 00:01:40.321 CC lib/ftl/ftl_reloc.o 00:01:40.321 CC lib/ftl/ftl_writer.o 00:01:40.321 CC lib/ftl/ftl_rq.o 00:01:40.321 CC lib/ftl/ftl_l2p_cache.o 00:01:40.321 CC lib/ftl/ftl_p2l.o 00:01:40.321 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:40.321 CC lib/ftl/mngt/ftl_mngt.o 00:01:40.321 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:40.321 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:40.321 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:40.321 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:40.321 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:40.321 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:40.321 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:40.321 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:40.321 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:40.321 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:40.321 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:40.321 CC lib/ftl/utils/ftl_conf.o 00:01:40.321 CC lib/ftl/utils/ftl_mempool.o 00:01:40.321 CC lib/ftl/utils/ftl_md.o 00:01:40.321 CC lib/ftl/utils/ftl_bitmap.o 00:01:40.321 CC lib/ftl/utils/ftl_property.o 00:01:40.321 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:40.321 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:40.321 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:40.321 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:40.321 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:40.321 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:40.321 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:40.321 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:40.321 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:40.321 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:40.321 CC lib/ftl/base/ftl_base_bdev.o 00:01:40.321 CC lib/ftl/base/ftl_base_dev.o 00:01:40.321 CC lib/ftl/ftl_trace.o 00:01:40.959 LIB libspdk_nbd.a 00:01:40.959 SO libspdk_nbd.so.7.0 00:01:40.959 LIB libspdk_scsi.a 00:01:40.959 SYMLINK libspdk_nbd.so 00:01:41.218 SO libspdk_scsi.so.9.0 00:01:41.218 LIB libspdk_ublk.a 00:01:41.218 SO libspdk_ublk.so.3.0 00:01:41.218 SYMLINK libspdk_scsi.so 00:01:41.218 SYMLINK libspdk_ublk.so 00:01:41.218 LIB libspdk_ftl.a 00:01:41.476 SO libspdk_ftl.so.9.0 00:01:41.476 CC lib/iscsi/conn.o 00:01:41.476 CC lib/iscsi/iscsi.o 00:01:41.476 CC lib/iscsi/md5.o 00:01:41.476 CC lib/iscsi/init_grp.o 00:01:41.476 CC lib/iscsi/portal_grp.o 00:01:41.476 CC lib/iscsi/param.o 00:01:41.476 CC lib/iscsi/iscsi_subsystem.o 00:01:41.476 CC lib/iscsi/tgt_node.o 00:01:41.476 CC lib/iscsi/task.o 00:01:41.476 CC lib/iscsi/iscsi_rpc.o 00:01:41.476 CC lib/vhost/vhost.o 00:01:41.476 CC lib/vhost/vhost_scsi.o 00:01:41.476 CC lib/vhost/vhost_rpc.o 00:01:41.476 CC lib/vhost/rte_vhost_user.o 00:01:41.476 CC lib/vhost/vhost_blk.o 00:01:41.734 SYMLINK libspdk_ftl.so 00:01:42.301 LIB libspdk_nvmf.a 00:01:42.301 SO libspdk_nvmf.so.18.0 00:01:42.301 LIB libspdk_vhost.a 00:01:42.301 SYMLINK libspdk_nvmf.so 00:01:42.301 SO libspdk_vhost.so.8.0 00:01:42.558 SYMLINK libspdk_vhost.so 00:01:42.558 LIB libspdk_iscsi.a 00:01:42.558 SO libspdk_iscsi.so.8.0 00:01:42.816 SYMLINK 
libspdk_iscsi.so 00:01:43.383 CC module/env_dpdk/env_dpdk_rpc.o 00:01:43.383 CC module/blob/bdev/blob_bdev.o 00:01:43.383 CC module/accel/ioat/accel_ioat_rpc.o 00:01:43.383 CC module/accel/ioat/accel_ioat.o 00:01:43.383 CC module/scheduler/gscheduler/gscheduler.o 00:01:43.383 LIB libspdk_env_dpdk_rpc.a 00:01:43.383 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:43.383 CC module/keyring/file/keyring.o 00:01:43.383 CC module/keyring/file/keyring_rpc.o 00:01:43.383 CC module/sock/posix/posix.o 00:01:43.383 CC module/accel/iaa/accel_iaa.o 00:01:43.383 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:43.383 CC module/accel/iaa/accel_iaa_rpc.o 00:01:43.383 CC module/accel/dsa/accel_dsa_rpc.o 00:01:43.383 CC module/accel/dsa/accel_dsa.o 00:01:43.383 CC module/accel/error/accel_error_rpc.o 00:01:43.383 CC module/accel/error/accel_error.o 00:01:43.383 SO libspdk_env_dpdk_rpc.so.6.0 00:01:43.641 SYMLINK libspdk_env_dpdk_rpc.so 00:01:43.641 LIB libspdk_keyring_file.a 00:01:43.641 LIB libspdk_scheduler_gscheduler.a 00:01:43.641 LIB libspdk_scheduler_dpdk_governor.a 00:01:43.641 LIB libspdk_accel_ioat.a 00:01:43.641 LIB libspdk_scheduler_dynamic.a 00:01:43.641 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:43.641 SO libspdk_keyring_file.so.1.0 00:01:43.641 SO libspdk_scheduler_gscheduler.so.4.0 00:01:43.641 LIB libspdk_accel_error.a 00:01:43.641 SO libspdk_scheduler_dynamic.so.4.0 00:01:43.641 LIB libspdk_accel_iaa.a 00:01:43.641 SO libspdk_accel_ioat.so.6.0 00:01:43.641 LIB libspdk_blob_bdev.a 00:01:43.641 SYMLINK libspdk_keyring_file.so 00:01:43.641 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:43.641 LIB libspdk_accel_dsa.a 00:01:43.641 SO libspdk_accel_error.so.2.0 00:01:43.641 SO libspdk_accel_iaa.so.3.0 00:01:43.641 SO libspdk_blob_bdev.so.11.0 00:01:43.641 SYMLINK libspdk_scheduler_gscheduler.so 00:01:43.641 SYMLINK libspdk_scheduler_dynamic.so 00:01:43.641 SYMLINK libspdk_accel_ioat.so 00:01:43.641 SO libspdk_accel_dsa.so.5.0 00:01:43.641 SYMLINK libspdk_accel_error.so 00:01:43.641 SYMLINK libspdk_blob_bdev.so 00:01:43.905 SYMLINK libspdk_accel_iaa.so 00:01:43.905 SYMLINK libspdk_accel_dsa.so 00:01:44.164 LIB libspdk_sock_posix.a 00:01:44.164 SO libspdk_sock_posix.so.6.0 00:01:44.164 SYMLINK libspdk_sock_posix.so 00:01:44.164 CC module/bdev/raid/bdev_raid.o 00:01:44.164 CC module/bdev/raid/bdev_raid_rpc.o 00:01:44.164 CC module/bdev/raid/raid0.o 00:01:44.164 CC module/bdev/raid/concat.o 00:01:44.164 CC module/bdev/raid/raid1.o 00:01:44.164 CC module/bdev/raid/bdev_raid_sb.o 00:01:44.164 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:44.164 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:44.164 CC module/bdev/delay/vbdev_delay.o 00:01:44.164 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:44.164 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:44.164 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:44.164 CC module/bdev/lvol/vbdev_lvol.o 00:01:44.164 CC module/bdev/malloc/bdev_malloc.o 00:01:44.164 CC module/bdev/null/bdev_null_rpc.o 00:01:44.164 CC module/blobfs/bdev/blobfs_bdev.o 00:01:44.164 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:44.164 CC module/bdev/null/bdev_null.o 00:01:44.164 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:44.164 CC module/bdev/passthru/vbdev_passthru.o 00:01:44.164 CC module/bdev/error/vbdev_error.o 00:01:44.164 CC module/bdev/error/vbdev_error_rpc.o 00:01:44.164 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:44.164 CC module/bdev/iscsi/bdev_iscsi.o 00:01:44.164 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:44.164 CC module/bdev/zone_block/vbdev_zone_block.o 
00:01:44.164 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:44.164 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:44.164 CC module/bdev/nvme/nvme_rpc.o 00:01:44.164 CC module/bdev/nvme/bdev_nvme.o 00:01:44.164 CC module/bdev/gpt/gpt.o 00:01:44.164 CC module/bdev/nvme/bdev_mdns_client.o 00:01:44.164 CC module/bdev/gpt/vbdev_gpt.o 00:01:44.164 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:44.164 CC module/bdev/nvme/vbdev_opal.o 00:01:44.164 CC module/bdev/aio/bdev_aio.o 00:01:44.164 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:44.164 CC module/bdev/aio/bdev_aio_rpc.o 00:01:44.164 CC module/bdev/ftl/bdev_ftl.o 00:01:44.164 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:44.164 CC module/bdev/split/vbdev_split.o 00:01:44.164 CC module/bdev/split/vbdev_split_rpc.o 00:01:44.426 LIB libspdk_blobfs_bdev.a 00:01:44.426 SO libspdk_blobfs_bdev.so.6.0 00:01:44.426 LIB libspdk_bdev_split.a 00:01:44.426 LIB libspdk_bdev_error.a 00:01:44.426 LIB libspdk_bdev_null.a 00:01:44.426 LIB libspdk_bdev_gpt.a 00:01:44.426 SYMLINK libspdk_blobfs_bdev.so 00:01:44.426 SO libspdk_bdev_error.so.6.0 00:01:44.426 SO libspdk_bdev_split.so.6.0 00:01:44.426 LIB libspdk_bdev_ftl.a 00:01:44.426 LIB libspdk_bdev_passthru.a 00:01:44.426 SO libspdk_bdev_null.so.6.0 00:01:44.690 SO libspdk_bdev_gpt.so.6.0 00:01:44.690 LIB libspdk_bdev_delay.a 00:01:44.690 LIB libspdk_bdev_iscsi.a 00:01:44.690 SO libspdk_bdev_ftl.so.6.0 00:01:44.690 LIB libspdk_bdev_aio.a 00:01:44.690 SO libspdk_bdev_passthru.so.6.0 00:01:44.690 SO libspdk_bdev_iscsi.so.6.0 00:01:44.690 SYMLINK libspdk_bdev_error.so 00:01:44.690 SO libspdk_bdev_delay.so.6.0 00:01:44.690 SYMLINK libspdk_bdev_split.so 00:01:44.690 SYMLINK libspdk_bdev_null.so 00:01:44.690 LIB libspdk_bdev_zone_block.a 00:01:44.690 SO libspdk_bdev_aio.so.6.0 00:01:44.690 SYMLINK libspdk_bdev_gpt.so 00:01:44.690 SYMLINK libspdk_bdev_ftl.so 00:01:44.690 SO libspdk_bdev_zone_block.so.6.0 00:01:44.690 LIB libspdk_bdev_malloc.a 00:01:44.690 SYMLINK libspdk_bdev_passthru.so 00:01:44.690 SYMLINK libspdk_bdev_iscsi.so 00:01:44.690 SYMLINK libspdk_bdev_delay.so 00:01:44.690 LIB libspdk_bdev_virtio.a 00:01:44.690 LIB libspdk_bdev_lvol.a 00:01:44.690 SYMLINK libspdk_bdev_aio.so 00:01:44.690 SO libspdk_bdev_malloc.so.6.0 00:01:44.690 SYMLINK libspdk_bdev_zone_block.so 00:01:44.690 SO libspdk_bdev_virtio.so.6.0 00:01:44.690 SO libspdk_bdev_lvol.so.6.0 00:01:44.690 SYMLINK libspdk_bdev_malloc.so 00:01:44.690 SYMLINK libspdk_bdev_virtio.so 00:01:44.690 SYMLINK libspdk_bdev_lvol.so 00:01:44.950 LIB libspdk_bdev_raid.a 00:01:45.209 SO libspdk_bdev_raid.so.6.0 00:01:45.209 SYMLINK libspdk_bdev_raid.so 00:01:46.148 LIB libspdk_bdev_nvme.a 00:01:46.148 SO libspdk_bdev_nvme.so.7.0 00:01:46.148 SYMLINK libspdk_bdev_nvme.so 00:01:46.717 CC module/event/subsystems/vmd/vmd.o 00:01:46.717 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:46.717 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:46.717 CC module/event/subsystems/sock/sock.o 00:01:46.717 CC module/event/subsystems/scheduler/scheduler.o 00:01:46.717 CC module/event/subsystems/keyring/keyring.o 00:01:46.717 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:46.717 CC module/event/subsystems/iobuf/iobuf.o 00:01:46.977 LIB libspdk_event_keyring.a 00:01:46.977 LIB libspdk_event_scheduler.a 00:01:46.977 LIB libspdk_event_vhost_blk.a 00:01:46.977 LIB libspdk_event_vmd.a 00:01:46.977 LIB libspdk_event_iobuf.a 00:01:46.977 LIB libspdk_event_sock.a 00:01:46.977 SO libspdk_event_keyring.so.1.0 00:01:46.977 SO libspdk_event_vhost_blk.so.3.0 00:01:46.977 SO 
libspdk_event_scheduler.so.4.0 00:01:46.977 SO libspdk_event_iobuf.so.3.0 00:01:46.977 SO libspdk_event_sock.so.5.0 00:01:46.977 SO libspdk_event_vmd.so.6.0 00:01:46.977 SYMLINK libspdk_event_keyring.so 00:01:46.977 SYMLINK libspdk_event_vhost_blk.so 00:01:46.977 SYMLINK libspdk_event_sock.so 00:01:46.977 SYMLINK libspdk_event_scheduler.so 00:01:46.977 SYMLINK libspdk_event_iobuf.so 00:01:46.977 SYMLINK libspdk_event_vmd.so 00:01:47.237 CC module/event/subsystems/accel/accel.o 00:01:47.497 LIB libspdk_event_accel.a 00:01:47.497 SO libspdk_event_accel.so.6.0 00:01:47.497 SYMLINK libspdk_event_accel.so 00:01:48.063 CC module/event/subsystems/bdev/bdev.o 00:01:48.063 LIB libspdk_event_bdev.a 00:01:48.063 SO libspdk_event_bdev.so.6.0 00:01:48.064 SYMLINK libspdk_event_bdev.so 00:01:48.632 CC module/event/subsystems/scsi/scsi.o 00:01:48.632 CC module/event/subsystems/nbd/nbd.o 00:01:48.632 CC module/event/subsystems/ublk/ublk.o 00:01:48.632 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:48.632 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:48.632 LIB libspdk_event_scsi.a 00:01:48.632 LIB libspdk_event_nbd.a 00:01:48.632 LIB libspdk_event_ublk.a 00:01:48.632 SO libspdk_event_scsi.so.6.0 00:01:48.632 SO libspdk_event_nbd.so.6.0 00:01:48.632 SO libspdk_event_ublk.so.3.0 00:01:48.632 LIB libspdk_event_nvmf.a 00:01:48.632 SYMLINK libspdk_event_scsi.so 00:01:48.632 SYMLINK libspdk_event_nbd.so 00:01:48.891 SO libspdk_event_nvmf.so.6.0 00:01:48.891 SYMLINK libspdk_event_ublk.so 00:01:48.891 SYMLINK libspdk_event_nvmf.so 00:01:49.149 CC module/event/subsystems/iscsi/iscsi.o 00:01:49.149 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:49.149 LIB libspdk_event_vhost_scsi.a 00:01:49.149 LIB libspdk_event_iscsi.a 00:01:49.149 SO libspdk_event_iscsi.so.6.0 00:01:49.149 SO libspdk_event_vhost_scsi.so.3.0 00:01:49.410 SYMLINK libspdk_event_iscsi.so 00:01:49.410 SYMLINK libspdk_event_vhost_scsi.so 00:01:49.410 SO libspdk.so.6.0 00:01:49.410 SYMLINK libspdk.so 00:01:49.979 CC app/spdk_lspci/spdk_lspci.o 00:01:49.979 CXX app/trace/trace.o 00:01:49.979 CC app/spdk_nvme_identify/identify.o 00:01:49.979 CC app/trace_record/trace_record.o 00:01:49.979 TEST_HEADER include/spdk/accel.h 00:01:49.979 TEST_HEADER include/spdk/accel_module.h 00:01:49.979 TEST_HEADER include/spdk/assert.h 00:01:49.979 TEST_HEADER include/spdk/barrier.h 00:01:49.979 CC app/spdk_nvme_perf/perf.o 00:01:49.979 TEST_HEADER include/spdk/base64.h 00:01:49.979 TEST_HEADER include/spdk/bdev.h 00:01:49.979 TEST_HEADER include/spdk/bdev_module.h 00:01:49.979 CC app/spdk_top/spdk_top.o 00:01:49.979 CC app/spdk_nvme_discover/discovery_aer.o 00:01:49.979 TEST_HEADER include/spdk/bdev_zone.h 00:01:49.979 TEST_HEADER include/spdk/bit_array.h 00:01:49.979 TEST_HEADER include/spdk/bit_pool.h 00:01:49.979 TEST_HEADER include/spdk/blob_bdev.h 00:01:49.979 CC test/rpc_client/rpc_client_test.o 00:01:49.979 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:49.979 TEST_HEADER include/spdk/blobfs.h 00:01:49.979 TEST_HEADER include/spdk/blob.h 00:01:49.979 TEST_HEADER include/spdk/conf.h 00:01:49.979 TEST_HEADER include/spdk/config.h 00:01:49.979 TEST_HEADER include/spdk/cpuset.h 00:01:49.979 TEST_HEADER include/spdk/crc16.h 00:01:49.979 TEST_HEADER include/spdk/crc32.h 00:01:49.979 TEST_HEADER include/spdk/crc64.h 00:01:49.979 TEST_HEADER include/spdk/dif.h 00:01:49.979 TEST_HEADER include/spdk/dma.h 00:01:49.979 TEST_HEADER include/spdk/endian.h 00:01:49.979 TEST_HEADER include/spdk/env_dpdk.h 00:01:49.979 CC app/nvmf_tgt/nvmf_main.o 00:01:49.979 
TEST_HEADER include/spdk/env.h 00:01:49.979 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:49.979 CC app/vhost/vhost.o 00:01:49.979 TEST_HEADER include/spdk/event.h 00:01:49.979 CC app/iscsi_tgt/iscsi_tgt.o 00:01:49.979 TEST_HEADER include/spdk/fd_group.h 00:01:49.979 CC app/spdk_dd/spdk_dd.o 00:01:49.979 TEST_HEADER include/spdk/fd.h 00:01:49.979 TEST_HEADER include/spdk/file.h 00:01:49.979 TEST_HEADER include/spdk/ftl.h 00:01:49.979 TEST_HEADER include/spdk/gpt_spec.h 00:01:49.979 TEST_HEADER include/spdk/hexlify.h 00:01:49.979 TEST_HEADER include/spdk/histogram_data.h 00:01:49.979 TEST_HEADER include/spdk/idxd.h 00:01:49.979 TEST_HEADER include/spdk/idxd_spec.h 00:01:49.979 TEST_HEADER include/spdk/init.h 00:01:49.979 CC app/spdk_tgt/spdk_tgt.o 00:01:49.979 TEST_HEADER include/spdk/ioat.h 00:01:49.979 TEST_HEADER include/spdk/ioat_spec.h 00:01:49.979 TEST_HEADER include/spdk/iscsi_spec.h 00:01:49.979 TEST_HEADER include/spdk/json.h 00:01:49.979 TEST_HEADER include/spdk/jsonrpc.h 00:01:49.979 CC test/event/event_perf/event_perf.o 00:01:49.979 CC test/event/reactor_perf/reactor_perf.o 00:01:49.980 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:49.980 TEST_HEADER include/spdk/keyring.h 00:01:49.980 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:49.980 CC examples/nvme/hotplug/hotplug.o 00:01:49.980 CC examples/nvme/arbitration/arbitration.o 00:01:49.980 CC examples/nvme/hello_world/hello_world.o 00:01:49.980 CC test/app/jsoncat/jsoncat.o 00:01:49.980 TEST_HEADER include/spdk/keyring_module.h 00:01:49.980 TEST_HEADER include/spdk/likely.h 00:01:49.980 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:49.980 TEST_HEADER include/spdk/log.h 00:01:49.980 CC examples/nvme/reconnect/reconnect.o 00:01:49.980 CC test/nvme/reset/reset.o 00:01:49.980 TEST_HEADER include/spdk/lvol.h 00:01:49.980 CC test/nvme/overhead/overhead.o 00:01:49.980 CC test/event/reactor/reactor.o 00:01:49.980 CC test/nvme/reserve/reserve.o 00:01:49.980 CC examples/ioat/verify/verify.o 00:01:49.980 CC test/app/histogram_perf/histogram_perf.o 00:01:49.980 CC test/env/pci/pci_ut.o 00:01:49.980 CC app/fio/nvme/fio_plugin.o 00:01:49.980 TEST_HEADER include/spdk/memory.h 00:01:49.980 CC examples/vmd/lsvmd/lsvmd.o 00:01:49.980 TEST_HEADER include/spdk/mmio.h 00:01:49.980 CC test/env/vtophys/vtophys.o 00:01:49.980 CC test/nvme/aer/aer.o 00:01:49.980 TEST_HEADER include/spdk/nbd.h 00:01:49.980 CC examples/idxd/perf/perf.o 00:01:49.980 CC test/nvme/e2edp/nvme_dp.o 00:01:49.980 CC test/nvme/startup/startup.o 00:01:49.980 CC examples/accel/perf/accel_perf.o 00:01:49.980 TEST_HEADER include/spdk/notify.h 00:01:49.980 CC test/nvme/err_injection/err_injection.o 00:01:49.980 CC test/thread/poller_perf/poller_perf.o 00:01:49.980 TEST_HEADER include/spdk/nvme.h 00:01:49.980 CC examples/util/zipf/zipf.o 00:01:49.980 CC test/app/stub/stub.o 00:01:49.980 CC examples/ioat/perf/perf.o 00:01:49.980 TEST_HEADER include/spdk/nvme_intel.h 00:01:49.980 CC test/nvme/connect_stress/connect_stress.o 00:01:49.980 CC examples/nvme/abort/abort.o 00:01:49.980 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:49.980 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:49.980 CC test/env/memory/memory_ut.o 00:01:49.980 CC examples/vmd/led/led.o 00:01:49.980 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:49.980 CC test/event/app_repeat/app_repeat.o 00:01:49.980 CC test/nvme/boot_partition/boot_partition.o 00:01:49.980 TEST_HEADER include/spdk/nvme_spec.h 00:01:49.980 CC test/nvme/compliance/nvme_compliance.o 00:01:49.980 CC test/nvme/sgl/sgl.o 00:01:49.980 
CC examples/sock/hello_world/hello_sock.o 00:01:49.980 TEST_HEADER include/spdk/nvme_zns.h 00:01:49.980 CC test/app/bdev_svc/bdev_svc.o 00:01:49.980 CC test/nvme/simple_copy/simple_copy.o 00:01:49.980 CC examples/bdev/hello_world/hello_bdev.o 00:01:49.980 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:49.980 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:49.980 CC app/fio/bdev/fio_plugin.o 00:01:49.980 TEST_HEADER include/spdk/nvmf.h 00:01:49.980 CC test/bdev/bdevio/bdevio.o 00:01:49.980 TEST_HEADER include/spdk/nvmf_spec.h 00:01:49.980 CC test/event/scheduler/scheduler.o 00:01:49.980 CC examples/blob/cli/blobcli.o 00:01:49.980 TEST_HEADER include/spdk/nvmf_transport.h 00:01:49.980 CC test/dma/test_dma/test_dma.o 00:01:49.980 TEST_HEADER include/spdk/opal.h 00:01:49.980 TEST_HEADER include/spdk/opal_spec.h 00:01:49.980 LINK spdk_lspci 00:01:49.980 CC test/blobfs/mkfs/mkfs.o 00:01:49.980 TEST_HEADER include/spdk/pci_ids.h 00:01:50.244 TEST_HEADER include/spdk/pipe.h 00:01:50.244 CC examples/nvmf/nvmf/nvmf.o 00:01:50.244 CC test/accel/dif/dif.o 00:01:50.244 CC examples/thread/thread/thread_ex.o 00:01:50.244 TEST_HEADER include/spdk/queue.h 00:01:50.244 CC examples/blob/hello_world/hello_blob.o 00:01:50.244 TEST_HEADER include/spdk/reduce.h 00:01:50.244 CC examples/bdev/bdevperf/bdevperf.o 00:01:50.244 TEST_HEADER include/spdk/rpc.h 00:01:50.244 TEST_HEADER include/spdk/scheduler.h 00:01:50.244 TEST_HEADER include/spdk/scsi.h 00:01:50.244 TEST_HEADER include/spdk/scsi_spec.h 00:01:50.244 TEST_HEADER include/spdk/sock.h 00:01:50.244 TEST_HEADER include/spdk/stdinc.h 00:01:50.244 TEST_HEADER include/spdk/string.h 00:01:50.244 TEST_HEADER include/spdk/thread.h 00:01:50.244 TEST_HEADER include/spdk/trace.h 00:01:50.244 TEST_HEADER include/spdk/trace_parser.h 00:01:50.244 TEST_HEADER include/spdk/tree.h 00:01:50.244 TEST_HEADER include/spdk/ublk.h 00:01:50.244 TEST_HEADER include/spdk/util.h 00:01:50.244 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:50.244 TEST_HEADER include/spdk/uuid.h 00:01:50.244 CC test/env/mem_callbacks/mem_callbacks.o 00:01:50.244 TEST_HEADER include/spdk/version.h 00:01:50.244 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:50.244 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:50.244 LINK spdk_nvme_discover 00:01:50.244 TEST_HEADER include/spdk/vhost.h 00:01:50.244 TEST_HEADER include/spdk/vmd.h 00:01:50.244 CC test/lvol/esnap/esnap.o 00:01:50.244 TEST_HEADER include/spdk/xor.h 00:01:50.244 TEST_HEADER include/spdk/zipf.h 00:01:50.244 CXX test/cpp_headers/accel.o 00:01:50.244 LINK vhost 00:01:50.244 LINK rpc_client_test 00:01:50.244 LINK spdk_trace_record 00:01:50.244 LINK event_perf 00:01:50.244 LINK jsoncat 00:01:50.244 LINK nvmf_tgt 00:01:50.244 LINK reactor_perf 00:01:50.244 LINK histogram_perf 00:01:50.244 LINK pmr_persistence 00:01:50.244 LINK interrupt_tgt 00:01:50.244 LINK lsvmd 00:01:50.244 LINK app_repeat 00:01:50.244 LINK reactor 00:01:50.244 LINK startup 00:01:50.244 LINK env_dpdk_post_init 00:01:50.244 LINK iscsi_tgt 00:01:50.244 LINK vtophys 00:01:50.506 LINK cmb_copy 00:01:50.506 LINK reserve 00:01:50.506 LINK led 00:01:50.506 LINK spdk_tgt 00:01:50.506 LINK zipf 00:01:50.506 LINK poller_perf 00:01:50.506 LINK verify 00:01:50.506 LINK hotplug 00:01:50.506 LINK stub 00:01:50.506 LINK hello_world 00:01:50.506 LINK err_injection 00:01:50.506 LINK boot_partition 00:01:50.506 LINK bdev_svc 00:01:50.506 LINK connect_stress 00:01:50.506 LINK mkfs 00:01:50.506 LINK simple_copy 00:01:50.506 LINK ioat_perf 00:01:50.506 LINK nvme_dp 00:01:50.506 LINK sgl 00:01:50.506 LINK 
reset 00:01:50.506 LINK hello_sock 00:01:50.506 LINK spdk_dd 00:01:50.506 LINK hello_bdev 00:01:50.506 LINK spdk_trace 00:01:50.506 LINK scheduler 00:01:50.506 LINK thread 00:01:50.506 LINK arbitration 00:01:50.506 LINK overhead 00:01:50.506 LINK aer 00:01:50.506 CC test/nvme/fused_ordering/fused_ordering.o 00:01:50.506 LINK hello_blob 00:01:50.506 CXX test/cpp_headers/accel_module.o 00:01:50.506 LINK reconnect 00:01:50.507 LINK nvmf 00:01:50.507 CXX test/cpp_headers/assert.o 00:01:50.771 LINK idxd_perf 00:01:50.771 CXX test/cpp_headers/barrier.o 00:01:50.771 LINK pci_ut 00:01:50.771 LINK nvme_compliance 00:01:50.771 CXX test/cpp_headers/base64.o 00:01:50.771 CXX test/cpp_headers/bdev.o 00:01:50.771 LINK abort 00:01:50.771 LINK bdevio 00:01:50.771 CXX test/cpp_headers/bdev_module.o 00:01:50.771 CXX test/cpp_headers/bdev_zone.o 00:01:50.771 CXX test/cpp_headers/bit_array.o 00:01:50.771 CXX test/cpp_headers/bit_pool.o 00:01:50.771 CXX test/cpp_headers/blob_bdev.o 00:01:50.771 CXX test/cpp_headers/blobfs_bdev.o 00:01:50.771 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:50.771 CXX test/cpp_headers/blobfs.o 00:01:50.771 CXX test/cpp_headers/blob.o 00:01:50.771 CXX test/cpp_headers/conf.o 00:01:50.771 CXX test/cpp_headers/config.o 00:01:50.771 CXX test/cpp_headers/cpuset.o 00:01:50.771 CC test/nvme/fdp/fdp.o 00:01:50.771 CXX test/cpp_headers/crc16.o 00:01:50.771 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:50.771 CXX test/cpp_headers/crc32.o 00:01:50.771 CC test/nvme/cuse/cuse.o 00:01:50.771 CXX test/cpp_headers/crc64.o 00:01:50.771 LINK nvme_manage 00:01:50.771 LINK test_dma 00:01:50.771 CXX test/cpp_headers/dif.o 00:01:50.771 CXX test/cpp_headers/dma.o 00:01:50.771 CXX test/cpp_headers/endian.o 00:01:50.771 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:50.771 CXX test/cpp_headers/env_dpdk.o 00:01:50.771 CXX test/cpp_headers/env.o 00:01:50.771 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:50.771 CXX test/cpp_headers/event.o 00:01:50.771 CXX test/cpp_headers/fd_group.o 00:01:50.771 CXX test/cpp_headers/fd.o 00:01:50.771 CXX test/cpp_headers/file.o 00:01:50.771 CXX test/cpp_headers/ftl.o 00:01:50.771 LINK dif 00:01:50.771 LINK nvme_fuzz 00:01:50.771 CXX test/cpp_headers/gpt_spec.o 00:01:50.771 CXX test/cpp_headers/hexlify.o 00:01:50.771 CXX test/cpp_headers/histogram_data.o 00:01:50.771 LINK accel_perf 00:01:50.771 CXX test/cpp_headers/idxd.o 00:01:50.771 CXX test/cpp_headers/idxd_spec.o 00:01:50.771 CXX test/cpp_headers/init.o 00:01:50.771 CXX test/cpp_headers/ioat.o 00:01:51.035 CXX test/cpp_headers/ioat_spec.o 00:01:51.035 CXX test/cpp_headers/iscsi_spec.o 00:01:51.035 LINK spdk_bdev 00:01:51.035 CXX test/cpp_headers/json.o 00:01:51.035 CXX test/cpp_headers/jsonrpc.o 00:01:51.035 CXX test/cpp_headers/keyring.o 00:01:51.035 CXX test/cpp_headers/keyring_module.o 00:01:51.035 CXX test/cpp_headers/likely.o 00:01:51.035 CXX test/cpp_headers/log.o 00:01:51.035 LINK fused_ordering 00:01:51.035 CXX test/cpp_headers/lvol.o 00:01:51.035 CXX test/cpp_headers/memory.o 00:01:51.035 CXX test/cpp_headers/mmio.o 00:01:51.035 CXX test/cpp_headers/nbd.o 00:01:51.035 LINK blobcli 00:01:51.035 CXX test/cpp_headers/notify.o 00:01:51.035 LINK spdk_nvme 00:01:51.035 CXX test/cpp_headers/nvme_intel.o 00:01:51.035 CXX test/cpp_headers/nvme.o 00:01:51.035 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:51.035 CXX test/cpp_headers/nvme_ocssd.o 00:01:51.035 CXX test/cpp_headers/nvme_spec.o 00:01:51.035 CXX test/cpp_headers/nvme_zns.o 00:01:51.035 CXX test/cpp_headers/nvmf_cmd.o 00:01:51.035 LINK doorbell_aers 
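The CXX test/cpp_headers/*.o run compiles one translation unit per public SPDK header, the usual way to prove each header is self-contained under C++. A hedged sketch of the idea only, not SPDK's actual harness:

    # illustrative: include each header in an otherwise empty TU and syntax-check it
    for h in include/spdk/*.h; do
        printf '#include <spdk/%s>\n' "$(basename "$h")" > /tmp/tu.cpp
        g++ -I include -fsyntax-only /tmp/tu.cpp || echo "not self-contained: $h"
    done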
00:01:51.035 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:51.035 CXX test/cpp_headers/nvmf.o 00:01:51.035 CXX test/cpp_headers/nvmf_spec.o 00:01:51.035 CXX test/cpp_headers/opal.o 00:01:51.035 CXX test/cpp_headers/nvmf_transport.o 00:01:51.035 LINK spdk_nvme_perf 00:01:51.035 LINK mem_callbacks 00:01:51.035 CXX test/cpp_headers/opal_spec.o 00:01:51.035 CXX test/cpp_headers/pci_ids.o 00:01:51.035 CXX test/cpp_headers/pipe.o 00:01:51.035 CXX test/cpp_headers/queue.o 00:01:51.295 CXX test/cpp_headers/reduce.o 00:01:51.295 CXX test/cpp_headers/rpc.o 00:01:51.295 CXX test/cpp_headers/scheduler.o 00:01:51.295 CXX test/cpp_headers/scsi_spec.o 00:01:51.295 CXX test/cpp_headers/scsi.o 00:01:51.295 CXX test/cpp_headers/sock.o 00:01:51.295 CXX test/cpp_headers/stdinc.o 00:01:51.295 LINK spdk_top 00:01:51.295 CXX test/cpp_headers/string.o 00:01:51.295 CXX test/cpp_headers/thread.o 00:01:51.295 CXX test/cpp_headers/trace.o 00:01:51.295 CXX test/cpp_headers/trace_parser.o 00:01:51.295 CXX test/cpp_headers/tree.o 00:01:51.295 CXX test/cpp_headers/ublk.o 00:01:51.295 CXX test/cpp_headers/util.o 00:01:51.295 CXX test/cpp_headers/uuid.o 00:01:51.295 CXX test/cpp_headers/version.o 00:01:51.295 CXX test/cpp_headers/vfio_user_pci.o 00:01:51.295 LINK spdk_nvme_identify 00:01:51.295 CXX test/cpp_headers/vfio_user_spec.o 00:01:51.295 CXX test/cpp_headers/vhost.o 00:01:51.295 CXX test/cpp_headers/vmd.o 00:01:51.295 CXX test/cpp_headers/xor.o 00:01:51.295 CXX test/cpp_headers/zipf.o 00:01:51.295 LINK bdevperf 00:01:51.553 LINK fdp 00:01:51.553 LINK vhost_fuzz 00:01:51.553 LINK memory_ut 00:01:52.118 LINK cuse 00:01:52.375 LINK iscsi_fuzz 00:01:54.270 LINK esnap 00:01:54.527 00:01:54.527 real 0m48.135s 00:01:54.527 user 6m52.307s 00:01:54.527 sys 3m9.791s 00:01:54.527 11:25:25 -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:54.527 11:25:25 -- common/autotest_common.sh@10 -- $ set +x 00:01:54.527 ************************************ 00:01:54.527 END TEST make 00:01:54.527 ************************************ 00:01:54.527 11:25:25 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:54.527 11:25:25 -- pm/common@29 -- $ signal_monitor_resources TERM 00:01:54.527 11:25:25 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:01:54.527 11:25:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:54.527 11:25:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:54.527 11:25:25 -- pm/common@44 -- $ pid=2782679 00:01:54.527 11:25:25 -- pm/common@50 -- $ kill -TERM 2782679 00:01:54.527 11:25:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:54.527 11:25:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:54.527 11:25:25 -- pm/common@44 -- $ pid=2782680 00:01:54.527 11:25:25 -- pm/common@50 -- $ kill -TERM 2782680 00:01:54.527 11:25:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:54.527 11:25:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:54.527 11:25:25 -- pm/common@44 -- $ pid=2782681 00:01:54.528 11:25:25 -- pm/common@50 -- $ kill -TERM 2782681 00:01:54.528 11:25:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:54.528 11:25:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:54.528 11:25:25 -- pm/common@44 -- $ pid=2782708 00:01:54.528 11:25:25 -- 
pm/common@50 -- $ sudo -E kill -TERM 2782708 00:01:54.786 11:25:25 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:01:54.786 11:25:25 -- nvmf/common.sh@7 -- # uname -s 00:01:54.786 11:25:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:54.786 11:25:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:54.786 11:25:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:54.786 11:25:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:54.786 11:25:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:54.786 11:25:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:54.786 11:25:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:54.786 11:25:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:54.786 11:25:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:54.786 11:25:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:54.786 11:25:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:01:54.786 11:25:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:01:54.786 11:25:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:54.786 11:25:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:54.786 11:25:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:54.786 11:25:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:54.786 11:25:25 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:01:54.786 11:25:25 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:54.786 11:25:25 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:54.786 11:25:25 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:54.786 11:25:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:54.786 11:25:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:54.786 11:25:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:54.786 11:25:25 -- paths/export.sh@5 -- # export PATH 00:01:54.786 11:25:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:54.786 11:25:25 -- nvmf/common.sh@47 -- # : 0 00:01:54.786 11:25:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:54.786 11:25:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:54.786 11:25:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:54.786 11:25:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
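nvmf/common.sh seeds the defaults the RDMA tests build on: listener ports 4420-4422, the 192.168.100.0/24 test subnet starting at host address 8, a host NQN minted by nvme gen-hostnqn, and the test subsystem nqn.2016-06.io.spdk:testnqn. A hedged example of the kind of connect command these variables compose; the concrete invocation is not shown in this trace, and the flags are standard nvme-cli:

    # assumes an RDMA path to a target listening on the first test address
    nvme connect -t rdma -a 192.168.100.8 -s 4420 \
        -n nqn.2016-06.io.spdk:testnqn \
        --hostnqn="$(nvme gen-hostnqn)"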
00:01:54.786 11:25:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:54.786 11:25:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:54.786 11:25:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:54.786 11:25:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:54.786 11:25:25 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:54.786 11:25:25 -- spdk/autotest.sh@32 -- # uname -s 00:01:54.786 11:25:25 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:54.786 11:25:25 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:54.786 11:25:25 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:01:54.786 11:25:25 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:54.786 11:25:25 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:01:54.786 11:25:25 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:54.786 11:25:25 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:54.786 11:25:25 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:54.786 11:25:25 -- spdk/autotest.sh@48 -- # udevadm_pid=2839026 00:01:54.786 11:25:25 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:54.786 11:25:25 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:54.786 11:25:25 -- pm/common@17 -- # local monitor 00:01:54.786 11:25:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:54.786 11:25:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:54.786 11:25:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:54.786 11:25:25 -- pm/common@21 -- # date +%s 00:01:54.786 11:25:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:54.786 11:25:25 -- pm/common@21 -- # date +%s 00:01:54.786 11:25:25 -- pm/common@25 -- # sleep 1 00:01:54.786 11:25:25 -- pm/common@21 -- # date +%s 00:01:54.786 11:25:25 -- pm/common@21 -- # date +%s 00:01:54.786 11:25:25 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715765125 00:01:54.786 11:25:25 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715765125 00:01:54.786 11:25:25 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715765125 00:01:54.786 11:25:25 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715765125 00:01:54.786 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715765125_collect-vmstat.pm.log 00:01:54.786 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715765125_collect-cpu-load.pm.log 00:01:54.786 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715765125_collect-cpu-temp.pm.log 00:01:54.786 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715765125_collect-bmc-pm.bmc.pm.log 00:01:55.718 
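autotest.sh stashes the distro's systemd-coredump core_pattern and points dumps at SPDK's collector using the kernel's pipe syntax: a leading '|' runs the named program, with %P, %s and %t expanding to PID, signal number and dump time. The echo's redirect target is elided in this trace; on Linux it would have to be /proc/sys/kernel/core_pattern, written as root:

    # assumption: the echo at autotest.sh@39 is redirected into core_pattern
    echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' \
        > /proc/sys/kernel/core_pattern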
11:25:26 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:55.718 11:25:26 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:55.718 11:25:26 -- common/autotest_common.sh@720 -- # xtrace_disable 00:01:55.718 11:25:26 -- common/autotest_common.sh@10 -- # set +x 00:01:55.718 11:25:26 -- spdk/autotest.sh@59 -- # create_test_list 00:01:55.718 11:25:26 -- common/autotest_common.sh@744 -- # xtrace_disable 00:01:55.718 11:25:26 -- common/autotest_common.sh@10 -- # set +x 00:01:55.718 11:25:26 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:01:55.718 11:25:26 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:55.718 11:25:26 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:55.718 11:25:26 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:01:55.718 11:25:26 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:55.718 11:25:26 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:55.718 11:25:26 -- common/autotest_common.sh@1451 -- # uname 00:01:55.718 11:25:26 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:01:55.718 11:25:26 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:55.718 11:25:26 -- common/autotest_common.sh@1471 -- # uname 00:01:55.718 11:25:26 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:01:55.718 11:25:26 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:55.718 11:25:26 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:55.719 11:25:26 -- spdk/autotest.sh@72 -- # hash lcov 00:01:55.719 11:25:26 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:55.719 11:25:26 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:55.719 --rc lcov_branch_coverage=1 00:01:55.719 --rc lcov_function_coverage=1 00:01:55.719 --rc genhtml_branch_coverage=1 00:01:55.719 --rc genhtml_function_coverage=1 00:01:55.719 --rc genhtml_legend=1 00:01:55.719 --rc geninfo_all_blocks=1 00:01:55.719 ' 00:01:55.719 11:25:26 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:55.719 --rc lcov_branch_coverage=1 00:01:55.719 --rc lcov_function_coverage=1 00:01:55.719 --rc genhtml_branch_coverage=1 00:01:55.719 --rc genhtml_function_coverage=1 00:01:55.719 --rc genhtml_legend=1 00:01:55.719 --rc geninfo_all_blocks=1 00:01:55.719 ' 00:01:55.719 11:25:26 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:55.719 --rc lcov_branch_coverage=1 00:01:55.719 --rc lcov_function_coverage=1 00:01:55.719 --rc genhtml_branch_coverage=1 00:01:55.719 --rc genhtml_function_coverage=1 00:01:55.719 --rc genhtml_legend=1 00:01:55.719 --rc geninfo_all_blocks=1 00:01:55.719 --no-external' 00:01:55.719 11:25:26 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:55.719 --rc lcov_branch_coverage=1 00:01:55.719 --rc lcov_function_coverage=1 00:01:55.719 --rc genhtml_branch_coverage=1 00:01:55.719 --rc genhtml_function_coverage=1 00:01:55.719 --rc genhtml_legend=1 00:01:55.719 --rc geninfo_all_blocks=1 00:01:55.719 --no-external' 00:01:55.719 11:25:26 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:55.977 lcov: LCOV version 1.14 00:01:55.977 11:25:26 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:02:05.943 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:05.943 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:05.943 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:05.943 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:05.943 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:05.943 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:05.943 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:05.944 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:18.141 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:18.141 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:18.141 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:18.141 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:18.141 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:18.141 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:18.141 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:18.141 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:18.141 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:18.141 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:18.141 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:18.141 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:18.141 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:18.141 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:18.141 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:18.141 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:18.141 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:18.141 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:18.141 
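The lcov -c -i run above captures a zero-count "Baseline" over every .gcno in the tree before any test executes; the "no functions found" warnings that follow are expected for stub translation units and the per-header cpp_headers objects, which contain no instrumented functions. The same command condensed, with the options the job exports as LCOV_OPTS and the workspace paths shortened:

    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
      --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1
      --rc genhtml_legend=1 --rc geninfo_all_blocks=1'
    lcov $LCOV_OPTS --no-external -q -c -i -t Baseline -d ./spdk -o cov_base.info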
[geninfo: the same path + "no functions found" WARNING pair repeats for every remaining test/cpp_headers stub object — bit_pool, blob_bdev, blobfs_bdev, conf, blobfs, config, cpuset, blob, crc16, crc32, crc64, dma, dif, endian, env, env_dpdk, event, fd_group, fd, file, ftl, hexlify, gpt_spec, histogram_data, idxd, idxd_spec, init, ioat, iscsi_spec, ioat_spec, jsonrpc, json, keyring, keyring_module, likely, log, memory, nbd, lvol, mmio, notify, nvme_intel, nvme, nvme_ocssd_spec, nvme_ocssd, nvme_spec, nvme_zns, nvmf_cmd, nvmf_fc_spec, nvmf, nvmf_spec, opal, opal_spec, nvmf_transport, pipe, pci_ids, queue, reduce, rpc, scheduler, scsi_spec, scsi, sock, string, stdinc, thread, trace_parser, trace, tree, ublk, util, uuid, vfio_user_pci, vfio_user_spec, version, vhost, vmd, xor and zipf (.gcno; timestamps 00:02:18.141 through 00:02:18.703); identical warnings elided]
00:02:20.094 11:25:50 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
11:25:50 -- common/autotest_common.sh@720 -- # xtrace_disable
11:25:50 -- common/autotest_common.sh@10 -- # set +x
00:02:20.094 11:25:50 -- spdk/autotest.sh@91 -- # rm -f
00:02:20.094 11:25:50 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset
00:02:23.374 0000:5f:00.0 (8086 0a54): Already using the nvme driver
00:02:23.374 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:02:23.374 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:02:23.374 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:02:23.374 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:02:23.374 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:02:23.374 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:02:23.374 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:02:23.374 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:02:23.374 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:02:23.374 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:02:23.374 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:02:23.374 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:02:23.374 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:02:23.374 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:02:23.374 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:02:23.374 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:02:23.374 11:25:53 -- spdk/autotest.sh@96 -- #
get_zoned_devs 00:02:23.374 11:25:53 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:02:23.374 11:25:53 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:02:23.374 11:25:53 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:02:23.374 11:25:53 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:02:23.374 11:25:53 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:02:23.374 11:25:53 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:02:23.374 11:25:53 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:23.374 11:25:54 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:02:23.374 11:25:54 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:23.374 11:25:54 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:23.375 11:25:54 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:23.375 11:25:54 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:23.375 11:25:54 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:23.375 11:25:54 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:23.375 No valid GPT data, bailing 00:02:23.375 11:25:54 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:23.375 11:25:54 -- scripts/common.sh@391 -- # pt= 00:02:23.375 11:25:54 -- scripts/common.sh@392 -- # return 1 00:02:23.375 11:25:54 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:23.375 1+0 records in 00:02:23.375 1+0 records out 00:02:23.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00611783 s, 171 MB/s 00:02:23.375 11:25:54 -- spdk/autotest.sh@118 -- # sync 00:02:23.375 11:25:54 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:23.375 11:25:54 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:23.375 11:25:54 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:28.640 11:25:58 -- spdk/autotest.sh@124 -- # uname -s 00:02:28.640 11:25:58 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:28.640 11:25:58 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:02:28.640 11:25:58 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:28.640 11:25:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:28.640 11:25:58 -- common/autotest_common.sh@10 -- # set +x 00:02:28.640 ************************************ 00:02:28.640 START TEST setup.sh 00:02:28.640 ************************************ 00:02:28.640 11:25:58 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:02:28.640 * Looking for test storage... 
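For readers following the trace: the pre-cleanup pass above (get_zoned_devs through the dd scrub and sync) reduces to roughly the shell logic below. This is a minimal sketch reconstructed from the xtrace records, not the verbatim autotest.sh / autotest_common.sh source; apart from the spdk-gpt.py path and the dd invocation, which appear in the trace, the names and structure here are assumptions.

shopt -s extglob nullglob
declare -A zoned_devs=()
for nvme in /sys/block/nvme*; do
  # is_block_zoned: queue/zoned reads "none" for a conventional namespace
  [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]] && zoned_devs[${nvme##*/}]=1
done
for dev in /dev/nvme*n!(*p*); do    # whole namespaces only, partitions excluded
  [[ -n ${zoned_devs[${dev##*/}]:-} ]] && continue
  # block_in_use probe: "No valid GPT data, bailing" above is this call failing,
  # meaning no GPT was found and the namespace is treated as safe to scrub
  if ! /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py "$dev" > /dev/null 2>&1; then
    dd if=/dev/zero of="$dev" bs=1M count=1    # zero the first MiB of stale metadata
  fi
done
sync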
00:02:28.640 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:02:28.641 11:25:58 -- setup/test-setup.sh@10 -- # uname -s 00:02:28.641 11:25:58 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:28.641 11:25:58 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:02:28.641 11:25:58 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:28.641 11:25:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:28.641 11:25:58 -- common/autotest_common.sh@10 -- # set +x 00:02:28.641 ************************************ 00:02:28.641 START TEST acl 00:02:28.641 ************************************ 00:02:28.641 11:25:58 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:02:28.641 * Looking for test storage... 00:02:28.641 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:02:28.641 11:25:59 -- setup/acl.sh@10 -- # get_zoned_devs 00:02:28.641 11:25:59 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:02:28.641 11:25:59 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:02:28.641 11:25:59 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:02:28.641 11:25:59 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:02:28.641 11:25:59 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:02:28.641 11:25:59 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:02:28.641 11:25:59 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:28.641 11:25:59 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:02:28.641 11:25:59 -- setup/acl.sh@12 -- # devs=() 00:02:28.641 11:25:59 -- setup/acl.sh@12 -- # declare -a devs 00:02:28.641 11:25:59 -- setup/acl.sh@13 -- # drivers=() 00:02:28.641 11:25:59 -- setup/acl.sh@13 -- # declare -A drivers 00:02:28.641 11:25:59 -- setup/acl.sh@51 -- # setup reset 00:02:28.641 11:25:59 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:28.641 11:25:59 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:02:31.926 11:26:02 -- setup/acl.sh@52 -- # collect_setup_devs 00:02:31.926 11:26:02 -- setup/acl.sh@16 -- # local dev driver 00:02:31.926 11:26:02 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.926 11:26:02 -- setup/acl.sh@15 -- # setup output status 00:02:31.926 11:26:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:31.926 11:26:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:02:35.211 Hugepages 00:02:35.211 node hugesize free / total 00:02:35.211 11:26:05 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:35.211 11:26:05 -- setup/acl.sh@19 -- # continue 00:02:35.211 11:26:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:35.211 11:26:05 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:35.211 11:26:05 -- setup/acl.sh@19 -- # continue 00:02:35.211 11:26:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:35.211 11:26:05 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:35.211 11:26:05 -- setup/acl.sh@19 -- # continue 00:02:35.211 11:26:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:35.211 00:02:35.211 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:35.211 11:26:05 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:35.211 11:26:05 -- setup/acl.sh@19 -- # continue 00:02:35.211 11:26:05 -- 
setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:35.211 11:26:05 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:35.211 11:26:05 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:35.211 11:26:05 -- setup/acl.sh@20 -- # continue 00:02:35.211 11:26:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:35.211 11:26:05 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:35.211 11:26:05 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:35.211 11:26:05 -- setup/acl.sh@20 -- # continue 00:02:35.211 11:26:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:35.211 11:26:05 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:35.211 11:26:05 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:35.211 11:26:05 -- setup/acl.sh@20 -- # continue 00:02:35.211 11:26:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:35.211 11:26:05 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:35.211 11:26:05 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:35.211 11:26:05 -- setup/acl.sh@20 -- # continue 00:02:35.211 11:26:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:35.211 11:26:05 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:35.211 11:26:05 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:35.211 11:26:05 -- setup/acl.sh@20 -- # continue 00:02:35.211 11:26:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:35.211 11:26:05 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:35.211 11:26:05 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:35.211 11:26:05 -- setup/acl.sh@20 -- # continue 00:02:35.211 11:26:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:35.211 11:26:05 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:35.211 11:26:05 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:35.211 11:26:05 -- setup/acl.sh@20 -- # continue 00:02:35.211 11:26:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:35.211 11:26:05 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:35.211 11:26:05 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:35.211 11:26:05 -- setup/acl.sh@20 -- # continue 00:02:35.211 11:26:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:35.211 11:26:05 -- setup/acl.sh@19 -- # [[ 0000:5f:00.0 == *:*:*.* ]] 00:02:35.211 11:26:05 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:35.211 11:26:05 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\f\:\0\0\.\0* ]] 00:02:35.211 11:26:05 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:35.211 11:26:05 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:35.211 11:26:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:35.211 11:26:05 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:35.211 11:26:05 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:35.211 11:26:05 -- setup/acl.sh@20 -- # continue 00:02:35.211 11:26:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:35.211 11:26:05 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:35.211 11:26:05 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:35.211 11:26:05 -- setup/acl.sh@20 -- # continue 00:02:35.211 11:26:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:35.211 11:26:05 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:35.211 11:26:05 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:35.211 11:26:05 -- setup/acl.sh@20 -- # continue 00:02:35.211 11:26:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:35.211 11:26:05 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 
== *:*:*.* ]] 00:02:35.211 11:26:05 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:35.211 11:26:05 -- setup/acl.sh@20 -- # continue 00:02:35.211 11:26:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:35.211 11:26:05 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:35.211 11:26:05 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:35.212 11:26:05 -- setup/acl.sh@20 -- # continue 00:02:35.212 11:26:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:35.212 11:26:05 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:35.212 11:26:05 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:35.212 11:26:05 -- setup/acl.sh@20 -- # continue 00:02:35.212 11:26:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:35.212 11:26:05 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:35.212 11:26:05 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:35.212 11:26:05 -- setup/acl.sh@20 -- # continue 00:02:35.212 11:26:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:35.212 11:26:05 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:35.212 11:26:05 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:35.212 11:26:05 -- setup/acl.sh@20 -- # continue 00:02:35.212 11:26:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:35.212 11:26:05 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:35.212 11:26:05 -- setup/acl.sh@54 -- # run_test denied denied 00:02:35.212 11:26:05 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:35.212 11:26:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:35.212 11:26:05 -- common/autotest_common.sh@10 -- # set +x 00:02:35.212 ************************************ 00:02:35.212 START TEST denied 00:02:35.212 ************************************ 00:02:35.212 11:26:05 -- common/autotest_common.sh@1121 -- # denied 00:02:35.212 11:26:05 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5f:00.0' 00:02:35.212 11:26:05 -- setup/acl.sh@38 -- # setup output config 00:02:35.212 11:26:05 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5f:00.0' 00:02:35.212 11:26:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:35.212 11:26:05 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:02:38.492 0000:5f:00.0 (8086 0a54): Skipping denied controller at 0000:5f:00.0 00:02:38.492 11:26:08 -- setup/acl.sh@40 -- # verify 0000:5f:00.0 00:02:38.492 11:26:08 -- setup/acl.sh@28 -- # local dev driver 00:02:38.492 11:26:08 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:38.492 11:26:08 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5f:00.0 ]] 00:02:38.492 11:26:08 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5f:00.0/driver 00:02:38.492 11:26:08 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:38.492 11:26:08 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:38.492 11:26:08 -- setup/acl.sh@41 -- # setup reset 00:02:38.492 11:26:08 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:38.492 11:26:08 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:02:42.678 00:02:42.678 real 0m7.554s 00:02:42.678 user 0m2.395s 00:02:42.678 sys 0m4.462s 00:02:42.678 11:26:13 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:42.678 11:26:13 -- common/autotest_common.sh@10 -- # set +x 00:02:42.678 ************************************ 00:02:42.678 END TEST denied 00:02:42.678 ************************************ 00:02:42.678 11:26:13 -- setup/acl.sh@55 -- # run_test 
allowed allowed 00:02:42.678 11:26:13 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:42.678 11:26:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:42.678 11:26:13 -- common/autotest_common.sh@10 -- # set +x 00:02:42.678 ************************************ 00:02:42.678 START TEST allowed 00:02:42.678 ************************************ 00:02:42.678 11:26:13 -- common/autotest_common.sh@1121 -- # allowed 00:02:42.678 11:26:13 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5f:00.0 00:02:42.678 11:26:13 -- setup/acl.sh@45 -- # setup output config 00:02:42.678 11:26:13 -- setup/acl.sh@46 -- # grep -E '0000:5f:00.0 .*: nvme -> .*' 00:02:42.678 11:26:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:42.678 11:26:13 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:02:50.790 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:02:50.790 11:26:21 -- setup/acl.sh@47 -- # verify 00:02:50.790 11:26:21 -- setup/acl.sh@28 -- # local dev driver 00:02:50.790 11:26:21 -- setup/acl.sh@48 -- # setup reset 00:02:50.790 11:26:21 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:50.790 11:26:21 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:02:54.074 00:02:54.074 real 0m11.002s 00:02:54.074 user 0m1.978s 00:02:54.074 sys 0m4.021s 00:02:54.074 11:26:24 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:54.074 11:26:24 -- common/autotest_common.sh@10 -- # set +x 00:02:54.074 ************************************ 00:02:54.074 END TEST allowed 00:02:54.074 ************************************ 00:02:54.074 00:02:54.074 real 0m25.320s 00:02:54.074 user 0m6.803s 00:02:54.074 sys 0m13.002s 00:02:54.075 11:26:24 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:54.075 11:26:24 -- common/autotest_common.sh@10 -- # set +x 00:02:54.075 ************************************ 00:02:54.075 END TEST acl 00:02:54.075 ************************************ 00:02:54.075 11:26:24 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:02:54.075 11:26:24 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:54.075 11:26:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:54.075 11:26:24 -- common/autotest_common.sh@10 -- # set +x 00:02:54.075 ************************************ 00:02:54.075 START TEST hugepages 00:02:54.075 ************************************ 00:02:54.075 11:26:24 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:02:54.075 * Looking for test storage... 
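The denied/allowed pair that just finished exercises setup.sh's PCI filters: denied runs setup config with the NVMe controller in PCI_BLOCKED and greps for the skip message, then verify reads back which driver the device is still bound to. A condensed sketch of that assertion, reconstructed from the acl.sh trace (the grep patterns and sysfs paths appear in the trace; the standalone framing is illustrative):

# denied: with the controller blocked, setup config must report the skip ...
PCI_BLOCKED=' 0000:5f:00.0' \
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config \
  | grep 'Skipping denied controller at 0000:5f:00.0'
# ... and the device must still be bound to the kernel nvme driver
dev=0000:5f:00.0
[[ -e /sys/bus/pci/devices/$dev ]]
driver=$(readlink -f "/sys/bus/pci/devices/$dev/driver")
[[ ${driver##*/} == nvme ]]
# allowed is the mirror image: PCI_ALLOWED=0000:5f:00.0 plus
# grep -E '0000:5f:00.0 .*: nvme -> .*' expects the vfio-pci rebind instead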
00:02:54.075 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:02:54.075 11:26:24 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:54.075 11:26:24 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:54.075 11:26:24 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:54.075 11:26:24 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:54.075 11:26:24 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:54.075 11:26:24 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:54.075 11:26:24 -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:54.075 11:26:24 -- setup/common.sh@18 -- # local node= 00:02:54.075 11:26:24 -- setup/common.sh@19 -- # local var val 00:02:54.075 11:26:24 -- setup/common.sh@20 -- # local mem_f mem 00:02:54.075 11:26:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.075 11:26:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:54.075 11:26:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:54.075 11:26:24 -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.075 11:26:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.075 11:26:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.075 11:26:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.075 11:26:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 71036076 kB' 'MemAvailable: 75533656 kB' 'Buffers: 3728 kB' 'Cached: 14414084 kB' 'SwapCached: 0 kB' 'Active: 10493280 kB' 'Inactive: 4420212 kB' 'Active(anon): 9880868 kB' 'Inactive(anon): 0 kB' 'Active(file): 612412 kB' 'Inactive(file): 4420212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499408 kB' 'Mapped: 184404 kB' 'Shmem: 9385188 kB' 'KReclaimable: 237196 kB' 'Slab: 666324 kB' 'SReclaimable: 237196 kB' 'SUnreclaim: 429128 kB' 'KernelStack: 16512 kB' 'PageTables: 8128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52438232 kB' 'Committed_AS: 11157496 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205588 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1844692 kB' 'DirectMap2M: 25094144 kB' 'DirectMap1G: 74448896 kB' 00:02:54.075 11:26:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:54.075 11:26:24 -- setup/common.sh@32 -- # continue 00:02:54.075 11:26:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.075 11:26:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.075 11:26:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:54.075 11:26:24 -- setup/common.sh@32 -- # continue 00:02:54.075 11:26:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.075 11:26:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.075 11:26:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:54.075 11:26:24 -- setup/common.sh@32 -- # continue 00:02:54.075 11:26:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.075 11:26:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.075 11:26:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:02:54.075 11:26:24 -- setup/common.sh@32 -- # continue
00:02:54.075 [xtrace elided: the identical IFS/read plus '[[ <field> == Hugepagesize ]] / continue' trace repeats for every remaining field of the meminfo snapshot above, Cached through HugePages_Surp, until the match below]
00:02:54.076 11:26:24 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
11:26:24 -- setup/common.sh@33 -- # echo 2048
00:02:54.076 11:26:24 -- setup/common.sh@33 -- # return 0
00:02:54.076 11:26:24 -- setup/hugepages.sh@16 -- # default_hugepages=2048
11:26:24 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
11:26:24 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
11:26:24 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
11:26:24 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
11:26:24 -- setup/hugepages.sh@23 -- # unset -v HUGENODE
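The field-by-field scan elided above is essentially all get_meminfo does: pick /proc/meminfo (or a node's meminfo file), strip any "Node N" prefixes, and walk the "Name: value kB" records until the requested field matches. A minimal sketch reconstructed from the setup/common.sh trace (not the verbatim source):

shopt -s extglob
# get_meminfo <field> [node] -> prints the value column, e.g. 2048 for Hugepagesize
get_meminfo() {
  local get=$1 node=${2:-} mem mem_f=/proc/meminfo var val _ line
  [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
    mem_f=/sys/devices/system/node/node$node/meminfo
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")     # per-node files prefix records with "Node N "
  for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == "$get" ]] || continue   # the long run of 'continue' in the trace
    echo "$val"
    return 0
  done
  return 1
}
get_meminfo Hugepagesize   # -> 2048 on this rig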
00:02:54.076 11:26:24 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:54.076 11:26:24 -- setup/hugepages.sh@207 -- # get_nodes 00:02:54.076 11:26:24 -- setup/hugepages.sh@27 -- # local node 00:02:54.076 11:26:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:54.076 11:26:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:54.076 11:26:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:54.076 11:26:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:54.076 11:26:24 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:54.076 11:26:24 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:54.076 11:26:24 -- setup/hugepages.sh@208 -- # clear_hp 00:02:54.076 11:26:24 -- setup/hugepages.sh@37 -- # local node hp 00:02:54.076 11:26:24 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:54.076 11:26:24 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:54.076 11:26:24 -- setup/hugepages.sh@41 -- # echo 0 00:02:54.076 11:26:24 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:54.076 11:26:24 -- setup/hugepages.sh@41 -- # echo 0 00:02:54.076 11:26:24 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:54.076 11:26:24 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:54.076 11:26:24 -- setup/hugepages.sh@41 -- # echo 0 00:02:54.076 11:26:24 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:54.076 11:26:24 -- setup/hugepages.sh@41 -- # echo 0 00:02:54.076 11:26:24 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:54.076 11:26:24 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:54.076 11:26:24 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:54.076 11:26:24 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:54.076 11:26:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:54.076 11:26:24 -- common/autotest_common.sh@10 -- # set +x 00:02:54.076 ************************************ 00:02:54.076 START TEST default_setup 00:02:54.076 ************************************ 00:02:54.076 11:26:24 -- common/autotest_common.sh@1121 -- # default_setup 00:02:54.076 11:26:24 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:54.076 11:26:24 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:54.076 11:26:24 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:54.076 11:26:24 -- setup/hugepages.sh@51 -- # shift 00:02:54.076 11:26:24 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:54.076 11:26:24 -- setup/hugepages.sh@52 -- # local node_ids 00:02:54.076 11:26:24 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:54.076 11:26:24 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:54.076 11:26:24 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:54.076 11:26:24 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:54.076 11:26:24 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:54.076 11:26:24 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:54.076 11:26:24 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:54.076 11:26:24 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:54.076 11:26:24 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:54.076 11:26:24 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:54.076 11:26:24 -- 
00:02:54.076 11:26:24 -- setup/hugepages.sh@137 -- # setup output
00:02:54.076 11:26:24 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:54.076 11:26:24 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:02:56.605 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:02:56.605 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:02:56.605 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:02:56.605 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:02:56.605 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:02:56.605 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:02:56.605 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:02:56.605 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:02:56.605 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:02:56.605 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:02:56.605 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:02:56.605 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:02:56.605 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:02:56.605 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:02:56.605 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:02:56.605 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:01.903 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci
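scripts/setup.sh rebinds the I/OAT DMA channels and the NVMe controller from their kernel drivers to vfio-pci so SPDK can drive them from user space. Roughly, each "driver -> vfio-pci" line corresponds to the generic sysfs rebind sequence below (a sketch only; the real script also handles allowlists, IOMMU checks and fallbacks, and the BDF here is simply the first one from the log, used as an example):

    bdf=0000:00:04.7
    # tell the PCI core which driver may claim this device next
    echo vfio-pci | sudo tee "/sys/bus/pci/devices/$bdf/driver_override"
    # release it from its current driver, then re-run driver matching
    echo "$bdf" | sudo tee "/sys/bus/pci/devices/$bdf/driver/unbind"
    echo "$bdf" | sudo tee /sys/bus/pci/drivers_probe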
00:03:01.903 11:26:32 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:01.903 11:26:32 -- setup/hugepages.sh@89 -- # local node
00:03:01.903 11:26:32 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:01.903 11:26:32 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:01.903 11:26:32 -- setup/hugepages.sh@92 -- # local surp
00:03:01.903 11:26:32 -- setup/hugepages.sh@93 -- # local resv
00:03:01.903 11:26:32 -- setup/hugepages.sh@94 -- # local anon
00:03:01.903 11:26:32 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:01.903 11:26:32 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:01.903 11:26:32 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:01.903 11:26:32 -- setup/common.sh@18 -- # local node=
00:03:01.903 11:26:32 -- setup/common.sh@19 -- # local var val
00:03:01.903 11:26:32 -- setup/common.sh@20 -- # local mem_f mem
00:03:01.903 11:26:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:01.903 11:26:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:01.903 11:26:32 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:01.903 11:26:32 -- setup/common.sh@28 -- # mapfile -t mem
00:03:01.903 11:26:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:01.903 11:26:32 -- setup/common.sh@31 -- # IFS=': '
00:03:01.903 11:26:32 -- setup/common.sh@31 -- # read -r var val _
00:03:01.903 11:26:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73189244 kB' 'MemAvailable: 77686616 kB' 'Buffers: 3728 kB' 'Cached: 14414208 kB' 'SwapCached: 0 kB' 'Active: 10512108 kB' 'Inactive: 4420212 kB' 'Active(anon): 9899696 kB' 'Inactive(anon): 0 kB' 'Active(file): 612412 kB' 'Inactive(file): 4420212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517184 kB' 'Mapped: 184492 kB' 'Shmem: 9385312 kB' 'KReclaimable: 236780 kB' 'Slab: 665024 kB' 'SReclaimable: 236780 kB' 'SUnreclaim: 428244 kB' 'KernelStack: 16720 kB' 'PageTables: 8848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486808 kB' 'Committed_AS: 11178264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205796 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1844692 kB' 'DirectMap2M: 25094144 kB' 'DirectMap1G: 74448896 kB'
00:03:01.903 11:26:32 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:01.903 11:26:32 -- setup/common.sh@32 -- # continue
00:03:01.903 11:26:32 -- setup/common.sh@31 -- # IFS=': '
00:03:01.903 11:26:32 -- setup/common.sh@31 -- # read -r var val _
[... the same @32 compare/continue and @31 IFS/read cycle repeats for each field from MemFree through HardwareCorrupted ...]
00:03:01.904 11:26:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:01.904 11:26:32 -- setup/common.sh@33 -- # echo 0
00:03:01.904 11:26:32 -- setup/common.sh@33 -- # return 0
00:03:01.904 11:26:32 -- setup/hugepages.sh@97 -- # anon=0
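Each get_meminfo call above dumps the whole meminfo snapshot into an array and then walks it with read/compare/continue until the requested key matches; functionally it reduces to a single-pass lookup. A compact equivalent (a sketch, not the script's actual implementation; it also strips the "Node N " prefix carried by per-node meminfo files):

    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # per-node files live under sysfs; fall back to /proc/meminfo
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        awk -v key="$get" \
            '{ sub(/^Node [0-9]+ /, "") } $1 == key":" { print $2; exit }' "$mem_f"
    }
    get_meminfo AnonHugePages    # prints 0 on this box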
00:03:01.904 11:26:32 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:01.904 11:26:32 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:01.904 11:26:32 -- setup/common.sh@18 -- # local node=
00:03:01.904 11:26:32 -- setup/common.sh@19 -- # local var val
00:03:01.904 11:26:32 -- setup/common.sh@20 -- # local mem_f mem
00:03:01.904 11:26:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:01.904 11:26:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:01.904 11:26:32 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:01.904 11:26:32 -- setup/common.sh@28 -- # mapfile -t mem
00:03:01.904 11:26:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:01.904 11:26:32 -- setup/common.sh@31 -- # IFS=': '
00:03:01.904 11:26:32 -- setup/common.sh@31 -- # read -r var val _
00:03:01.904 11:26:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73187480 kB' 'MemAvailable: 77684852 kB' 'Buffers: 3728 kB' 'Cached: 14414208 kB' 'SwapCached: 0 kB' 'Active: 10515188 kB' 'Inactive: 4420212 kB' 'Active(anon): 9902776 kB' 'Inactive(anon): 0 kB' 'Active(file): 612412 kB' 'Inactive(file): 4420212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520252 kB' 'Mapped: 184964 kB' 'Shmem: 9385312 kB' 'KReclaimable: 236780 kB' 'Slab: 665160 kB' 'SReclaimable: 236780 kB' 'SUnreclaim: 428380 kB' 'KernelStack: 17072 kB' 'PageTables: 8952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486808 kB' 'Committed_AS: 11180332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205780 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1844692 kB' 'DirectMap2M: 25094144 kB' 'DirectMap1G: 74448896 kB'
00:03:01.904 11:26:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:01.904 11:26:32 -- setup/common.sh@32 -- # continue
00:03:01.904 11:26:32 -- setup/common.sh@31 -- # IFS=': '
00:03:01.904 11:26:32 -- setup/common.sh@31 -- # read -r var val _
[... the same @32 compare/continue and @31 IFS/read cycle repeats for each field from MemFree through HugePages_Rsvd ...]
00:03:01.906 11:26:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:01.906 11:26:32 -- setup/common.sh@33 -- # echo 0
00:03:01.906 11:26:32 -- setup/common.sh@33 -- # return 0
00:03:01.906 11:26:32 -- setup/hugepages.sh@99 -- # surp=0
00:03:01.906 11:26:32 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:01.906 11:26:32 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:01.906 11:26:32 -- setup/common.sh@18 -- # local node=
00:03:01.906 11:26:32 -- setup/common.sh@19 -- # local var val
00:03:01.906 11:26:32 -- setup/common.sh@20 -- # local mem_f mem
00:03:01.906 11:26:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:01.906 11:26:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:01.906 11:26:32 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:01.906 11:26:32 -- setup/common.sh@28 -- # mapfile -t mem
00:03:01.906 11:26:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:01.906 11:26:32 -- setup/common.sh@31 -- # IFS=': '
00:03:01.906 11:26:32 -- setup/common.sh@31 -- # read -r var val _
00:03:01.906 11:26:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73184980 kB' 'MemAvailable: 77682352 kB' 'Buffers: 3728 kB' 'Cached: 14414212 kB' 'SwapCached: 0 kB' 'Active: 10517436 kB' 'Inactive: 4420212 kB' 'Active(anon): 9905024 kB' 'Inactive(anon): 0 kB' 'Active(file): 612412 kB' 'Inactive(file): 4420212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522968 kB' 'Mapped: 184904 kB' 'Shmem: 9385316 kB' 'KReclaimable: 236780 kB' 'Slab: 665152 kB' 'SReclaimable: 236780 kB' 'SUnreclaim: 428372 kB' 'KernelStack: 17088 kB' 'PageTables: 9296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486808 kB' 'Committed_AS: 11183396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205816 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1844692 kB' 'DirectMap2M: 25094144 kB' 'DirectMap1G: 74448896 kB'
00:03:01.906 11:26:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:01.906 11:26:32 -- setup/common.sh@32 -- # continue
00:03:01.906 11:26:32 -- setup/common.sh@31 -- # IFS=': '
00:03:01.906 11:26:32 -- setup/common.sh@31 -- # read -r var val _
[... the same @32 compare/continue and @31 IFS/read cycle repeats for each field from MemFree through HugePages_Free ...]
00:03:01.907 11:26:32 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:01.907 11:26:32 -- setup/common.sh@33 -- # echo 0
00:03:01.907 11:26:32 -- setup/common.sh@33 -- # return 0
00:03:01.907 11:26:32 -- setup/hugepages.sh@100 -- # resv=0
00:03:01.907 11:26:32 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:01.907 nr_hugepages=1024
00:03:01.907 11:26:32 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:01.907 resv_hugepages=0
00:03:01.907 11:26:32 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:01.907 surplus_hugepages=0
00:03:01.907 11:26:32 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:01.907 anon_hugepages=0
00:03:01.907 11:26:32 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:01.907 11:26:32 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
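With anon, surp and resv all 0 and nr_hugepages=1024, the @107/@109 checks above plus the HugePages_Total lookup that follows assert that the kernel's pool is exactly the one the test configured. The same invariant as a stand-alone sketch (verify_hp is an illustrative name, not from the SPDK scripts):

    verify_hp() {
        local expected=$1 total surp resv
        total=$(awk '$1 == "HugePages_Total:" { print $2 }' /proc/meminfo)
        surp=$(awk '$1 == "HugePages_Surp:" { print $2 }' /proc/meminfo)
        resv=$(awk '$1 == "HugePages_Rsvd:" { print $2 }' /proc/meminfo)
        # the reported total must account for the configured pages plus any
        # surplus and reserved pages:
        (( total == expected + surp + resv ))
    }
    verify_hp 1024 && echo OK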
00:03:01.907 11:26:32 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:01.907 11:26:32 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:01.907 11:26:32 -- setup/common.sh@18 -- # local node=
00:03:01.907 11:26:32 -- setup/common.sh@19 -- # local var val
00:03:01.907 11:26:32 -- setup/common.sh@20 -- # local mem_f mem
00:03:01.907 11:26:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:01.907 11:26:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:01.907 11:26:32 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:01.907 11:26:32 -- setup/common.sh@28 -- # mapfile -t mem
00:03:01.907 11:26:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:01.907 11:26:32 -- setup/common.sh@31 -- # IFS=': '
00:03:01.907 11:26:32 -- setup/common.sh@31 -- # read -r var val _
00:03:01.907 11:26:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73191768 kB' 'MemAvailable: 77689140 kB' 'Buffers: 3728 kB' 'Cached: 14414236 kB' 'SwapCached: 0 kB' 'Active: 10511716 kB' 'Inactive: 4420212 kB' 'Active(anon): 9899304 kB' 'Inactive(anon): 0 kB' 'Active(file): 612412 kB' 'Inactive(file): 4420212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517272 kB' 'Mapped: 184384 kB' 'Shmem: 9385340 kB' 'KReclaimable: 236780 kB' 'Slab: 665336 kB' 'SReclaimable: 236780 kB' 'SUnreclaim: 428556 kB' 'KernelStack: 16720 kB' 'PageTables: 8532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486808 kB' 'Committed_AS: 11177292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205796 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1844692 kB' 'DirectMap2M: 25094144 kB' 'DirectMap1G: 74448896 kB'
00:03:01.908 11:26:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:01.908 11:26:32 -- setup/common.sh@32 -- # continue
00:03:01.908 11:26:32 -- setup/common.sh@31 -- # IFS=': '
00:03:01.908 11:26:32 -- setup/common.sh@31 -- # read -r var val _
[... the same @32 compare/continue and @31 IFS/read cycle repeats for each field from MemFree through Unaccepted ...]
00:03:01.909 11:26:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:01.909 11:26:32 -- setup/common.sh@33 -- # echo 1024
00:03:01.909 11:26:32 -- setup/common.sh@33 -- # return 0
00:03:01.909 11:26:32 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:01.909 11:26:32 -- setup/hugepages.sh@112 -- # get_nodes
00:03:01.909 11:26:32 -- setup/hugepages.sh@27 -- # local node
00:03:01.909 11:26:32 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:01.909 11:26:32 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:01.909 11:26:32 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:01.909 11:26:32 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:01.909 11:26:32 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:01.909 11:26:32 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:01.909 11:26:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.909 11:26:32 -- setup/common.sh@33 -- # echo 1024 00:03:01.909 11:26:32 -- setup/common.sh@33 -- # return 0 00:03:01.909 11:26:32 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:01.909 11:26:32 -- setup/hugepages.sh@112 -- # get_nodes 00:03:01.909 11:26:32 -- setup/hugepages.sh@27 -- # local node 00:03:01.909 11:26:32 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:01.909 11:26:32 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:01.909 11:26:32 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:01.909 11:26:32 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:01.909 11:26:32 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:01.909 11:26:32 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:01.909 11:26:32 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:01.909 11:26:32 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:01.909 11:26:32 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:01.909 11:26:32 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:01.909 11:26:32 -- setup/common.sh@18 -- # local node=0 00:03:01.909 11:26:32 -- setup/common.sh@19 -- # local var val 00:03:01.909 11:26:32 -- setup/common.sh@20 -- # local mem_f mem 00:03:01.909 11:26:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.909 11:26:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:01.909 11:26:32 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:01.909 11:26:32 -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.909 11:26:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.909 11:26:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116988 kB' 'MemFree: 33524588 kB' 'MemUsed: 14592400 kB' 'SwapCached: 0 
kB' 'Active: 7600720 kB' 'Inactive: 3543144 kB' 'Active(anon): 7408424 kB' 'Inactive(anon): 0 kB' 'Active(file): 192296 kB' 'Inactive(file): 3543144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11005160 kB' 'Mapped: 126472 kB' 'AnonPages: 141948 kB' 'Shmem: 7269720 kB' 'KernelStack: 10056 kB' 'PageTables: 4460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132320 kB' 'Slab: 407992 kB' 'SReclaimable: 132320 kB' 'SUnreclaim: 275672 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.909 
11:26:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # IFS=': 
' 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.909 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.909 11:26:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.910 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.910 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.910 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.910 11:26:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.910 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.910 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.910 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.910 11:26:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.910 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.910 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.910 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.910 11:26:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.910 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.910 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.910 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.910 11:26:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.910 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.910 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.910 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.910 11:26:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.910 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.910 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.910 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.910 11:26:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.910 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.910 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.910 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.910 11:26:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.910 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.910 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.910 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.910 11:26:32 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.910 11:26:32 -- setup/common.sh@32 -- # continue 00:03:01.910 11:26:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.910 11:26:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.910 11:26:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.910 11:26:32 -- setup/common.sh@33 -- # echo 0 00:03:01.910 11:26:32 -- setup/common.sh@33 -- # return 0 00:03:01.910 11:26:32 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:01.910 11:26:32 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:01.910 11:26:32 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:01.910 11:26:32 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:01.910 11:26:32 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:01.910 node0=1024 expecting 1024 00:03:01.910 11:26:32 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:01.910 00:03:01.910 real 0m7.777s 00:03:01.910 user 0m1.049s 00:03:01.910 sys 0m1.834s 00:03:01.910 11:26:32 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:01.910 11:26:32 -- common/autotest_common.sh@10 -- # set +x 00:03:01.910 ************************************ 00:03:01.910 END TEST default_setup 00:03:01.910 ************************************ 00:03:01.910 11:26:32 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:01.910 11:26:32 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:01.910 11:26:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:01.910 11:26:32 -- common/autotest_common.sh@10 -- # set +x 00:03:01.910 ************************************ 00:03:01.910 START TEST per_node_1G_alloc 00:03:01.910 ************************************ 00:03:01.910 11:26:32 -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:03:01.910 11:26:32 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:01.910 11:26:32 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:01.910 11:26:32 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:01.910 11:26:32 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:01.910 11:26:32 -- setup/hugepages.sh@51 -- # shift 00:03:01.910 11:26:32 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:01.910 11:26:32 -- setup/hugepages.sh@52 -- # local node_ids 00:03:01.910 11:26:32 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:01.910 11:26:32 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:01.910 11:26:32 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:01.910 11:26:32 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:01.910 11:26:32 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:01.910 11:26:32 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:01.910 11:26:32 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:01.910 11:26:32 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:01.910 11:26:32 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:01.910 11:26:32 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:01.910 11:26:32 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:01.910 11:26:32 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:01.910 11:26:32 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:01.910 11:26:32 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:01.910 11:26:32 -- setup/hugepages.sh@73 -- # return 0 00:03:01.910 11:26:32 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:01.910 
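The per_node_1G_alloc prologue above turns a 1 GiB-per-node request (size 1048576 kB over nodes 0 and 1) into 512 default 2048 kB pages for each node, which NRHUGE=512 HUGENODE=0,1 then hands to setup.sh below. A minimal sketch of what that per-node reservation amounts to through the kernel's standard per-node sysfs knobs; the paths assume the 2048 kB default hugepage size shown in the meminfo dumps further down, and setup.sh itself does considerably more:

# Hedged sketch, not SPDK's setup.sh: reserve NRHUGE default-size
# (2048 kB) hugepages on each node listed in HUGENODE via the kernel's
# per-node sysfs interface.
NRHUGE=512
HUGENODE=0,1
IFS=, read -ra nodes <<<"$HUGENODE"
for node in "${nodes[@]}"; do
    echo "$NRHUGE" | sudo tee \
        "/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages"
done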
00:03:01.910 11:26:32 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:01.910 11:26:32 -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:01.910 11:26:32 -- setup/hugepages.sh@146 -- # setup output
00:03:01.910 11:26:32 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:01.910 11:26:32 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:03:04.452 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:04.452 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:04.452 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:04.452 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:04.452 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:04.452 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:04.452 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:04.452 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:04.452 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:04.452 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:04.452 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:04.452 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:04.452 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:04.452 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:04.452 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:04.452 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:04.452 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:04.452 11:26:35 -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:04.452 11:26:35 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:04.452 11:26:35 -- setup/hugepages.sh@89 -- # local node
00:03:04.452 11:26:35 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:04.452 11:26:35 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:04.452 11:26:35 -- setup/hugepages.sh@92 -- # local surp
00:03:04.452 11:26:35 -- setup/hugepages.sh@93 -- # local resv
00:03:04.452 11:26:35 -- setup/hugepages.sh@94 -- # local anon
00:03:04.452 11:26:35 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:04.452 11:26:35 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:04.452 11:26:35 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:04.452 11:26:35 -- setup/common.sh@18 -- # local node=
00:03:04.452 11:26:35 -- setup/common.sh@19 -- # local var val
00:03:04.452 11:26:35 -- setup/common.sh@20 -- # local mem_f mem
00:03:04.452 11:26:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:04.452 11:26:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:04.452 11:26:35 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:04.452 11:26:35 -- setup/common.sh@28 -- # mapfile -t mem
00:03:04.452 11:26:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:04.452 11:26:35 -- setup/common.sh@31 -- # IFS=': '
00:03:04.452 11:26:35 -- setup/common.sh@31 -- # read -r var val _
00:03:04.452 11:26:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73199468 kB' 'MemAvailable: 77696840 kB' 'Buffers: 3728 kB' 'Cached: 14414308 kB' 'SwapCached: 0 kB' 'Active: 10513488 kB' 'Inactive: 4420212 kB' 'Active(anon): 9901076 kB' 'Inactive(anon): 0 kB' 'Active(file): 612412 kB' 'Inactive(file): 4420212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519036 kB' 'Mapped: 184320 kB' 'Shmem: 9385412 kB' 'KReclaimable: 236780 kB' 'Slab: 665632 kB' 'SReclaimable: 236780 kB' 'SUnreclaim: 428852 kB' 'KernelStack: 17072 kB' 'PageTables: 9276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486808 kB' 'Committed_AS: 11177432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205796 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1844692 kB' 'DirectMap2M: 25094144 kB' 'DirectMap1G: 74448896 kB'
00:03:04.453 [read loop steps over the meminfo fields (MemTotal ... HardwareCorrupted) until AnonHugePages matches]
00:03:04.453 11:26:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:04.453 11:26:35 -- setup/common.sh@33 -- # echo 0
00:03:04.453 11:26:35 -- setup/common.sh@33 -- # return 0
00:03:04.453 11:26:35 -- setup/hugepages.sh@97 -- # anon=0
00:03:04.453 11:26:35 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:04.453 11:26:35 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:04.453 11:26:35 -- setup/common.sh@18 -- # local node=
00:03:04.453 11:26:35 -- setup/common.sh@19 -- # local var val
00:03:04.453 11:26:35 -- setup/common.sh@20 -- # local mem_f mem
00:03:04.453 11:26:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:04.453 11:26:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:04.453 11:26:35 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:04.453 11:26:35 -- setup/common.sh@28 -- # mapfile -t mem
00:03:04.453 11:26:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:04.453 11:26:35 -- setup/common.sh@31 -- # IFS=': '
00:03:04.453 11:26:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73199344 kB' 'MemAvailable: 77696716 kB' 'Buffers: 3728 kB' 'Cached: 14414308 kB' 'SwapCached: 0 kB' 'Active: 10512144 kB' 'Inactive: 4420212 kB' 'Active(anon): 9899732 kB' 'Inactive(anon): 0 kB' 'Active(file): 612412 kB' 'Inactive(file): 4420212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517644 kB' 'Mapped: 184340 kB' 'Shmem: 9385412 kB' 'KReclaimable: 236780 kB' 'Slab: 665568 kB' 'SReclaimable: 236780 kB' 'SUnreclaim: 428788 kB' 'KernelStack: 16592 kB' 'PageTables: 8184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486808 kB' 'Committed_AS: 11177444 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205716 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1844692 kB' 'DirectMap2M: 25094144 kB' 'DirectMap1G: 74448896 kB'
00:03:04.453 [read loop steps over the meminfo fields (MemTotal ... HugePages_Rsvd), none matching HugePages_Surp]
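Every long field scan in this log is the same get_meminfo mechanism from setup/common.sh, visible in the traced commands above: mapfile the meminfo file into an array, strip any "Node N " prefix, then read "field: value" pairs with IFS=': ' until the requested field matches and its value is echoed. A self-contained re-creation assembled from those traced lines; the name get_meminfo_value is a hypothetical stand-in, not the SPDK function itself:

#!/usr/bin/env bash
shopt -s extglob   # needed for the "Node N " prefix strip below

get_meminfo_value() {   # illustrative stand-in for setup/common.sh's get_meminfo
    local get=$1 node=$2 var val _ mem_f=/proc/meminfo
    # A per-node query reads that node's own meminfo file instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")           # drop any "Node N " prefix
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"  # split "Field:   value kB"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_meminfo_value HugePages_Total      # -> 1024 on this machine
get_meminfo_value HugePages_Surp 0     # -> 0 for node0, as traced earlier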
00:03:04.455 11:26:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:04.455 11:26:35 -- setup/common.sh@33 -- # echo 0
00:03:04.455 11:26:35 -- setup/common.sh@33 -- # return 0
00:03:04.455 11:26:35 -- setup/hugepages.sh@99 -- # surp=0
00:03:04.455 11:26:35 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:04.455 11:26:35 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:04.455 11:26:35 -- setup/common.sh@18 -- # local node=
00:03:04.455 11:26:35 -- setup/common.sh@19 -- # local var val
00:03:04.455 11:26:35 -- setup/common.sh@20 -- # local mem_f mem
00:03:04.455 11:26:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:04.455 11:26:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:04.455 11:26:35 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:04.455 11:26:35 -- setup/common.sh@28 -- # mapfile -t mem
00:03:04.455 11:26:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:04.455 11:26:35 -- setup/common.sh@31 -- # IFS=': '
00:03:04.455 11:26:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73199172 kB' 'MemAvailable: 77696544 kB' 'Buffers: 3728 kB' 'Cached: 14414320 kB' 'SwapCached: 0 kB' 'Active: 10512160 kB' 'Inactive: 4420212 kB' 'Active(anon): 9899748 kB' 'Inactive(anon): 0 kB' 'Active(file): 612412 kB' 'Inactive(file): 4420212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517616 kB' 'Mapped: 184332 kB' 'Shmem: 9385424 kB' 'KReclaimable: 236780 kB' 'Slab: 665568 kB' 'SReclaimable: 236780 kB' 'SUnreclaim: 428788 kB' 'KernelStack: 16592 kB' 'PageTables: 7832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486808 kB' 'Committed_AS: 11175044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205684 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1844692 kB' 'DirectMap2M: 25094144 kB' 'DirectMap1G: 74448896 kB'
00:03:04.455 11:26:35 -- setup/common.sh@31 -- # read -r var val _
00:03:04.455 [read loop steps over the meminfo fields (MemTotal ... AnonHugePages) toward HugePages_Rsvd; the trace continues]
11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.456 11:26:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.456 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.456 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.456 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.456 11:26:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.456 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.456 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.456 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.456 11:26:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.456 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.456 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.456 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.456 11:26:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.456 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.456 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.456 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.456 11:26:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.456 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.456 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.456 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.456 11:26:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.456 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.456 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.456 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.456 11:26:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.456 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.456 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.456 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.456 11:26:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.456 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.456 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.456 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.456 11:26:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.456 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.456 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.456 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.456 11:26:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.456 11:26:35 -- setup/common.sh@33 -- # echo 0 00:03:04.456 11:26:35 -- setup/common.sh@33 -- # return 0 00:03:04.456 11:26:35 -- setup/hugepages.sh@100 -- # resv=0 00:03:04.456 11:26:35 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:04.456 nr_hugepages=1024 00:03:04.456 11:26:35 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:04.456 resv_hugepages=0 00:03:04.456 11:26:35 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:04.456 surplus_hugepages=0 00:03:04.456 11:26:35 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:04.456 anon_hugepages=0 00:03:04.456 11:26:35 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:04.456 11:26:35 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:04.456 11:26:35 -- setup/hugepages.sh@110 -- # get_meminfo 
HugePages_Total 00:03:04.456 11:26:35 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:04.456 11:26:35 -- setup/common.sh@18 -- # local node= 00:03:04.456 11:26:35 -- setup/common.sh@19 -- # local var val 00:03:04.456 11:26:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:04.456 11:26:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.456 11:26:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:04.456 11:26:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:04.456 11:26:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.456 11:26:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.456 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.456 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.457 11:26:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73199172 kB' 'MemAvailable: 77696544 kB' 'Buffers: 3728 kB' 'Cached: 14414332 kB' 'SwapCached: 0 kB' 'Active: 10511632 kB' 'Inactive: 4420212 kB' 'Active(anon): 9899220 kB' 'Inactive(anon): 0 kB' 'Active(file): 612412 kB' 'Inactive(file): 4420212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517092 kB' 'Mapped: 184332 kB' 'Shmem: 9385436 kB' 'KReclaimable: 236780 kB' 'Slab: 665528 kB' 'SReclaimable: 236780 kB' 'SUnreclaim: 428748 kB' 'KernelStack: 16576 kB' 'PageTables: 7980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486808 kB' 'Committed_AS: 11175056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205684 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1844692 kB' 'DirectMap2M: 25094144 kB' 'DirectMap1G: 74448896 kB' 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.457 11:26:35 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.457 11:26:35 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.457 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.457 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.458 11:26:35 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.458 11:26:35 -- setup/common.sh@33 -- # echo 1024 00:03:04.458 11:26:35 -- setup/common.sh@33 -- # return 0 00:03:04.458 11:26:35 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:04.458 11:26:35 -- setup/hugepages.sh@112 -- # get_nodes 00:03:04.458 11:26:35 -- setup/hugepages.sh@27 -- # local node 00:03:04.458 11:26:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:04.458 11:26:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:04.458 11:26:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:04.458 11:26:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:04.458 11:26:35 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:04.458 11:26:35 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:04.458 11:26:35 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:04.458 11:26:35 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:04.458 11:26:35 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:04.458 11:26:35 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:04.458 11:26:35 -- setup/common.sh@18 -- # local node=0 00:03:04.458 11:26:35 -- setup/common.sh@19 -- # local var val 00:03:04.458 11:26:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:04.458 11:26:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.458 11:26:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:04.458 11:26:35 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:04.458 11:26:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.458 11:26:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.458 11:26:35 -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 48116988 kB' 'MemFree: 34570320 kB' 'MemUsed: 13546668 kB' 'SwapCached: 0 kB' 'Active: 7600156 kB' 'Inactive: 3543144 kB' 'Active(anon): 7407860 kB' 'Inactive(anon): 0 kB' 'Active(file): 192296 kB' 'Inactive(file): 3543144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11005236 kB' 'Mapped: 126468 kB' 'AnonPages: 141220 kB' 'Shmem: 7269796 kB' 'KernelStack: 9944 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132320 kB' 'Slab: 408004 kB' 'SReclaimable: 132320 kB' 'SUnreclaim: 275684 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 
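The wall of "continue" records above is bash xtrace from setup/common.sh's get_meminfo helper: it snapshots a meminfo file, re-reads each "Key: value" pair with IFS=': ', and compares the key against the requested field, skipping every non-match — the backslash-escaped \H\u\g\e\P\a\g\e\s\_\S\u\r\p strings are simply how xtrace prints the literal match pattern. A minimal sketch of that lookup (simplified; the real helper also snapshots via mapfile and handles the per-node files shown below):

get_meminfo() {   # usage: get_meminfo HugePages_Total  -> prints the value column
    local get=$1 var val _
    while IFS=': ' read -r var val _; do      # "Key:  value [kB]" -> key / value
        if [[ $var == "$get" ]]; then
            echo "$val"                       # the "echo 0" / "echo 1024" in the trace
            return 0
        fi
        # every other key falls through: the long runs of "continue" above
    done < /proc/meminfo
    return 1                                  # key not present
}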
00:03:04.458 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.458 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.458 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.459 11:26:35 -- 
setup/common.sh@32 -- # continue 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 
00:03:04.459 11:26:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.459 11:26:35 -- setup/common.sh@33 -- # echo 0 00:03:04.459 11:26:35 -- setup/common.sh@33 -- # return 0 00:03:04.459 11:26:35 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:04.459 11:26:35 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:04.459 11:26:35 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:04.459 11:26:35 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:04.459 11:26:35 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:04.459 11:26:35 -- setup/common.sh@18 -- # local node=1 00:03:04.459 11:26:35 -- setup/common.sh@19 -- # local var val 00:03:04.459 11:26:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:04.459 11:26:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.459 11:26:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:04.459 11:26:35 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:04.459 11:26:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.459 11:26:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.459 11:26:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44176572 kB' 'MemFree: 38628884 kB' 'MemUsed: 5547688 kB' 'SwapCached: 0 kB' 'Active: 2911576 kB' 'Inactive: 877068 kB' 'Active(anon): 2491460 kB' 'Inactive(anon): 0 kB' 'Active(file): 420116 kB' 'Inactive(file): 877068 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3412844 kB' 'Mapped: 57864 kB' 'AnonPages: 375960 kB' 'Shmem: 2115660 kB' 'KernelStack: 6632 kB' 'PageTables: 3880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104460 kB' 'Slab: 257524 kB' 'SReclaimable: 104460 kB' 'SUnreclaim: 153064 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 
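When get_meminfo is given a node number, the same scan runs against /sys/devices/system/node/nodeN/meminfo instead — hence the [[ -e .../node1/meminfo ]] and mem_f= records just above — and, because those lines carry a "Node N " prefix, the snapshot is cleaned with the extglob substitution mem=("${mem[@]#Node +([0-9]) }") before the key/value split. A short demonstration of that strip plus the per-node accumulation hugepages.sh is performing here (a sketch assuming a node-aware form of the helper; resv is 0 in this run):

shopt -s extglob
mapfile -t mem < /sys/devices/system/node/node1/meminfo   # "Node 1 HugePages_Surp: 0"
mem=("${mem[@]#Node +([0-9]) }")                          # -> "HugePages_Surp: 0"

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))                                     # @116 in the trace
    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))    # @117, adds 0
done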
00:03:04.459 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.459 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.459 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.460 11:26:35 -- 
setup/common.sh@32 -- # continue 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.460 11:26:35 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # continue 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.460 11:26:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.460 11:26:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.460 11:26:35 -- setup/common.sh@33 -- # echo 0 00:03:04.460 11:26:35 -- setup/common.sh@33 -- # return 0 00:03:04.460 11:26:35 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:04.460 11:26:35 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:04.460 11:26:35 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:04.460 11:26:35 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:04.460 11:26:35 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:04.460 node0=512 expecting 512 00:03:04.460 11:26:35 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:04.460 11:26:35 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:04.460 11:26:35 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:04.460 11:26:35 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:04.460 node1=512 expecting 512 00:03:04.460 11:26:35 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:04.460 00:03:04.460 real 0m2.800s 00:03:04.460 user 0m0.984s 00:03:04.460 sys 0m1.842s 00:03:04.460 11:26:35 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:04.460 11:26:35 -- common/autotest_common.sh@10 -- # set +x 00:03:04.460 ************************************ 00:03:04.460 END TEST per_node_1G_alloc 00:03:04.460 ************************************ 00:03:04.719 11:26:35 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:04.719 
11:26:35 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:04.719 11:26:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:04.719 11:26:35 -- common/autotest_common.sh@10 -- # set +x 00:03:04.719 ************************************ 00:03:04.719 START TEST even_2G_alloc 00:03:04.719 ************************************ 00:03:04.719 11:26:35 -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:03:04.719 11:26:35 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:04.719 11:26:35 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:04.719 11:26:35 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:04.719 11:26:35 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:04.719 11:26:35 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:04.719 11:26:35 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:04.719 11:26:35 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:04.719 11:26:35 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:04.719 11:26:35 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:04.719 11:26:35 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:04.719 11:26:35 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:04.719 11:26:35 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:04.719 11:26:35 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:04.719 11:26:35 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:04.719 11:26:35 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:04.719 11:26:35 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:04.719 11:26:35 -- setup/hugepages.sh@83 -- # : 512 00:03:04.719 11:26:35 -- setup/hugepages.sh@84 -- # : 1 00:03:04.719 11:26:35 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:04.719 11:26:35 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:04.719 11:26:35 -- setup/hugepages.sh@83 -- # : 0 00:03:04.719 11:26:35 -- setup/hugepages.sh@84 -- # : 0 00:03:04.719 11:26:35 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:04.719 11:26:35 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:04.719 11:26:35 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:04.719 11:26:35 -- setup/hugepages.sh@153 -- # setup output 00:03:04.719 11:26:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:04.719 11:26:35 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:07.260 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:07.260 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:07.260 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:07.260 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:07.260 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:07.260 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:07.260 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:07.260 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:07.260 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:07.260 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:07.260 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:07.260 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:07.260 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:07.260 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:07.260 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:07.260 0000:80:04.1 (8086 2021): 
Already using the vfio-pci driver 00:03:07.260 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:07.260 11:26:37 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:07.260 11:26:37 -- setup/hugepages.sh@89 -- # local node 00:03:07.260 11:26:37 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:07.260 11:26:37 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:07.260 11:26:37 -- setup/hugepages.sh@92 -- # local surp 00:03:07.260 11:26:37 -- setup/hugepages.sh@93 -- # local resv 00:03:07.260 11:26:37 -- setup/hugepages.sh@94 -- # local anon 00:03:07.260 11:26:37 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:07.260 11:26:37 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:07.260 11:26:37 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:07.260 11:26:37 -- setup/common.sh@18 -- # local node= 00:03:07.260 11:26:37 -- setup/common.sh@19 -- # local var val 00:03:07.260 11:26:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:07.260 11:26:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.260 11:26:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.260 11:26:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.260 11:26:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.260 11:26:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.260 11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.260 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.260 11:26:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73193780 kB' 'MemAvailable: 77691152 kB' 'Buffers: 3728 kB' 'Cached: 14414408 kB' 'SwapCached: 0 kB' 'Active: 10510620 kB' 'Inactive: 4420212 kB' 'Active(anon): 9898208 kB' 'Inactive(anon): 0 kB' 'Active(file): 612412 kB' 'Inactive(file): 4420212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515904 kB' 'Mapped: 183596 kB' 'Shmem: 9385512 kB' 'KReclaimable: 236780 kB' 'Slab: 665140 kB' 'SReclaimable: 236780 kB' 'SUnreclaim: 428360 kB' 'KernelStack: 16624 kB' 'PageTables: 8060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486808 kB' 'Committed_AS: 11168456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205732 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1844692 kB' 'DirectMap2M: 25094144 kB' 'DirectMap1G: 74448896 kB' 00:03:07.260 11:26:37 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.260 11:26:37 -- setup/common.sh@32 -- # continue 00:03:07.260 11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.260 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.260 11:26:37 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.260 11:26:37 -- setup/common.sh@32 -- # continue 00:03:07.260 11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.260 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.260 11:26:37 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.260 11:26:37 -- setup/common.sh@32 -- # continue 
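The even_2G_alloc test entered above requests 2097152 kB of hugepages with HUGE_EVEN_ALLOC=yes: at the default 2048 kB hugepage size that is 1024 pages, split evenly across the two NUMA nodes, which is exactly what the paired nodes_test[...]=512 assignments in the trace record. The division itself is not in the xtrace, but the numbers reduce to this arithmetic:

size_kb=2097152 hugepagesize_kb=2048 no_nodes=2
nr_hugepages=$(( size_kb / hugepagesize_kb ))   # 1024  -> NRHUGE=1024
per_node=$(( nr_hugepages / no_nodes ))         # 512 on node0, 512 on node1
echo "nr_hugepages=$nr_hugepages per_node=$per_node"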
[... xtrace elided: setup/common.sh@31-32 walks every remaining /proc/meminfo key (all listed in the printf dump above), issuing "continue" for each one that is not AnonHugePages ...]
00:03:07.261 11:26:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.261 11:26:37 -- setup/common.sh@33 -- # echo 0 00:03:07.261 11:26:37 -- setup/common.sh@33 -- #
return 0 00:03:07.261 11:26:37 -- setup/hugepages.sh@97 -- # anon=0 00:03:07.261 11:26:37 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:07.261 11:26:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:07.261 11:26:37 -- setup/common.sh@18 -- # local node= 00:03:07.261 11:26:37 -- setup/common.sh@19 -- # local var val 00:03:07.261 11:26:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:07.261 11:26:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.261 11:26:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.261 11:26:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.261 11:26:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.261 11:26:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.261 11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.261 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.261 11:26:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73193772 kB' 'MemAvailable: 77691144 kB' 'Buffers: 3728 kB' 'Cached: 14414420 kB' 'SwapCached: 0 kB' 'Active: 10509852 kB' 'Inactive: 4420212 kB' 'Active(anon): 9897440 kB' 'Inactive(anon): 0 kB' 'Active(file): 612412 kB' 'Inactive(file): 4420212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515152 kB' 'Mapped: 183576 kB' 'Shmem: 9385524 kB' 'KReclaimable: 236780 kB' 'Slab: 665124 kB' 'SReclaimable: 236780 kB' 'SUnreclaim: 428344 kB' 'KernelStack: 16544 kB' 'PageTables: 7812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486808 kB' 'Committed_AS: 11168468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205700 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1844692 kB' 'DirectMap2M: 25094144 kB' 'DirectMap1G: 74448896 kB' 00:03:07.261 11:26:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.261 11:26:37 -- setup/common.sh@32 -- # continue 00:03:07.261 11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.261 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.261 11:26:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.261 11:26:37 -- setup/common.sh@32 -- # continue 00:03:07.261 11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.261 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.261 11:26:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.261 11:26:37 -- setup/common.sh@32 -- # continue 00:03:07.261 11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.261 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.261 11:26:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.261 11:26:37 -- setup/common.sh@32 -- # continue 00:03:07.261 11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.261 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.261 11:26:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.261 11:26:37 -- setup/common.sh@32 -- # continue 00:03:07.261 11:26:37 -- setup/common.sh@31 
[... xtrace elided: the same per-key scan repeats for HugePages_Surp, skipping SwapCached through ShmemPmdMapped ...]
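verify_nr_hugepages collects four of these counters in turn -- anon (AnonHugePages), surp (HugePages_Surp), resv (HugePages_Rsvd) and HugePages_Total -- each via its own get_meminfo pass over the full key list. Outside the harness, a one-pass equivalent (our convenience one-liner, not part of the SPDK scripts) is:

    awk '/^(AnonHugePages|HugePages_(Surp|Rsvd|Total)):/ {print $1, $2}' /proc/meminfo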
11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.262 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.262 11:26:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.262 11:26:37 -- setup/common.sh@32 -- # continue 00:03:07.262 11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.262 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.262 11:26:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.262 11:26:37 -- setup/common.sh@32 -- # continue 00:03:07.263 11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.263 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.263 11:26:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.263 11:26:37 -- setup/common.sh@32 -- # continue 00:03:07.263 11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.263 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.263 11:26:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.263 11:26:37 -- setup/common.sh@32 -- # continue 00:03:07.263 11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.263 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.263 11:26:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.263 11:26:37 -- setup/common.sh@32 -- # continue 00:03:07.263 11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.263 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.263 11:26:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.263 11:26:37 -- setup/common.sh@32 -- # continue 00:03:07.263 11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.263 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.263 11:26:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.263 11:26:37 -- setup/common.sh@32 -- # continue 00:03:07.263 11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.263 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.263 11:26:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.263 11:26:37 -- setup/common.sh@32 -- # continue 00:03:07.263 11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.263 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.263 11:26:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.263 11:26:37 -- setup/common.sh@33 -- # echo 0 00:03:07.263 11:26:37 -- setup/common.sh@33 -- # return 0 00:03:07.263 11:26:37 -- setup/hugepages.sh@99 -- # surp=0 00:03:07.263 11:26:37 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:07.263 11:26:37 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:07.263 11:26:37 -- setup/common.sh@18 -- # local node= 00:03:07.263 11:26:37 -- setup/common.sh@19 -- # local var val 00:03:07.263 11:26:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:07.263 11:26:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.263 11:26:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.263 11:26:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.263 11:26:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.263 11:26:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.263 11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.263 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.263 11:26:37 -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 92293560 kB' 'MemFree: 73193520 kB' 'MemAvailable: 77690892 kB' 'Buffers: 3728 kB' 'Cached: 14414424 kB' 'SwapCached: 0 kB' 'Active: 10510312 kB' 'Inactive: 4420212 kB' 'Active(anon): 9897900 kB' 'Inactive(anon): 0 kB' 'Active(file): 612412 kB' 'Inactive(file): 4420212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515620 kB' 'Mapped: 183576 kB' 'Shmem: 9385528 kB' 'KReclaimable: 236780 kB' 'Slab: 665124 kB' 'SReclaimable: 236780 kB' 'SUnreclaim: 428344 kB' 'KernelStack: 16560 kB' 'PageTables: 7864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486808 kB' 'Committed_AS: 11168276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205716 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1844692 kB' 'DirectMap2M: 25094144 kB' 'DirectMap1G: 74448896 kB' 00:03:07.263 11:26:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.263 11:26:37 -- setup/common.sh@32 -- # continue 00:03:07.263 11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.263 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.263 11:26:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.263 11:26:37 -- setup/common.sh@32 -- # continue 00:03:07.263 11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.263 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.263 11:26:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.263 11:26:37 -- setup/common.sh@32 -- # continue 00:03:07.263 11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.263 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.263 11:26:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.263 11:26:37 -- setup/common.sh@32 -- # continue 00:03:07.263 11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.263 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.263 11:26:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.263 11:26:37 -- setup/common.sh@32 -- # continue 00:03:07.263 11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.263 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.263 11:26:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.263 11:26:37 -- setup/common.sh@32 -- # continue 00:03:07.263 11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.263 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.263 11:26:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.263 11:26:37 -- setup/common.sh@32 -- # continue 00:03:07.263 11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.263 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.263 11:26:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.263 11:26:37 -- setup/common.sh@32 -- # continue 00:03:07.263 11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.263 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.263 11:26:37 -- setup/common.sh@32 -- # [[ 
Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.263 11:26:37 -- setup/common.sh@32 -- # continue
[... xtrace elided: the per-key scan repeats for HugePages_Rsvd, skipping Inactive(anon) through CmaTotal ...]
-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.264 11:26:37 -- setup/common.sh@32 -- # continue 00:03:07.264 11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.264 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.264 11:26:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.264 11:26:37 -- setup/common.sh@32 -- # continue 00:03:07.264 11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.264 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.264 11:26:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.264 11:26:37 -- setup/common.sh@32 -- # continue 00:03:07.264 11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.264 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.264 11:26:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.264 11:26:37 -- setup/common.sh@32 -- # continue 00:03:07.264 11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.264 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.264 11:26:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.264 11:26:37 -- setup/common.sh@33 -- # echo 0 00:03:07.264 11:26:37 -- setup/common.sh@33 -- # return 0 00:03:07.264 11:26:37 -- setup/hugepages.sh@100 -- # resv=0 00:03:07.264 11:26:37 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:07.264 nr_hugepages=1024 00:03:07.264 11:26:37 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:07.264 resv_hugepages=0 00:03:07.264 11:26:37 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:07.264 surplus_hugepages=0 00:03:07.264 11:26:37 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:07.264 anon_hugepages=0 00:03:07.264 11:26:37 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:07.264 11:26:37 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:07.264 11:26:37 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:07.264 11:26:37 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:07.264 11:26:37 -- setup/common.sh@18 -- # local node= 00:03:07.264 11:26:37 -- setup/common.sh@19 -- # local var val 00:03:07.264 11:26:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:07.264 11:26:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.264 11:26:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.264 11:26:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.264 11:26:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.264 11:26:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.264 11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.264 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.264 11:26:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73194916 kB' 'MemAvailable: 77692288 kB' 'Buffers: 3728 kB' 'Cached: 14414436 kB' 'SwapCached: 0 kB' 'Active: 10510324 kB' 'Inactive: 4420212 kB' 'Active(anon): 9897912 kB' 'Inactive(anon): 0 kB' 'Active(file): 612412 kB' 'Inactive(file): 4420212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515680 kB' 'Mapped: 183576 kB' 'Shmem: 9385540 kB' 'KReclaimable: 236780 kB' 'Slab: 665116 kB' 'SReclaimable: 236780 kB' 'SUnreclaim: 428336 kB' 'KernelStack: 16544 kB' 'PageTables: 7828 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486808 kB' 'Committed_AS: 11168496 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205684 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1844692 kB' 'DirectMap2M: 25094144 kB' 'DirectMap1G: 74448896 kB' 00:03:07.264 11:26:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.264 11:26:37 -- setup/common.sh@32 -- # continue 00:03:07.264 11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.264 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.264 11:26:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.264 11:26:37 -- setup/common.sh@32 -- # continue 00:03:07.264 11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.264 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.264 11:26:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.264 11:26:37 -- setup/common.sh@32 -- # continue 00:03:07.264 11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.264 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.264 11:26:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.264 11:26:37 -- setup/common.sh@32 -- # continue 00:03:07.264 11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.264 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.264 11:26:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.264 11:26:37 -- setup/common.sh@32 -- # continue 00:03:07.264 11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.264 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.264 11:26:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.265 11:26:37 -- setup/common.sh@32 -- # continue 00:03:07.265 11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.265 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.265 11:26:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.265 11:26:37 -- setup/common.sh@32 -- # continue 00:03:07.265 11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.265 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.265 11:26:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.265 11:26:37 -- setup/common.sh@32 -- # continue 00:03:07.265 11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.265 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.265 11:26:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.265 11:26:37 -- setup/common.sh@32 -- # continue 00:03:07.265 11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.265 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.265 11:26:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.265 11:26:37 -- setup/common.sh@32 -- # continue 00:03:07.265 11:26:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.265 11:26:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.265 11:26:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.265 11:26:37 
-- setup/common.sh@32 -- # continue
[... xtrace elided: the per-key scan repeats for HugePages_Total, skipping Inactive(file) through Unaccepted ...]
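With anon=0, surp=0 and resv=0 already read back, the HugePages_Total lookup that follows is the payoff: setup/hugepages.sh@107-110 asserts that the kernel really provisioned exactly the requested pool. The invariant, in sketch form (ours, reusing the get_meminfo sketch above):

    # 1024 == nr_hugepages + surp + resv, i.e. 1024 == 1024 + 0 + 0 here
    total=$(get_meminfo HugePages_Total)
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2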
00:03:07.266 11:26:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:07.266 11:26:38 -- setup/common.sh@33 -- # echo 1024
00:03:07.266 11:26:38 -- setup/common.sh@33 -- # return 0
00:03:07.266 11:26:38 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:07.266 11:26:38 -- setup/hugepages.sh@112 -- # get_nodes
00:03:07.266 11:26:38 -- setup/hugepages.sh@27 -- # local node
00:03:07.266 11:26:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:07.266 11:26:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:07.266 11:26:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:07.266 11:26:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:07.266 11:26:38 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:07.266 11:26:38 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:07.266 11:26:38 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:07.266 11:26:38 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:07.266 11:26:38 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:07.266 11:26:38 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:07.266 11:26:38 -- setup/common.sh@18 -- # local node=0
00:03:07.266 11:26:38 -- setup/common.sh@19 -- # local var val
00:03:07.266 11:26:38 -- setup/common.sh@20 -- # local mem_f mem
00:03:07.266 11:26:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:07.266 11:26:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:07.266 11:26:38 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:07.266 11:26:38 -- setup/common.sh@28 -- # mapfile -t mem
00:03:07.266 11:26:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:07.266 11:26:38 -- setup/common.sh@31 -- # IFS=': '
00:03:07.266 11:26:38 -- setup/common.sh@31 -- # read -r var val _
00:03:07.266 11:26:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116988 kB' 'MemFree: 34574640 kB' 'MemUsed: 13542348 kB' 'SwapCached: 0 kB' 'Active: 7599284 kB' 'Inactive: 3543144 kB' 'Active(anon): 7406988 kB' 'Inactive(anon): 0 kB' 'Active(file): 192296 kB' 'Inactive(file): 3543144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11005300 kB' 'Mapped: 125700 kB' 'AnonPages: 140316 kB' 'Shmem: 7269860 kB' 'KernelStack: 9912 kB' 'PageTables: 4024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132320 kB' 'Slab: 407720 kB' 'SReclaimable: 132320 kB' 'SUnreclaim: 275400 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:07.266 11:26:38 -- setup/common.sh@32 -- # [ ... identical compare/continue trace over the node0 meminfo keys; only HugePages_Surp matches ... ]
00:03:07.527 11:26:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:07.527 11:26:38 -- setup/common.sh@33 -- # echo 0
00:03:07.527 11:26:38 -- setup/common.sh@33 -- # return 0
00:03:07.527 11:26:38 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
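For readers following the trace: the get_meminfo helper exercised above scans a single meminfo file for one key, switching to the per-node file under /sys/devices/system/node/nodeN/meminfo when a node argument is given. A minimal stand-alone sketch of the same pattern (an illustrative reimplementation, not the exact setup/common.sh source):

#!/usr/bin/env bash
# Sketch: fetch one meminfo value, optionally for a specific NUMA node.
shopt -s extglob   # needed for the "Node N " prefix strip below

get_meminfo() {
	local get=$1 node=$2 line var val _
	local mem_f=/proc/meminfo
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	while read -r line; do
		line=${line#Node +([0-9]) }             # per-node rows are prefixed "Node N "
		IFS=': ' read -r var val _ <<<"$line"   # split "Key:  value kB" into key/value
		if [[ $var == "$get" ]]; then
			echo "$val"
			return 0
		fi
	done <"$mem_f"
	return 1
}

get_meminfo HugePages_Surp 0   # the query traced above; prints 0 on this node

The loop mirrors what the trace shows: every non-matching key costs one compare-and-continue, which is why the raw xtrace output is so repetitive.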
00:03:07.526 11:26:38 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:07.526 11:26:38 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:07.526 11:26:38 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:07.526 11:26:38 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:07.526 11:26:38 -- setup/common.sh@18 -- # local node=1
00:03:07.526 11:26:38 -- setup/common.sh@19 -- # local var val
00:03:07.526 11:26:38 -- setup/common.sh@20 -- # local mem_f mem
00:03:07.526 11:26:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:07.526 11:26:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:07.526 11:26:38 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:07.526 11:26:38 -- setup/common.sh@28 -- # mapfile -t mem
00:03:07.526 11:26:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:07.526 11:26:38 -- setup/common.sh@31 -- # IFS=': '
00:03:07.526 11:26:38 -- setup/common.sh@31 -- # read -r var val _
00:03:07.526 11:26:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44176572 kB' 'MemFree: 38623352 kB' 'MemUsed: 5553220 kB' 'SwapCached: 0 kB' 'Active: 2910896 kB' 'Inactive: 877068 kB' 'Active(anon): 2490780 kB' 'Inactive(anon): 0 kB' 'Active(file): 420116 kB' 'Inactive(file): 877068 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3412864 kB' 'Mapped: 57876 kB' 'AnonPages: 375200 kB' 'Shmem: 2115680 kB' 'KernelStack: 6632 kB' 'PageTables: 3796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104460 kB' 'Slab: 257396 kB' 'SReclaimable: 104460 kB' 'SUnreclaim: 152936 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:07.526 11:26:38 -- setup/common.sh@32 -- # [ ... identical compare/continue trace over the node1 meminfo keys; only HugePages_Surp matches ... ]
00:03:07.527 11:26:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:07.527 11:26:38 -- setup/common.sh@33 -- # echo 0
00:03:07.527 11:26:38 -- setup/common.sh@33 -- # return 0
00:03:07.527 11:26:38 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:07.527 11:26:38 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:07.527 11:26:38 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:07.527 11:26:38 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:07.527 11:26:38 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:07.527 node0=512 expecting 512
00:03:07.527 11:26:38 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:07.527 11:26:38 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:07.527 11:26:38 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:07.527 11:26:38 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:07.527 node1=512 expecting 512
00:03:07.527 11:26:38 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:07.527 
00:03:07.527 real 0m2.766s
00:03:07.527 user 0m0.944s
00:03:07.527 sys 0m1.843s
00:03:07.527 11:26:38 -- common/autotest_common.sh@1122 -- # xtrace_disable
00:03:07.527 11:26:38 -- common/autotest_common.sh@10 -- # set +x
00:03:07.527 ************************************
00:03:07.527 END TEST even_2G_alloc
00:03:07.527 ************************************
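The pass condition just printed ('node0=512 expecting 512', 'node1=512 expecting 512') is simply an even split of the 1024-page pool across both NUMA nodes. A stand-alone check of the same property, reading the standard sysfs hugepage counters directly (the expected count is hard-coded here for illustration; it is not how the test script itself is structured):

#!/usr/bin/env bash
# Sketch: verify an even 2 MB hugepage split across NUMA nodes.
expected=512
for d in /sys/devices/system/node/node*/hugepages/hugepages-2048kB; do
	node=${d#/sys/devices/system/node/node}
	node=${node%%/*}                 # keep just the node number
	got=$(<"$d/nr_hugepages")        # pages currently assigned to this node
	echo "node$node=$got expecting $expected"
	(( got == expected )) || exit 1
done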
00:03:07.527 11:26:38 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:07.527 11:26:38 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:07.527 11:26:38 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:07.527 11:26:38 -- common/autotest_common.sh@10 -- # set +x
00:03:07.527 ************************************
00:03:07.527 START TEST odd_alloc
00:03:07.527 ************************************
00:03:07.527 11:26:38 -- common/autotest_common.sh@1121 -- # odd_alloc
00:03:07.527 11:26:38 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:07.527 11:26:38 -- setup/hugepages.sh@49 -- # local size=2098176
00:03:07.527 11:26:38 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:07.527 11:26:38 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:07.527 11:26:38 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:07.527 11:26:38 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:07.527 11:26:38 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:07.527 11:26:38 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:07.527 11:26:38 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:07.527 11:26:38 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:07.527 11:26:38 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:07.527 11:26:38 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:07.527 11:26:38 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:07.527 11:26:38 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:07.527 11:26:38 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:07.528 11:26:38 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:07.528 11:26:38 -- setup/hugepages.sh@83 -- # : 513
00:03:07.528 11:26:38 -- setup/hugepages.sh@84 -- # : 1
00:03:07.528 11:26:38 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:07.528 11:26:38 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:07.528 11:26:38 -- setup/hugepages.sh@83 -- # : 0
00:03:07.528 11:26:38 -- setup/hugepages.sh@84 -- # : 0
00:03:07.528 11:26:38 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:07.528 11:26:38 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:07.528 11:26:38 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:07.528 11:26:38 -- setup/hugepages.sh@160 -- # setup output
00:03:07.528 11:26:38 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:07.528 11:26:38 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:03:10.071 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:10.071 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:10.071 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:10.071 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:10.071 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:10.071 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:10.071 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:10.071 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:10.071 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:10.071 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:10.071 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:10.071 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:10.071 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:10.071 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:10.071 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:10.071 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:10.071 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
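The hugepages.sh@82/@83/@84 lines a little further up trace how the requested 2098176 kB (1025 pages of 2048 kB, rounded up) gets divided over two nodes, walking from the last node down and splitting whatever remains. A sketch of that divide-the-remainder loop, with variable names chosen to echo the trace (the real loop lives in setup/hugepages.sh and may differ in detail):

#!/usr/bin/env bash
# Sketch: split an odd page count across nodes the way the trace shows
# (node1 is assigned 512 first, node0 then takes the remaining 513).
remaining=1025 left=2
declare -a nodes_test
for (( n = left - 1; n >= 0; n-- )); do
	nodes_test[n]=$(( remaining / left ))   # node1: 1025 / 2 = 512
	(( remaining -= nodes_test[n] ))        # 513 pages still to place
	(( left-- ))                            # node0: 513 / 1 = 513
done
echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=513 node1=512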
00:03:10.071 11:26:40 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:10.071 11:26:40 -- setup/hugepages.sh@89 -- # local node
00:03:10.071 11:26:40 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:10.071 11:26:40 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:10.071 11:26:40 -- setup/hugepages.sh@92 -- # local surp
00:03:10.071 11:26:40 -- setup/hugepages.sh@93 -- # local resv
00:03:10.071 11:26:40 -- setup/hugepages.sh@94 -- # local anon
00:03:10.071 11:26:40 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:10.071 11:26:40 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:10.071 11:26:40 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:10.071 11:26:40 -- setup/common.sh@18 -- # local node=
00:03:10.071 11:26:40 -- setup/common.sh@19 -- # local var val
00:03:10.071 11:26:40 -- setup/common.sh@20 -- # local mem_f mem
00:03:10.071 11:26:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:10.071 11:26:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:10.071 11:26:40 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:10.071 11:26:40 -- setup/common.sh@28 -- # mapfile -t mem
00:03:10.071 11:26:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:10.071 11:26:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73186000 kB' 'MemAvailable: 77683372 kB' 'Buffers: 3728 kB' 'Cached: 14414516 kB' 'SwapCached: 0 kB' 'Active: 10511856 kB' 'Inactive: 4420212 kB' 'Active(anon): 9899444 kB' 'Inactive(anon): 0 kB' 'Active(file): 612412 kB' 'Inactive(file): 4420212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517052 kB' 'Mapped: 183540 kB' 'Shmem: 9385620 kB' 'KReclaimable: 236780 kB' 'Slab: 665364 kB' 'SReclaimable: 236780 kB' 'SUnreclaim: 428584 kB' 'KernelStack: 17008 kB' 'PageTables: 9408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485784 kB' 'Committed_AS: 11168952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205684 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1844692 kB' 'DirectMap2M: 25094144 kB' 'DirectMap1G: 74448896 kB'
00:03:10.072 11:26:40 -- setup/common.sh@32 -- # [ ... identical compare/continue trace over the meminfo keys; only AnonHugePages matches ... ]
00:03:10.072 11:26:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:10.072 11:26:40 -- setup/common.sh@33 -- # echo 0
00:03:10.072 11:26:40 -- setup/common.sh@33 -- # return 0
00:03:10.072 11:26:40 -- setup/hugepages.sh@97 -- # anon=0
00:03:10.072 11:26:40 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:10.073 11:26:40 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:10.073 11:26:40 -- setup/common.sh@18 -- # local node=
00:03:10.073 11:26:40 -- setup/common.sh@19 -- # local var val
00:03:10.073 11:26:40 -- setup/common.sh@20 -- # local mem_f mem
00:03:10.073 11:26:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:10.073 11:26:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:10.073 11:26:40 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:10.073 11:26:40 -- setup/common.sh@28 -- # mapfile -t mem
00:03:10.073 11:26:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:10.073 11:26:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73186508 kB' 'MemAvailable: 77683880 kB' 'Buffers: 3728 kB' 'Cached: 14414520 kB' 'SwapCached: 0 kB' 'Active: 10509572 kB' 'Inactive: 4420212 kB' 'Active(anon): 9897160 kB' 'Inactive(anon): 0 kB' 'Active(file): 612412 kB' 'Inactive(file): 4420212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514812 kB' 'Mapped: 183588 kB' 'Shmem: 9385624 kB' 'KReclaimable: 236780 kB' 'Slab: 665256 kB' 'SReclaimable: 236780 kB' 'SUnreclaim: 428476 kB' 'KernelStack: 16544 kB' 'PageTables: 7788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485784 kB' 'Committed_AS: 11168964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205620 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1844692 kB' 'DirectMap2M: 25094144 kB' 'DirectMap1G: 74448896 kB'
00:03:10.073 11:26:40 -- setup/common.sh@32 -- # [ ... identical compare/continue trace over the meminfo keys; only HugePages_Surp matches ... ]
00:03:10.074 11:26:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:10.074 11:26:40 -- setup/common.sh@33 -- # echo 0
00:03:10.074 11:26:40 -- setup/common.sh@33 -- # return 0
00:03:10.074 11:26:40 -- setup/hugepages.sh@99 -- # surp=0
00:03:10.074 11:26:40 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:10.074 11:26:40 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:10.074 11:26:40 -- setup/common.sh@18 -- # local node=
00:03:10.074 11:26:40 -- setup/common.sh@19 -- # local var val
00:03:10.074 11:26:40 -- setup/common.sh@20 -- # local mem_f mem
00:03:10.074 11:26:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:10.074 11:26:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:10.074 11:26:40 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:10.074 11:26:40 -- setup/common.sh@28 -- # mapfile -t mem
00:03:10.074 11:26:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:10.074 11:26:40 -- setup/common.sh@31 -- # IFS=': '
00:03:10.074 11:26:40 -- setup/common.sh@31 -- # read -r var val _
00:03:10.074 11:26:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73186924 kB' 'MemAvailable: 77684296 kB' 'Buffers: 3728 kB' 'Cached: 14414532 kB' 'SwapCached: 0 kB' 'Active: 10509460 kB' 'Inactive: 4420212 kB' 'Active(anon): 9897048 kB' 'Inactive(anon): 0 kB' 'Active(file): 612412 kB' 'Inactive(file): 4420212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514688 kB' 'Mapped: 183588 kB' 'Shmem: 9385636 kB' 'KReclaimable: 236780 kB' 'Slab: 665192 kB' 'SReclaimable: 236780 kB' 'SUnreclaim: 428412 kB' 'KernelStack: 16528 kB' 'PageTables: 7764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485784 kB' 'Committed_AS: 11168980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205620 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1844692 kB' 'DirectMap2M: 25094144 kB' 'DirectMap1G: 74448896 kB'
val _ 00:03:10.074 11:26:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.074 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.074 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.074 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.074 11:26:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.074 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.074 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.074 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.074 11:26:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.074 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.074 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.074 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.074 11:26:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.074 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.074 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.074 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.074 11:26:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.074 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.074 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.074 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.074 11:26:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.074 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.074 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.074 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.074 11:26:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.074 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.074 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.074 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.074 11:26:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.074 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.074 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.074 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.074 11:26:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.074 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.074 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.074 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.074 11:26:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.074 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.074 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.074 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.074 11:26:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.074 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.074 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.074 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.074 11:26:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.074 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.074 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.074 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.074 11:26:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.074 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.074 11:26:40 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:10.074 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.074 11:26:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.074 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.074 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.074 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.074 11:26:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.074 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.074 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.074 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.074 11:26:40 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.074 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.074 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.074 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.074 11:26:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.075 
11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 
00:03:10.075 11:26:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # continue 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.075 11:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.075 11:26:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.075 11:26:40 -- setup/common.sh@33 -- # echo 0 00:03:10.075 11:26:40 -- setup/common.sh@33 -- # return 0 00:03:10.075 11:26:40 -- setup/hugepages.sh@100 -- # resv=0 00:03:10.075 11:26:40 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:10.075 nr_hugepages=1025 00:03:10.075 11:26:40 -- setup/hugepages.sh@103 -- # 
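The xtrace above is setup/common.sh's get_meminfo walking /proc/meminfo line by line: split each line on ': ', compare the key against the requested field, and echo the value on the first hit. A minimal sketch of that loop, reconstructed only from the trace (the actual function body in setup/common.sh may differ in detail):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern seen in the trace

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo
        local -a mem
        # A node argument switches to the node-local meminfo file.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix each line with "Node <n> "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            # First matching key wins; the unit ("kB"), if any, lands in _.
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

In this run the scan returns 0 for both HugePages_Surp and HugePages_Rsvd, matching the surp=0 and resv=0 assignments above.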
00:03:10.075 11:26:40 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:10.075 surplus_hugepages=0
00:03:10.075 11:26:40 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:10.075 anon_hugepages=0
00:03:10.075 11:26:40 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:10.075 11:26:40 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:03:10.075 11:26:40 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:10.075 11:26:40 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:10.075 11:26:40 -- setup/common.sh@18 -- # local node=
00:03:10.075 11:26:40 -- setup/common.sh@19 -- # local var val
00:03:10.075 11:26:40 -- setup/common.sh@20 -- # local mem_f mem
00:03:10.075 11:26:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:10.075 11:26:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:10.075 11:26:40 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:10.075 11:26:40 -- setup/common.sh@28 -- # mapfile -t mem
00:03:10.075 11:26:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:10.075 11:26:40 -- setup/common.sh@31 -- # IFS=': '
00:03:10.075 11:26:40 -- setup/common.sh@31 -- # read -r var val _
00:03:10.076 11:26:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73186924 kB' 'MemAvailable: 77684296 kB' 'Buffers: 3728 kB' 'Cached: 14414544 kB' 'SwapCached: 0 kB' 'Active: 10509716 kB' 'Inactive: 4420212 kB' 'Active(anon): 9897304 kB' 'Inactive(anon): 0 kB' 'Active(file): 612412 kB' 'Inactive(file): 4420212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514948 kB' 'Mapped: 183588 kB' 'Shmem: 9385648 kB' 'KReclaimable: 236780 kB' 'Slab: 665192 kB' 'SReclaimable: 236780 kB' 'SUnreclaim: 428412 kB' 'KernelStack: 16544 kB' 'PageTables: 7816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485784 kB' 'Committed_AS: 11168996 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205620 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1844692 kB' 'DirectMap2M: 25094144 kB' 'DirectMap1G: 74448896 kB'
[xtrace elided: every field from MemTotal through Unaccepted fails the HugePages_Total match and hits continue]
00:03:10.077 11:26:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:10.077 11:26:40 -- setup/common.sh@33 -- # echo 1025
00:03:10.077 11:26:40 -- setup/common.sh@33 -- # return 0
00:03:10.077 11:26:40 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:10.077 11:26:40 -- setup/hugepages.sh@112 -- # get_nodes
00:03:10.077 11:26:40 -- setup/hugepages.sh@27 -- # local node
00:03:10.077 11:26:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:10.077 11:26:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:10.077 11:26:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:10.077 11:26:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:03:10.077 11:26:40 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:10.077 11:26:40 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:10.077 11:26:40 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:10.077 11:26:40 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:10.077 11:26:40 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:10.077 11:26:40 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:10.077 11:26:40 -- setup/common.sh@18 -- # local node=0
00:03:10.077 11:26:40 -- setup/common.sh@19 -- # local var val
00:03:10.077 11:26:40 -- setup/common.sh@20 -- # local mem_f mem
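The consistency check in that trace is plain arithmetic over the readings: the kernel's HugePages_Total must equal the requested page count plus surplus and reserved pages, here 1025 == 1025 + 0 + 0. A hedged recreation of that step, reusing the get_meminfo sketch above (variable names follow the trace):

    nr_hugepages=1025                     # what the odd_alloc test asked for
    surp=$(get_meminfo HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
    total=$(get_meminfo HugePages_Total)  # 1025 in this run
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2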
00:03:10.077 11:26:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:10.077 11:26:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:10.077 11:26:40 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:10.077 11:26:40 -- setup/common.sh@28 -- # mapfile -t mem
00:03:10.077 11:26:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:10.077 11:26:40 -- setup/common.sh@31 -- # IFS=': '
00:03:10.077 11:26:40 -- setup/common.sh@31 -- # read -r var val _
00:03:10.077 11:26:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116988 kB' 'MemFree: 34560448 kB' 'MemUsed: 13556540 kB' 'SwapCached: 0 kB' 'Active: 7599376 kB' 'Inactive: 3543144 kB' 'Active(anon): 7407080 kB' 'Inactive(anon): 0 kB' 'Active(file): 192296 kB' 'Inactive(file): 3543144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11005396 kB' 'Mapped: 125700 kB' 'AnonPages: 140300 kB' 'Shmem: 7269956 kB' 'KernelStack: 9896 kB' 'PageTables: 4072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132320 kB' 'Slab: 407712 kB' 'SReclaimable: 132320 kB' 'SUnreclaim: 275392 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace elided: every node0 field from MemTotal through HugePages_Free fails the HugePages_Surp match and hits continue]
00:03:10.078 11:26:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:10.078 11:26:40 -- setup/common.sh@33 -- # echo 0
00:03:10.078 11:26:40 -- setup/common.sh@33 -- # return 0
00:03:10.078 11:26:40 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:10.078 11:26:40 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:10.078 11:26:40 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:10.078 11:26:40 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:10.078 11:26:40 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:10.078 11:26:40 -- setup/common.sh@18 -- # local node=1
00:03:10.078 11:26:40 -- setup/common.sh@19 -- # local var val
00:03:10.078 11:26:40 -- setup/common.sh@20 -- # local mem_f mem
00:03:10.078 11:26:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:10.078 11:26:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:10.078 11:26:40 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:10.078 11:26:40 -- setup/common.sh@28 -- # mapfile -t mem
00:03:10.078 11:26:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:10.078 11:26:40 -- setup/common.sh@31 -- # IFS=': '
00:03:10.078 11:26:40 -- setup/common.sh@31 -- # read -r var val _
00:03:10.078 11:26:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44176572 kB' 'MemFree: 38625216 kB' 'MemUsed: 5551356 kB' 'SwapCached: 0 kB' 'Active: 2910324 kB' 'Inactive: 877068 kB' 'Active(anon): 2490208 kB' 'Inactive(anon): 0 kB' 'Active(file): 420116 kB' 'Inactive(file): 877068 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3412892 kB' 'Mapped: 57888 kB' 'AnonPages: 374612 kB' 'Shmem: 2115708 kB' 'KernelStack: 6648 kB' 'PageTables: 3744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104460 kB' 'Slab: 257480 kB' 'SReclaimable: 104460 kB' 'SUnreclaim: 153020 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
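With a node argument, the same scan runs against the node-local sysfs file instead of /proc/meminfo, so the caller gets per-NUMA-node counters: node0 reports HugePages_Total: 512 and node1 reports 513 in the dumps above. Usage, assuming the get_meminfo sketch from earlier:

    get_meminfo HugePages_Total 0   # reads /sys/devices/system/node/node0/meminfo -> 512
    get_meminfo HugePages_Total 1   # reads /sys/devices/system/node/node1/meminfo -> 513
    get_meminfo HugePages_Surp  1   # -> 0, fed into nodes_test[1]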
[xtrace elided: every node1 field from MemTotal through HugePages_Free fails the HugePages_Surp match and hits continue]
00:03:10.079 11:26:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:10.079 11:26:40 -- setup/common.sh@33 -- # echo 0
00:03:10.079 11:26:40 -- setup/common.sh@33 -- # return 0
00:03:10.079 11:26:40 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:10.079 11:26:40 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:10.079 11:26:40 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:10.079 11:26:40 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:10.079 11:26:40 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:03:10.079 node0=512 expecting 513
00:03:10.079 11:26:40 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:10.079 11:26:40 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:10.079 11:26:40 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:10.079 11:26:40 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:10.079 node1=513 expecting 512
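The sorted_t/sorted_s lines implement an order-insensitive comparison: indexing a sparse bash array by the count itself makes "${!arr[*]}" expand to the counts in ascending order, so the test's 512/513 layout matches sysfs's 513/512 regardless of which node holds which. A small sketch of the trick (the nodes_test/nodes_sys values are taken from the trace; the surrounding script may differ):

    nodes_test=([0]=512 [1]=513)   # pages the test placed per node
    nodes_sys=([0]=513 [1]=512)    # pages sysfs reports per node
    sorted_t=() sorted_s=()
    for node in "${!nodes_test[@]}"; do
        sorted_t[${nodes_test[node]}]=1
        sorted_s[${nodes_sys[node]}]=1
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
    done
    # Sparse-array indices expand in ascending order, so the joined index
    # lists compare equal iff both sides hold the same set of counts.
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo 'per-node hugepage layout OK'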
11:26:40 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:10.079 00:03:10.079 real 0m2.637s 00:03:10.079 user 0m0.881s 00:03:10.079 sys 0m1.727s 00:03:10.079 11:26:40 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:10.079 11:26:40 -- common/autotest_common.sh@10 -- # set +x 00:03:10.079 ************************************ 00:03:10.079 END TEST odd_alloc 00:03:10.079 ************************************ 00:03:10.079 11:26:40 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:10.079 11:26:40 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:10.079 11:26:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:10.079 11:26:40 -- common/autotest_common.sh@10 -- # set +x 00:03:10.339 ************************************ 00:03:10.339 START TEST custom_alloc 00:03:10.339 ************************************ 00:03:10.339 11:26:40 -- common/autotest_common.sh@1121 -- # custom_alloc 00:03:10.339 11:26:40 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:10.339 11:26:40 -- setup/hugepages.sh@169 -- # local node 00:03:10.339 11:26:40 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:10.339 11:26:40 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:10.339 11:26:40 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:10.339 11:26:40 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:10.339 11:26:40 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:10.339 11:26:40 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:10.339 11:26:40 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:10.339 11:26:40 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:10.339 11:26:40 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:10.339 11:26:40 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:10.339 11:26:40 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:10.339 11:26:40 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:10.339 11:26:40 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:10.339 11:26:40 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:10.339 11:26:40 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:10.339 11:26:40 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:10.339 11:26:40 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:10.339 11:26:40 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:10.339 11:26:40 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:10.339 11:26:40 -- setup/hugepages.sh@83 -- # : 256 00:03:10.339 11:26:40 -- setup/hugepages.sh@84 -- # : 1 00:03:10.339 11:26:40 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:10.339 11:26:40 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:10.339 11:26:40 -- setup/hugepages.sh@83 -- # : 0 00:03:10.339 11:26:40 -- setup/hugepages.sh@84 -- # : 0 00:03:10.339 11:26:40 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:10.339 11:26:40 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:10.339 11:26:40 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:10.339 11:26:40 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:10.339 11:26:40 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:10.339 11:26:40 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:10.339 11:26:40 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:10.339 11:26:40 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:10.339 11:26:40 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:10.339 11:26:40 -- setup/hugepages.sh@62 -- 
# user_nodes=() 00:03:10.339 11:26:40 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:10.339 11:26:40 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:10.339 11:26:40 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:10.339 11:26:40 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:10.339 11:26:40 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:10.339 11:26:40 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:10.339 11:26:40 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:10.339 11:26:40 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:10.339 11:26:40 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:10.339 11:26:40 -- setup/hugepages.sh@78 -- # return 0 00:03:10.339 11:26:40 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:10.339 11:26:40 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:10.339 11:26:40 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:10.339 11:26:40 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:10.339 11:26:40 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:10.339 11:26:40 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:10.339 11:26:40 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:10.339 11:26:40 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:10.339 11:26:40 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:10.339 11:26:40 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:10.339 11:26:40 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:10.339 11:26:40 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:10.339 11:26:40 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:10.339 11:26:40 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:10.339 11:26:40 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:10.339 11:26:40 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:10.339 11:26:40 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:10.339 11:26:40 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:10.339 11:26:40 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:10.339 11:26:40 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:10.339 11:26:40 -- setup/hugepages.sh@78 -- # return 0 00:03:10.339 11:26:40 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:10.339 11:26:40 -- setup/hugepages.sh@187 -- # setup output 00:03:10.339 11:26:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:10.339 11:26:40 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:12.880 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:12.880 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:12.880 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:12.880 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:12.880 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:12.880 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:12.880 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:12.880 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:12.880 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:12.880 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:12.880 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:12.880 0000:80:04.5 (8086 
2021): Already using the vfio-pci driver 00:03:12.880 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:12.880 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:12.880 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:12.880 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:12.880 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:12.880 11:26:43 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:12.880 11:26:43 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:12.880 11:26:43 -- setup/hugepages.sh@89 -- # local node 00:03:12.880 11:26:43 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:12.880 11:26:43 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:12.880 11:26:43 -- setup/hugepages.sh@92 -- # local surp 00:03:12.880 11:26:43 -- setup/hugepages.sh@93 -- # local resv 00:03:12.880 11:26:43 -- setup/hugepages.sh@94 -- # local anon 00:03:12.880 11:26:43 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:12.880 11:26:43 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:12.880 11:26:43 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:12.880 11:26:43 -- setup/common.sh@18 -- # local node= 00:03:12.880 11:26:43 -- setup/common.sh@19 -- # local var val 00:03:12.880 11:26:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:12.880 11:26:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.880 11:26:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.880 11:26:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.880 11:26:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.880 11:26:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.880 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.880 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.880 11:26:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 72147884 kB' 'MemAvailable: 76645256 kB' 'Buffers: 3728 kB' 'Cached: 14414628 kB' 'SwapCached: 0 kB' 'Active: 10511316 kB' 'Inactive: 4420212 kB' 'Active(anon): 9898904 kB' 'Inactive(anon): 0 kB' 'Active(file): 612412 kB' 'Inactive(file): 4420212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516388 kB' 'Mapped: 183772 kB' 'Shmem: 9385732 kB' 'KReclaimable: 236780 kB' 'Slab: 665664 kB' 'SReclaimable: 236780 kB' 'SUnreclaim: 428884 kB' 'KernelStack: 16640 kB' 'PageTables: 8100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962520 kB' 'Committed_AS: 11169160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205636 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1844692 kB' 'DirectMap2M: 25094144 kB' 'DirectMap1G: 74448896 kB' 00:03:12.880 11:26:43 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.880 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.880 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.880 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.880 11:26:43 -- setup/common.sh@32 -- # [[ MemFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.880 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.880 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.880 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.880 11:26:43 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.880 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.880 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.880 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.880 11:26:43 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.880 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.880 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.880 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.880 11:26:43 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.880 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.880 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.880 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.880 11:26:43 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.880 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.880 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.880 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.880 11:26:43 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.880 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.880 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.880 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.880 11:26:43 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.880 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.880 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.880 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.880 11:26:43 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.880 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.880 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.880 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.880 11:26:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.880 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.880 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.880 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.880 11:26:43 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.880 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.880 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.880 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.880 11:26:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.880 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.880 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.880 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.880 11:26:43 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.880 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.880 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.880 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.880 11:26:43 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.880 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.880 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.880 11:26:43 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:12.880 11:26:43 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.880 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.880 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.880 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.880 11:26:43 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.880 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.880 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.881 
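Each get_meminfo call in this trace begins with mapfile -t mem followed by mem=("${mem[@]#Node +([0-9]) }"). That second step strips the "Node <N> " prefix that per-node meminfo files carry, so system-wide and per-node dumps parse identically afterwards. A small stand-alone reproduction (the +([0-9]) pattern needs extglob):

shopt -s extglob
mem=('Node 0 HugePages_Total:   512' 'Node 0 HugePages_Free:   512')
mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node 0 " prefix from each element
printf '%s\n' "${mem[@]}"          # -> HugePages_Total:   512 / HugePages_Free:   512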
11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.881 11:26:43 -- 
setup/common.sh@32 -- # continue 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.881 11:26:43 -- setup/common.sh@33 -- # echo 0 00:03:12.881 11:26:43 -- setup/common.sh@33 -- # return 0 00:03:12.881 11:26:43 -- setup/hugepages.sh@97 -- # anon=0 00:03:12.881 11:26:43 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:12.881 11:26:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:12.881 11:26:43 -- setup/common.sh@18 -- # local node= 00:03:12.881 11:26:43 -- setup/common.sh@19 -- # local var val 00:03:12.881 11:26:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:12.881 11:26:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.881 11:26:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.881 11:26:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.881 11:26:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.881 11:26:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.881 11:26:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 72148096 kB' 'MemAvailable: 76645468 kB' 'Buffers: 3728 kB' 'Cached: 14414628 kB' 'SwapCached: 0 kB' 'Active: 10511128 kB' 'Inactive: 4420212 kB' 'Active(anon): 9898716 kB' 'Inactive(anon): 0 kB' 'Active(file): 612412 kB' 'Inactive(file): 4420212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516192 kB' 'Mapped: 183680 kB' 'Shmem: 9385732 kB' 'KReclaimable: 236780 kB' 'Slab: 665604 kB' 'SReclaimable: 236780 kB' 'SUnreclaim: 428824 kB' 'KernelStack: 16640 kB' 'PageTables: 8080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962520 kB' 'Committed_AS: 11169172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205620 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1844692 kB' 'DirectMap2M: 25094144 kB' 'DirectMap1G: 74448896 kB' 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.881 
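The requested key being rendered letter by letter, as in \H\u\g\e\P\a\g\e\s\_\S\u\r\p, is not log corruption: under set -x, bash prints the expanded right-hand side of an unquoted [[ $var == $get ]] with every character backslash-escaped, to show the pattern is being matched literally. It reproduces in any bash shell:

set -x
get=HugePages_Surp var=Buffers
[[ $var == $get ]] || true
# xtrace prints: + [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]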
11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.881 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.881 11:26:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.882 
11:26:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 
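These repeated scans feed the verify_nr_hugepages step traced from setup/hugepages.sh@188 above: anonymous THP pages, surplus pages, and reserved pages are each queried and must reconcile with the configured total, 1536 pages in this run. Roughly, with function and field names taken from the trace but the bookkeeping simplified (the actual script also checks per-node counts):

verify_nr_hugepages() {
    # Hedged sketch of the accounting checks visible in this trace.
    local requested=$1                      # 1536 here
    local anon surp resv total
    anon=$(get_meminfo AnonHugePages)       # 0 in the dumps above
    surp=$(get_meminfo HugePages_Surp)      # 0
    resv=$(get_meminfo HugePages_Rsvd)      # 0
    total=$(get_meminfo HugePages_Total)    # 1536
    # Mirrors the check the script performs once all values are in hand:
    # (( 1536 == nr_hugepages + surp + resv ))
    (( requested == total + surp + resv )) || return 1
}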
00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.882 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.882 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.883 11:26:43 -- setup/common.sh@33 -- # echo 0 00:03:12.883 11:26:43 -- setup/common.sh@33 -- # return 0 00:03:12.883 11:26:43 -- setup/hugepages.sh@99 -- # surp=0 00:03:12.883 11:26:43 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:12.883 11:26:43 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:12.883 11:26:43 -- setup/common.sh@18 -- # local node= 00:03:12.883 11:26:43 -- setup/common.sh@19 -- # local var val 00:03:12.883 11:26:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:12.883 11:26:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.883 11:26:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.883 11:26:43 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.883 11:26:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.883 11:26:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.883 11:26:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 72148356 kB' 'MemAvailable: 76645728 kB' 'Buffers: 3728 kB' 'Cached: 14414644 kB' 'SwapCached: 0 kB' 'Active: 10510844 kB' 'Inactive: 4420212 kB' 'Active(anon): 9898432 kB' 'Inactive(anon): 0 kB' 'Active(file): 612412 kB' 'Inactive(file): 4420212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515876 kB' 'Mapped: 183680 kB' 'Shmem: 9385748 kB' 'KReclaimable: 236780 kB' 'Slab: 665604 kB' 'SReclaimable: 236780 kB' 'SUnreclaim: 428824 kB' 'KernelStack: 16624 kB' 'PageTables: 8032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962520 kB' 'Committed_AS: 11169188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205620 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1844692 kB' 'DirectMap2M: 25094144 kB' 'DirectMap1G: 74448896 kB' 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.883 11:26:43 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # 
continue 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.883 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.883 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.884 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.884 11:26:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.884 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.884 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.884 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.884 11:26:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.884 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.884 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.884 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.884 11:26:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.884 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.884 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.884 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.884 11:26:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.884 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.884 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.884 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.884 11:26:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.884 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.884 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.884 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.884 11:26:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.884 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.884 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.884 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.884 11:26:43 -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.884 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.884 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.884 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.884 11:26:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.884 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.884 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.884 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.884 11:26:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.884 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.884 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.884 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.884 11:26:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.884 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.884 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.884 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.884 11:26:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.884 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.884 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.884 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.884 11:26:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.884 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.884 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.884 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.884 11:26:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.884 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.884 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.884 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.884 11:26:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.884 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.884 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.884 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.884 11:26:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.884 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.884 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.884 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.884 11:26:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.884 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.884 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.884 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.884 11:26:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.884 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.884 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.884 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.884 11:26:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.884 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.884 11:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.884 11:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.884 11:26:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.884 11:26:43 -- setup/common.sh@32 -- # continue 00:03:12.884 11:26:43 -- setup/common.sh@31 -- # 
IFS=': '
00:03:12.884 11:26:43 -- setup/common.sh@31 -- # read -r var val _
[... xtrace elided: setup/common.sh@31-32 read/compare loop skips the remaining non-matching keys CmaTotal … HugePages_Free ...]
00:03:12.884 11:26:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:12.884 11:26:43 -- setup/common.sh@33 -- # echo 0
00:03:12.884 11:26:43 -- setup/common.sh@33 -- # return 0
00:03:12.884 11:26:43 -- setup/hugepages.sh@100 -- # resv=0
00:03:12.884 11:26:43 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:03:12.884 nr_hugepages=1536
00:03:12.884 11:26:43 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:12.884 resv_hugepages=0
00:03:12.884 11:26:43 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:12.884 surplus_hugepages=0
00:03:12.884 11:26:43 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:12.884 anon_hugepages=0
00:03:12.884 11:26:43 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:12.884 11:26:43 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:03:12.884 11:26:43 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:12.884 11:26:43 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:12.884 11:26:43 -- setup/common.sh@18 -- # local node=
00:03:12.884 11:26:43 -- setup/common.sh@19 -- # local var val
00:03:12.884 11:26:43 -- setup/common.sh@20 -- # local mem_f mem
00:03:12.884 11:26:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:12.884 11:26:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:12.884 11:26:43 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:12.884 11:26:43 -- setup/common.sh@28 -- # mapfile -t mem
00:03:12.884 11:26:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:12.884 11:26:43 -- setup/common.sh@31 -- # IFS=': '
00:03:12.884 11:26:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 72148356 kB' 'MemAvailable: 76645728 kB' 'Buffers: 3728 kB' 'Cached: 14414656 kB' 'SwapCached: 0 kB' 'Active: 10510804 kB' 'Inactive: 4420212 kB' 'Active(anon): 9898392 kB' 'Inactive(anon): 0 kB' 'Active(file): 612412 kB' 'Inactive(file): 4420212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515880 kB' 'Mapped: 183680 kB' 'Shmem: 9385760 kB' 'KReclaimable: 236780 kB' 'Slab: 665604 kB' 'SReclaimable: 236780 kB' 'SUnreclaim: 428824 kB' 'KernelStack: 16624 kB' 'PageTables: 8032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962520 kB' 'Committed_AS: 11169200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205636 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1844692 kB' 'DirectMap2M: 25094144 kB' 'DirectMap1G: 74448896 kB'
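The trace above and below is setup/common.sh's get_meminfo helper at work: it snapshots the meminfo file with mapfile, strips any leading 'Node N ' prefix, then splits every 'Key: value' line with IFS=': ' read and echoes the value once the requested key matches. A minimal standalone sketch of that pattern, assuming only what the trace shows (the function name here is mine, not the script's):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) prefix-strip pattern
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node queries read the sysfs copy instead of /proc/meminfo.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix of sysfs copies
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"   # "MemTotal: 123 kB" -> var=MemTotal val=123
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }
    get_meminfo_sketch HugePages_Total     # e.g. 1536 (system-wide)
    get_meminfo_sketch HugePages_Surp 0    # e.g. 0 (NUMA node 0)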
00:03:12.884 11:26:43 -- setup/common.sh@31 -- # read -r var val _
[... xtrace elided: setup/common.sh@31-32 read/compare loop skips the non-matching keys MemTotal … Unaccepted ...]
00:03:12.886 11:26:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:12.886 11:26:43 -- setup/common.sh@33 -- # echo 1536
00:03:12.886 11:26:43 -- setup/common.sh@33 -- # return 0
00:03:12.886 11:26:43 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:12.886 11:26:43 -- setup/hugepages.sh@112 -- # get_nodes
00:03:12.886 11:26:43 -- setup/hugepages.sh@27 -- # local node
00:03:12.886 11:26:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:12.886 11:26:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:12.886 11:26:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:12.886 11:26:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:12.886 11:26:43 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:12.886 11:26:43 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:12.886 11:26:43 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:12.886 11:26:43 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:12.886 11:26:43 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:12.886 11:26:43 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:12.886 11:26:43 -- setup/common.sh@18 -- # local node=0
00:03:12.886 11:26:43 -- setup/common.sh@19 -- # local var val
00:03:12.886 11:26:43 -- setup/common.sh@20 -- # local mem_f mem
00:03:12.886 11:26:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:12.886 11:26:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:12.886 11:26:43 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:12.886 11:26:43 -- setup/common.sh@28 -- # mapfile -t mem
00:03:12.886 11:26:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:12.886 11:26:43 -- setup/common.sh@31 -- # IFS=': '
00:03:12.886 11:26:43 -- setup/common.sh@31 -- # read -r var val _
00:03:12.886 11:26:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116988 kB' 'MemFree: 34574492 kB' 'MemUsed: 13542496 kB' 'SwapCached: 0 kB' 'Active: 7600152 kB' 'Inactive: 3543144 kB' 'Active(anon): 7407856 kB' 'Inactive(anon): 0 kB' 'Active(file): 192296 kB' 'Inactive(file): 3543144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11005476 kB' 'Mapped: 125724 kB' 'AnonPages: 140932 kB' 'Shmem: 7270036 kB' 'KernelStack: 9944 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132320 kB' 'Slab: 408044 kB' 'SReclaimable: 132320 kB' 'SUnreclaim: 275724 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
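Here get_meminfo switches mem_f to /sys/devices/system/node/node0/meminfo, the kernel's per-node copy of meminfo. The same per-node hugepage counters are also exposed per page size under the sysfs hugepages tree; a small sketch using that standard kernel layout (the paths are not taken from this log, only from the documented sysfs ABI):

    shopt -s nullglob
    for d in /sys/devices/system/node/node[0-9]*; do
        hp=$d/hugepages/hugepages-2048kB
        # nr/free/surplus mirror HugePages_Total/Free/Surp in nodeN/meminfo
        printf '%s: total=%s free=%s surplus=%s\n' "${d##*/}" \
            "$(<"$hp/nr_hugepages")" "$(<"$hp/free_hugepages")" "$(<"$hp/surplus_hugepages")"
    done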
[... xtrace elided: setup/common.sh@31-32 read/compare loop skips the non-matching keys MemTotal … HugePages_Free of the node0 snapshot ...]
00:03:12.888 11:26:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:12.888 11:26:43 -- setup/common.sh@33 -- # echo 0
00:03:12.888 11:26:43 -- setup/common.sh@33 -- # return 0
00:03:12.888 11:26:43 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:12.888 11:26:43 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:12.888 11:26:43 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:12.888 11:26:43 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:12.888 11:26:43 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:12.888 11:26:43 -- setup/common.sh@18 -- # local node=1
00:03:12.888 11:26:43 -- setup/common.sh@19 -- # local var val
00:03:12.888 11:26:43 -- setup/common.sh@20 -- # local mem_f mem
00:03:12.888 11:26:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:12.888 11:26:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:12.888 11:26:43 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:12.888 11:26:43 -- setup/common.sh@28 -- # mapfile -t mem
00:03:12.888 11:26:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:12.888 11:26:43 -- setup/common.sh@31 -- # IFS=': '
00:03:12.888 11:26:43 -- setup/common.sh@31 -- # read -r var val _
00:03:12.888 11:26:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44176572 kB' 'MemFree: 37575472 kB' 'MemUsed: 6601100 kB' 'SwapCached: 0 kB' 'Active: 2910592 kB' 'Inactive: 877068 kB' 'Active(anon): 2490476 kB' 'Inactive(anon): 0 kB' 'Active(file): 420116 kB' 'Inactive(file): 877068 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3412920 kB' 'Mapped: 57956 kB' 'AnonPages: 374856 kB' 'Shmem: 2115736 kB' 'KernelStack: 6664 kB' 'PageTables: 3756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104460 kB' 'Slab: 257560 kB' 'SReclaimable: 104460 kB' 'SUnreclaim: 153100 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
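The two node snapshots are internally consistent with the system-wide one queried at hugepages.sh@110; a quick cross-check in shell arithmetic, using only values copied from the log:

    echo $(( 48116988 + 44176572 ))  # 92293560 kB: node0+node1 MemTotal equals the global MemTotal
    echo $(( 512 + 1024 ))           # 1536: per-node HugePages_Total adds up to the custom_alloc total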
[... xtrace elided: setup/common.sh@31-32 read/compare loop skips the non-matching keys MemTotal … HugePages_Free of the node1 snapshot ...]
00:03:12.889 11:26:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:12.889 11:26:43 -- setup/common.sh@33 -- # echo 0
00:03:12.889 11:26:43 -- setup/common.sh@33 -- # return 0
00:03:12.889 11:26:43 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:12.889 11:26:43 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:12.889 11:26:43 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:12.889 11:26:43 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:12.889 11:26:43 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:12.889 node0=512 expecting 512
00:03:12.889 11:26:43 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:12.889 11:26:43 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:12.889 11:26:43 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:12.889 11:26:43 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:12.889 node1=1024 expecting 1024
00:03:12.889 11:26:43 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:12.889
00:03:12.889 real 0m2.688s
00:03:12.889 user 0m0.969s
00:03:12.889 sys 0m1.706s
00:03:12.889 11:26:43 -- common/autotest_common.sh@1122 -- # xtrace_disable
00:03:12.889 11:26:43 -- common/autotest_common.sh@10 -- # set +x
00:03:12.890 ************************************
00:03:12.890 END TEST custom_alloc
00:03:12.890 ************************************
00:03:12.890 11:26:43 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:12.890 11:26:43 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:12.890 11:26:43 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:12.890 11:26:43 -- common/autotest_common.sh@10 -- # set +x
00:03:12.890 ************************************
00:03:12.890 START TEST no_shrink_alloc
00:03:12.890 ************************************
00:03:12.890 11:26:43 -- common/autotest_common.sh@1121 -- # no_shrink_alloc
00:03:12.890 11:26:43 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:12.890 11:26:43 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:12.890 11:26:43 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:12.890 11:26:43 -- setup/hugepages.sh@51 -- # shift
00:03:12.890 11:26:43 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:12.890 11:26:43 -- setup/hugepages.sh@52 -- # local node_ids
00:03:12.890 11:26:43 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:12.890 11:26:43 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:12.890 11:26:43 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
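get_test_nr_hugepages converts the requested size into a page count before pinning it to node 0: with the 2048 kB Hugepagesize reported in the snapshots, 2097152 kB is exactly 1024 pages. Treating size as kB is my reading of the numbers, not something the log states:

    size=2097152; hugepagesize=2048   # kB, values from the trace above
    echo $(( size / hugepagesize ))   # 1024 -> nr_hugepages, all assigned to node 0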
00:03:12.890 11:26:43 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:12.890 11:26:43 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:12.890 11:26:43 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:12.890 11:26:43 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:12.890 11:26:43 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:12.890 11:26:43 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:12.890 11:26:43 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:12.890 11:26:43 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:12.890 11:26:43 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:12.890 11:26:43 -- setup/hugepages.sh@73 -- # return 0
00:03:12.890 11:26:43 -- setup/hugepages.sh@198 -- # setup output
00:03:12.890 11:26:43 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:12.890 11:26:43 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:03:15.430 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:15.430 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:15.430 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:15.430 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:15.430 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:15.430 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:15.430 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:15.430 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:15.430 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:15.430 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:15.430 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:15.430 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:15.430 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:15.430 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:15.430 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:15.430 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:15.430 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:15.430 11:26:46 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:15.430 11:26:46 -- setup/hugepages.sh@89 -- # local node
00:03:15.430 11:26:46 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:15.430 11:26:46 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:15.430 11:26:46 -- setup/hugepages.sh@92 -- # local surp
00:03:15.430 11:26:46 -- setup/hugepages.sh@93 -- # local resv
00:03:15.430 11:26:46 -- setup/hugepages.sh@94 -- # local anon
00:03:15.430 11:26:46 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:15.430 11:26:46 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:15.430 11:26:46 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:15.430 11:26:46 -- setup/common.sh@18 -- # local node=
00:03:15.430 11:26:46 -- setup/common.sh@19 -- # local var val
00:03:15.430 11:26:46 -- setup/common.sh@20 -- # local mem_f mem
00:03:15.430 11:26:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.430 11:26:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.430 11:26:46 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.430 11:26:46 -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.430 11:26:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.430 11:26:46 -- setup/common.sh@31 -- # IFS=': '
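The guard at setup/hugepages.sh@96 decides whether anonymous huge pages can exist at all: it reads /sys/kernel/mm/transparent_hugepage/enabled (here "always [madvise] never", i.e. madvise mode) and only accounts AnonHugePages when the bracketed active mode is not "never". The backslash-escaped glob in the trace is just xtrace quoting. A standalone sketch of the same test:

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. 'always [madvise] never'
    if [[ $thp != *\[never\]* ]]; then
        echo "THP active; AnonHugePages in /proc/meminfo may be nonzero"
    fi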
00:03:15.430 11:26:46 -- setup/common.sh@31 -- # read -r var val _
00:03:15.430 11:26:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73179620 kB' 'MemAvailable: 77676992 kB' 'Buffers: 3728 kB' 'Cached: 14414724 kB' 'SwapCached: 0 kB' 'Active: 10512560 kB' 'Inactive: 4420212 kB' 'Active(anon): 9900148 kB' 'Inactive(anon): 0 kB' 'Active(file): 612412 kB' 'Inactive(file): 4420212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517440 kB' 'Mapped: 183632 kB' 'Shmem: 9385828 kB' 'KReclaimable: 236780 kB' 'Slab: 665764 kB' 'SReclaimable: 236780 kB' 'SUnreclaim: 428984 kB' 'KernelStack: 16736 kB' 'PageTables: 8164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486808 kB' 'Committed_AS: 11169312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205700 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1844692 kB' 'DirectMap2M: 25094144 kB' 'DirectMap1G: 74448896 kB'
[... xtrace elided: setup/common.sh@31-32 read/compare loop skips the non-matching keys MemTotal … HardwareCorrupted ...]
00:03:15.431 11:26:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:15.431 11:26:46 -- setup/common.sh@33 -- # echo 0
00:03:15.431 11:26:46 -- setup/common.sh@33 -- # return 0
00:03:15.431 11:26:46 -- setup/hugepages.sh@97 -- # anon=0
00:03:15.431 11:26:46 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:15.431 11:26:46 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:15.431 11:26:46 -- setup/common.sh@18 -- # local node=
00:03:15.431 11:26:46 -- setup/common.sh@19 -- # local var val
00:03:15.431 11:26:46 -- setup/common.sh@20 -- # local mem_f mem
00:03:15.431 11:26:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.431 11:26:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.431 11:26:46 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.431 11:26:46 -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.431 11:26:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.431 11:26:46 -- setup/common.sh@31 -- # IFS=': '
00:03:15.431 11:26:46 -- setup/common.sh@31 -- # read -r var val _
00:03:15.431 11:26:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73182784 kB' 'MemAvailable: 77680156 kB' 'Buffers: 3728 kB' 'Cached: 14414728 kB' 'SwapCached: 0 kB' 'Active: 10511216 kB' 'Inactive: 4420212 kB' 'Active(anon): 9898804 kB' 'Inactive(anon): 0 kB' 'Active(file): 612412 kB' 'Inactive(file): 4420212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516044 kB' 'Mapped: 183608 kB' 'Shmem: 9385832 kB' 'KReclaimable: 236780 kB' 'Slab: 665568 kB' 'SReclaimable: 236780 kB' 'SUnreclaim: 428788 kB' 'KernelStack: 16512 kB' 'PageTables: 7460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486808 kB' 'Committed_AS: 11169328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205636 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1844692 kB' 'DirectMap2M: 25094144 kB' 'DirectMap1G: 74448896 kB'
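Two sanity checks on the snapshot above, with all values copied from the log: hugetlb memory equals pages times page size, and verify_nr_hugepages' invariant (total == nr_hugepages + surplus + reserved) holds with surp=0 and resv=0:

    echo $(( 1024 * 2048 ))   # 2097152 kB == Hugetlb (HugePages_Total * Hugepagesize)
    echo $(( 1024 + 0 + 0 ))  # 1024 == HugePages_Total (nr_hugepages + surp + resv)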
[... xtrace elided: setup/common.sh@31-32 read/compare loop scans the snapshot above for HugePages_Surp; the keys MemTotal … Shmem are skipped before this chunk of the log ends ...]
00:03:15.432 11:26:46 -- setup/common.sh@31 -- #
IFS=': ' 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:15.432 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.432 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.432 11:26:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.432 11:26:46 -- setup/common.sh@33 -- # echo 0 00:03:15.432 11:26:46 -- setup/common.sh@33 -- # return 0 00:03:15.432 11:26:46 -- setup/hugepages.sh@99 -- # surp=0 00:03:15.432 11:26:46 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:15.432 11:26:46 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:15.432 11:26:46 -- setup/common.sh@18 -- # local node= 00:03:15.432 11:26:46 -- setup/common.sh@19 -- # local var val 00:03:15.432 11:26:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:15.432 11:26:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.433 11:26:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.433 11:26:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.433 11:26:46 -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.433 11:26:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.433 11:26:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73184100 kB' 'MemAvailable: 77681472 kB' 'Buffers: 3728 kB' 'Cached: 14414740 kB' 'SwapCached: 0 kB' 'Active: 10510896 kB' 'Inactive: 4420212 kB' 'Active(anon): 9898484 kB' 'Inactive(anon): 0 kB' 'Active(file): 612412 kB' 'Inactive(file): 4420212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515776 kB' 'Mapped: 183608 kB' 'Shmem: 9385844 kB' 'KReclaimable: 236780 kB' 'Slab: 665636 kB' 'SReclaimable: 236780 kB' 'SUnreclaim: 428856 kB' 'KernelStack: 16528 kB' 'PageTables: 7736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486808 kB' 'Committed_AS: 11169348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205636 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1844692 kB' 'DirectMap2M: 25094144 kB' 'DirectMap1G: 74448896 kB' 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.433 
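Each of these identical scans feeds the verify_nr_hugepages bookkeeping in setup/hugepages.sh: the helper's stdout is captured into a local, as the @97-@100 markers suggest. A plausible shape (variable names are from the trace; the command-substitution form is an assumption):

    anon=$(get_meminfo AnonHugePages)    # kB of THP currently in use; 0 on this run
    surp=$(get_meminfo HugePages_Surp)   # surplus pages beyond the configured pool; 0 here
    resv=$(get_meminfo HugePages_Rsvd)   # pages reserved but not yet faulted in; 0 here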
11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.433 11:26:46 -- 
setup/common.sh@32 -- # continue 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.433 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.433 11:26:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.434 
11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.434 11:26:46 -- setup/common.sh@33 -- # echo 0 00:03:15.434 11:26:46 -- setup/common.sh@33 -- # return 0 00:03:15.434 11:26:46 -- setup/hugepages.sh@100 -- # resv=0 00:03:15.434 11:26:46 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:15.434 
nr_hugepages=1024 00:03:15.434 11:26:46 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:15.434 resv_hugepages=0 00:03:15.434 11:26:46 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:15.434 surplus_hugepages=0 00:03:15.434 11:26:46 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:15.434 anon_hugepages=0 00:03:15.434 11:26:46 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:15.434 11:26:46 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:15.434 11:26:46 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:15.434 11:26:46 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:15.434 11:26:46 -- setup/common.sh@18 -- # local node= 00:03:15.434 11:26:46 -- setup/common.sh@19 -- # local var val 00:03:15.434 11:26:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:15.434 11:26:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.434 11:26:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.434 11:26:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.434 11:26:46 -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.434 11:26:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.434 11:26:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73184368 kB' 'MemAvailable: 77681740 kB' 'Buffers: 3728 kB' 'Cached: 14414760 kB' 'SwapCached: 0 kB' 'Active: 10511548 kB' 'Inactive: 4420212 kB' 'Active(anon): 9899136 kB' 'Inactive(anon): 0 kB' 'Active(file): 612412 kB' 'Inactive(file): 4420212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516544 kB' 'Mapped: 183608 kB' 'Shmem: 9385864 kB' 'KReclaimable: 236780 kB' 'Slab: 665612 kB' 'SReclaimable: 236780 kB' 'SUnreclaim: 428832 kB' 'KernelStack: 16608 kB' 'PageTables: 8020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486808 kB' 'Committed_AS: 11169860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205668 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1844692 kB' 'DirectMap2M: 25094144 kB' 'DirectMap1G: 74448896 kB' 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.434 
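With the three counters collected, the script echoes the nr_hugepages/resv/surplus/anon summary seen above and asserts that the kernel's pool is exactly what was requested. Plugging in the values from this run, both traced checks reduce to true:

    # Values from this run: HugePages_Total=1024, surp=0, resv=0, nr_hugepages=1024
    (( 1024 == 1024 + 0 + 0 ))   # hugepages.sh@107/@110: pool == requested + surplus + reserved
    (( 1024 == 1024 ))           # hugepages.sh@109: no surplus/reserved, pool is exactly the request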
11:26:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.434 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.434 11:26:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.435 
11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.435 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.435 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.695 11:26:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.695 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.695 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.695 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.695 11:26:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.695 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.695 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.695 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.695 11:26:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.695 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.695 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.695 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.696 11:26:46 -- setup/common.sh@33 -- # echo 1024 00:03:15.696 11:26:46 -- setup/common.sh@33 -- # return 0 00:03:15.696 11:26:46 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:15.696 11:26:46 -- setup/hugepages.sh@112 -- # get_nodes 00:03:15.696 11:26:46 -- setup/hugepages.sh@27 -- # local node 00:03:15.696 11:26:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:15.696 11:26:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:15.696 11:26:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:15.696 11:26:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:15.696 11:26:46 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:15.696 11:26:46 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:15.696 11:26:46 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:15.696 11:26:46 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:15.696 11:26:46 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:15.696 11:26:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:15.696 11:26:46 -- setup/common.sh@18 -- # local node=0 00:03:15.696 11:26:46 -- setup/common.sh@19 -- # local var val 00:03:15.696 11:26:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:15.696 11:26:46 -- 
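After the system-wide total checks out, get_nodes (hugepages.sh@27-@33) walks the NUMA nodes so the same verification can be repeated per node; this box reports two nodes, with all 1024 pages on node 0. A sketch of that walk (the sysfs nr_hugepages path is an assumption; the trace only shows the resulting assignments nodes_sys[0]=1024 and nodes_sys[1]=0):

    shopt -s extglob nullglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # Assumed source of the traced per-node counts (1024 on node0, 0 on node1).
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}   # 2 on this system
    (( no_nodes > 0 ))          # hugepages.sh@33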
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.696 11:26:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:15.696 11:26:46 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:15.696 11:26:46 -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.696 11:26:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.696 11:26:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116988 kB' 'MemFree: 33514424 kB' 'MemUsed: 14602564 kB' 'SwapCached: 0 kB' 'Active: 7600388 kB' 'Inactive: 3543144 kB' 'Active(anon): 7408092 kB' 'Inactive(anon): 0 kB' 'Active(file): 192296 kB' 'Inactive(file): 3543144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11005560 kB' 'Mapped: 125700 kB' 'AnonPages: 141116 kB' 'Shmem: 7270120 kB' 'KernelStack: 9928 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132320 kB' 'Slab: 407992 kB' 'SReclaimable: 132320 kB' 'SUnreclaim: 275672 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 
00:03:15.696 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.696 11:26:46 -- 
setup/common.sh@32 -- # continue 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.696 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.696 11:26:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.697 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.697 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.697 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.697 11:26:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.697 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.697 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.697 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.697 11:26:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.697 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.697 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.697 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.697 11:26:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.697 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.697 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.697 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.697 11:26:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.697 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.697 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.697 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.697 11:26:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.697 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.697 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.697 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.697 11:26:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.697 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.697 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.697 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.697 11:26:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.697 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.697 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.697 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.697 11:26:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.697 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.697 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.697 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.697 11:26:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.697 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.697 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.697 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 
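The scan above is the same get_meminfo loop, but now with node=0: common.sh@24 swaps mem_f to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0" prefix that the @29 expansion strips before parsing (the values printed above, e.g. MemTotal: 48116988 kB, are already stripped). For illustration:

    # A raw per-node meminfo line vs. what the parser sees:
    line='Node 0 MemTotal: 48116988 kB'
    shopt -s extglob
    echo "${line#Node +([0-9]) }"   # -> "MemTotal: 48116988 kB", ready for IFS=': ' read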
00:03:15.697 11:26:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.697 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.697 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.697 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.697 11:26:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.697 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.697 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.697 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.697 11:26:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.697 11:26:46 -- setup/common.sh@32 -- # continue 00:03:15.697 11:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.697 11:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.697 11:26:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.697 11:26:46 -- setup/common.sh@33 -- # echo 0 00:03:15.697 11:26:46 -- setup/common.sh@33 -- # return 0 00:03:15.697 11:26:46 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:15.697 11:26:46 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:15.697 11:26:46 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:15.697 11:26:46 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:15.697 11:26:46 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:15.697 node0=1024 expecting 1024 00:03:15.697 11:26:46 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:15.697 11:26:46 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:15.697 11:26:46 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:15.697 11:26:46 -- setup/hugepages.sh@202 -- # setup output 00:03:15.697 11:26:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:15.697 11:26:46 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:18.236 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:18.236 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:18.236 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:18.236 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:18.236 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:18.236 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:18.236 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:18.236 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:18.236 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:18.236 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:18.236 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:18.236 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:18.236 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:18.236 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:18.236 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:18.236 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:18.236 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:18.236 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:18.236 11:26:48 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:18.236 11:26:48 -- setup/hugepages.sh@89 -- # local node 00:03:18.236 11:26:48 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:18.236 11:26:48 -- setup/hugepages.sh@91 -- # 
00:03:18.236 11:26:48 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:18.236 11:26:48 -- setup/hugepages.sh@89-94 -- # [condensed: local node sorted_t sorted_s surp resv anon]
00:03:18.236 11:26:48 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:18.236 11:26:48 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:18.236 11:26:48 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:18.236 11:26:48 -- setup/common.sh@18-31 -- # [condensed: node='', mem_f=/proc/meminfo, per-node file not selected, mapfile -t mem, 'Node +([0-9]) ' prefixes stripped, IFS=': ' read loop]
00:03:18.236 11:26:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73202120 kB' 'MemAvailable: 77699492 kB' 'Buffers: 3728 kB' 'Cached: 14414820 kB' 'SwapCached: 0 kB' 'Active: 10512524 kB' 'Inactive: 4420212 kB' 'Active(anon): 9900112 kB' 'Inactive(anon): 0 kB' 'Active(file): 612412 kB' 'Inactive(file): 4420212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517484 kB' 'Mapped: 183620 kB' 'Shmem: 9385924 kB' 'KReclaimable: 236780 kB' 'Slab: 665012 kB' 'SReclaimable: 236780 kB' 'SUnreclaim: 428232 kB' 'KernelStack: 16608 kB' 'PageTables: 8052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486808 kB' 'Committed_AS: 11169724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205716 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1844692 kB' 'DirectMap2M: 25094144 kB' 'DirectMap1G: 74448896 kB'
00:03:18.236 11:26:48 -- setup/common.sh@31-32 -- # [trace condensed: MemTotal through HardwareCorrupted checked against AnonHugePages; no match, continue]
00:03:18.237 11:26:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:18.237 11:26:48 -- setup/common.sh@33 -- # echo 0
00:03:18.237 11:26:48 -- setup/common.sh@33 -- # return 0
00:03:18.237 11:26:48 -- setup/hugepages.sh@97 -- # anon=0
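
The hugepages.sh@96 test above is the transparent-hugepage gate: /sys/kernel/mm/transparent_hugepage/enabled reports the active policy in brackets ('always [madvise] never' on this host), and AnonHugePages is only fetched when the policy is not [never]. The same check, sketched standalone (it assumes the get_meminfo sketch above; anon is the trace's variable name):

  # Count THP-backed anonymous memory only when THP is not fully disabled.
  anon=0
  thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # "always [madvise] never" here
  if [[ $thp != *"[never]"* ]]; then
      anon=$(get_meminfo AnonHugePages)                # 0 kB on this host
  fi
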
00:03:18.237 11:26:48 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:18.237 11:26:48 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:18.237 11:26:48 -- setup/common.sh@18-31 -- # [condensed: as above]
00:03:18.237 11:26:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73203876 kB' 'MemAvailable: 77701248 kB' 'Buffers: 3728 kB' 'Cached: 14414828 kB' 'SwapCached: 0 kB' 'Active: 10512004 kB' 'Inactive: 4420212 kB' 'Active(anon): 9899592 kB' 'Inactive(anon): 0 kB' 'Active(file): 612412 kB' 'Inactive(file): 4420212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517004 kB' 'Mapped: 183616 kB' 'Shmem: 9385932 kB' 'KReclaimable: 236780 kB' 'Slab: 665012 kB' 'SReclaimable: 236780 kB' 'SUnreclaim: 428232 kB' 'KernelStack: 16528 kB' 'PageTables: 7784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486808 kB' 'Committed_AS: 11169736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205684 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1844692 kB' 'DirectMap2M: 25094144 kB' 'DirectMap1G: 74448896 kB'
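
Each lookup here rescans the whole of /proc/meminfo, so this verification pass walks the file four times in a row (AnonHugePages, HugePages_Surp, HugePages_Rsvd, HugePages_Total). Purely as an illustration — this is not what setup/common.sh does — the same counters could be collected in one awk pass:

  # Illustrative one-pass alternative (not SPDK's implementation):
  eval "$(awk -F': +' '/^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)):/ {
              sub(/ kB$/, "", $2); printf "%s=%s\n", $1, $2
          }' /proc/meminfo)"
  echo "total=$HugePages_Total free=$HugePages_Free surp=$HugePages_Surp" \
       "resv=$HugePages_Rsvd anon=$AnonHugePages"
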
00:03:18.238 11:26:48 -- setup/common.sh@31-32 -- # [trace condensed: MemTotal through HugePages_Rsvd checked against HugePages_Surp; no match, continue]
00:03:18.239 11:26:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:18.239 11:26:48 -- setup/common.sh@33 -- # echo 0
00:03:18.239 11:26:48 -- setup/common.sh@33 -- # return 0
00:03:18.239 11:26:48 -- setup/hugepages.sh@99 -- # surp=0
00:03:18.239 11:26:48 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:18.239 11:26:48 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:18.239 11:26:48 -- setup/common.sh@18-31 -- # [condensed: as above]
00:03:18.239 11:26:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73205416 kB' 'MemAvailable: 77702788 kB' 'Buffers: 3728 kB' 'Cached: 14414836 kB' 'SwapCached: 0 kB' 'Active: 10511336 kB' 'Inactive: 4420212 kB' 'Active(anon): 9898924 kB' 'Inactive(anon): 0 kB' 'Active(file): 612412 kB' 'Inactive(file): 4420212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516332 kB' 'Mapped: 183616 kB' 'Shmem: 9385940 kB' 'KReclaimable: 236780 kB' 'Slab: 665024 kB' 'SReclaimable: 236780 kB' 'SUnreclaim: 428244 kB' 'KernelStack: 16544 kB' 'PageTables: 7820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486808 kB' 'Committed_AS: 11169752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205700 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1844692 kB' 'DirectMap2M: 25094144 kB' 'DirectMap1G: 74448896 kB'
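
The snapshots in this pass all come from /proc/meminfo because get_meminfo is being called without a node argument (the empty-node branch in the condensed setup); the node0=1024 comparison earlier in the log read node0's own sysfs meminfo instead. Assuming the two-argument signature from the sketch above, the difference is only the second argument:

  # System-wide pool vs. one NUMA node's pool (usage per the sketch above;
  # both paths are standard kernel locations).
  total_sys=$(get_meminfo HugePages_Total)     # /proc/meminfo           -> 1024
  total_node0=$(get_meminfo HugePages_Total 0) # /sys/devices/system/node/node0/meminfo
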
00:03:18.240 11:26:48 -- setup/common.sh@31-32 -- # [trace condensed: MemTotal through HugePages_Free checked against HugePages_Rsvd; no match, continue]
00:03:18.501 11:26:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:18.501 11:26:48 -- setup/common.sh@33 -- # echo 0
00:03:18.501 11:26:48 -- setup/common.sh@33 -- # return 0
00:03:18.501 11:26:48 -- setup/hugepages.sh@100 -- # resv=0
00:03:18.501 11:26:48 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:18.501 nr_hugepages=1024
00:03:18.501 11:26:48 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:18.501 resv_hugepages=0
00:03:18.501 11:26:48 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:18.501 surplus_hugepages=0
00:03:18.501 11:26:48 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:18.501 anon_hugepages=0
00:03:18.501 11:26:48 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:18.501 11:26:48 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:18.501 11:26:48 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:18.501 11:26:48 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:18.501 11:26:48 -- setup/common.sh@18-31 -- # [condensed: as above]
00:03:18.501 11:26:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73207068 kB' 'MemAvailable: 77704440 kB' 'Buffers: 3728 kB' 'Cached: 14414848 kB' 'SwapCached: 0 kB' 'Active: 10511672 kB' 'Inactive: 4420212 kB' 'Active(anon): 9899260 kB' 'Inactive(anon): 0 kB' 'Active(file): 612412 kB' 'Inactive(file): 4420212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516616 kB' 'Mapped: 183616 kB' 'Shmem: 9385952 kB' 'KReclaimable: 236780 kB' 'Slab: 665004 kB' 'SReclaimable: 236780 kB' 'SUnreclaim: 428224 kB' 'KernelStack: 16544 kB' 'PageTables: 7820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486808 kB' 'Committed_AS: 11169768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205700 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1844692 kB' 'DirectMap2M: 25094144 kB' 'DirectMap1G: 74448896 kB'
00:03:18.501 11:26:49 -- setup/common.sh@31-32 -- # [trace condensed: MemTotal through Unaccepted checked against HugePages_Total; no match, continue]
00:03:18.502 11:26:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:18.502 11:26:49 -- setup/common.sh@33 -- # echo 1024
00:03:18.502 11:26:49 -- setup/common.sh@33 -- # return 0
00:03:18.502 11:26:49 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages
+ surp + resv )) 00:03:18.502 11:26:49 -- setup/hugepages.sh@112 -- # get_nodes 00:03:18.502 11:26:49 -- setup/hugepages.sh@27 -- # local node 00:03:18.502 11:26:49 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.503 11:26:49 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:18.503 11:26:49 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.503 11:26:49 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:18.503 11:26:49 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:18.503 11:26:49 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:18.503 11:26:49 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:18.503 11:26:49 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:18.503 11:26:49 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:18.503 11:26:49 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.503 11:26:49 -- setup/common.sh@18 -- # local node=0 00:03:18.503 11:26:49 -- setup/common.sh@19 -- # local var val 00:03:18.503 11:26:49 -- setup/common.sh@20 -- # local mem_f mem 00:03:18.503 11:26:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.503 11:26:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:18.503 11:26:49 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:18.503 11:26:49 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.503 11:26:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.503 11:26:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116988 kB' 'MemFree: 33535616 kB' 'MemUsed: 14581372 kB' 'SwapCached: 0 kB' 'Active: 7600256 kB' 'Inactive: 3543144 kB' 'Active(anon): 7407960 kB' 'Inactive(anon): 0 kB' 'Active(file): 192296 kB' 'Inactive(file): 3543144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11005628 kB' 'Mapped: 125700 kB' 'AnonPages: 140924 kB' 'Shmem: 7270188 kB' 'KernelStack: 9896 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132320 kB' 'Slab: 407712 kB' 'SReclaimable: 132320 kB' 'SUnreclaim: 275392 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # continue 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # continue 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # continue 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # 
continue 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # continue 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # continue 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # continue 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # continue 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # continue 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # continue 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # continue 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # continue 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # continue 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # continue 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # continue 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # continue 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # [[ 
AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # continue 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # continue 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # continue 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # continue 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # continue 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # continue 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # continue 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # continue 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # continue 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # continue 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.503 11:26:49 -- setup/common.sh@32 -- # continue 00:03:18.503 11:26:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.504 11:26:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.504 11:26:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.504 11:26:49 -- setup/common.sh@32 -- # continue 00:03:18.504 11:26:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.504 11:26:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.504 11:26:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.504 11:26:49 -- setup/common.sh@32 -- # continue 00:03:18.504 11:26:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.504 11:26:49 
-- setup/common.sh@31 -- # read -r var val _ 00:03:18.504 11:26:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.504 11:26:49 -- setup/common.sh@32 -- # continue 00:03:18.504 11:26:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.504 11:26:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.504 11:26:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.504 11:26:49 -- setup/common.sh@32 -- # continue 00:03:18.504 11:26:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.504 11:26:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.504 11:26:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.504 11:26:49 -- setup/common.sh@32 -- # continue 00:03:18.504 11:26:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.504 11:26:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.504 11:26:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.504 11:26:49 -- setup/common.sh@32 -- # continue 00:03:18.504 11:26:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.504 11:26:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.504 11:26:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.504 11:26:49 -- setup/common.sh@32 -- # continue 00:03:18.504 11:26:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.504 11:26:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.504 11:26:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.504 11:26:49 -- setup/common.sh@32 -- # continue 00:03:18.504 11:26:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.504 11:26:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.504 11:26:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.504 11:26:49 -- setup/common.sh@32 -- # continue 00:03:18.504 11:26:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.504 11:26:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.504 11:26:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.504 11:26:49 -- setup/common.sh@33 -- # echo 0 00:03:18.504 11:26:49 -- setup/common.sh@33 -- # return 0 00:03:18.504 11:26:49 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:18.504 11:26:49 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:18.504 11:26:49 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:18.504 11:26:49 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:18.504 11:26:49 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:18.504 node0=1024 expecting 1024 00:03:18.504 11:26:49 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:18.504 00:03:18.504 real 0m5.455s 00:03:18.504 user 0m1.885s 00:03:18.504 sys 0m3.561s 00:03:18.504 11:26:49 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:18.504 11:26:49 -- common/autotest_common.sh@10 -- # set +x 00:03:18.504 ************************************ 00:03:18.504 END TEST no_shrink_alloc 00:03:18.504 ************************************ 00:03:18.504 11:26:49 -- setup/hugepages.sh@217 -- # clear_hp 00:03:18.504 11:26:49 -- setup/hugepages.sh@37 -- # local node hp 00:03:18.504 11:26:49 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:18.504 11:26:49 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.504 11:26:49 -- setup/hugepages.sh@41 -- # echo 0 
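The field-by-field walk above is setup/common.sh's get_meminfo helper: it opens /proc/meminfo (or the per-node copy under /sys/devices/system/node/nodeN/meminfo), splits every "Key: value" line on ': ', and continues past each non-matching key until the requested one (HugePages_Total, then HugePages_Surp) is found, at which point it echoes the value. A minimal standalone sketch of that pattern, with a hypothetical function name; the real helper strips the per-node "Node N " prefixes with mapfile and extglob expansion rather than sed:

    # Sketch of the get_meminfo pattern traced above (helper name is mine).
    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # split each "Key:   value [kB]" line on ': ' and match the key
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")   # per-node lines carry a "Node N " prefix
        return 1
    }

    get_meminfo_sketch HugePages_Total     # system-wide -> 1024 on this box
    get_meminfo_sketch HugePages_Surp 0    # node 0 -> 0, as echoed above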
00:03:18.504 11:26:49 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.504 11:26:49 -- setup/hugepages.sh@41 -- # echo 0 00:03:18.504 11:26:49 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:18.504 11:26:49 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.504 11:26:49 -- setup/hugepages.sh@41 -- # echo 0 00:03:18.504 11:26:49 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.504 11:26:49 -- setup/hugepages.sh@41 -- # echo 0 00:03:18.504 11:26:49 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:18.504 11:26:49 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:18.504 00:03:18.504 real 0m24.730s 00:03:18.504 user 0m6.955s 00:03:18.504 sys 0m12.892s 00:03:18.504 11:26:49 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:18.504 11:26:49 -- common/autotest_common.sh@10 -- # set +x 00:03:18.504 ************************************ 00:03:18.504 END TEST hugepages 00:03:18.504 ************************************ 00:03:18.504 11:26:49 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:03:18.504 11:26:49 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:18.504 11:26:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:18.504 11:26:49 -- common/autotest_common.sh@10 -- # set +x 00:03:18.504 ************************************ 00:03:18.504 START TEST driver 00:03:18.504 ************************************ 00:03:18.504 11:26:49 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:03:18.763 * Looking for test storage... 
00:03:18.763 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:18.763 11:26:49 -- setup/driver.sh@68 -- # setup reset 00:03:18.763 11:26:49 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:18.763 11:26:49 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:22.956 11:26:53 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:22.956 11:26:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:22.956 11:26:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:22.956 11:26:53 -- common/autotest_common.sh@10 -- # set +x 00:03:22.956 ************************************ 00:03:22.956 START TEST guess_driver 00:03:22.956 ************************************ 00:03:22.956 11:26:53 -- common/autotest_common.sh@1121 -- # guess_driver 00:03:22.956 11:26:53 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:22.956 11:26:53 -- setup/driver.sh@47 -- # local fail=0 00:03:22.956 11:26:53 -- setup/driver.sh@49 -- # pick_driver 00:03:22.956 11:26:53 -- setup/driver.sh@36 -- # vfio 00:03:22.956 11:26:53 -- setup/driver.sh@21 -- # local iommu_groups 00:03:22.956 11:26:53 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:22.956 11:26:53 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:22.956 11:26:53 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:22.956 11:26:53 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:22.956 11:26:53 -- setup/driver.sh@29 -- # (( 163 > 0 )) 00:03:22.956 11:26:53 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:22.956 11:26:53 -- setup/driver.sh@14 -- # mod vfio_pci 00:03:22.956 11:26:53 -- setup/driver.sh@12 -- # dep vfio_pci 00:03:22.956 11:26:53 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:22.956 11:26:53 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:22.956 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:22.956 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:22.956 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:22.956 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:22.956 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:22.956 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:22.956 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:22.956 11:26:53 -- setup/driver.sh@30 -- # return 0 00:03:22.956 11:26:53 -- setup/driver.sh@37 -- # echo vfio-pci 00:03:22.956 11:26:53 -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:22.956 11:26:53 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:22.956 11:26:53 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:22.956 Looking for driver=vfio-pci 00:03:22.956 11:26:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:22.956 11:26:53 -- setup/driver.sh@45 -- # setup output config 00:03:22.956 11:26:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:22.956 11:26:53 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:26.249 11:26:56 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.249 11:26:56 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
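pick_driver settles on vfio-pci here because 163 IOMMU groups are populated and modprobe --show-depends resolves the full module stack (irqbypass, iommufd, vfio, vfio_iommu_type1, vfio-pci-core, vfio-pci). Reduced to its decision logic, and with my own helper name, the check is roughly:

    # Rough reduction of the vfio eligibility check traced above; assumes
    # the iommu_groups glob matches, as the 163 groups here do.
    pick_vfio_sketch() {
        local unsafe_vfio=N iommu_groups
        # unsafe no-IOMMU mode would also qualify, when the knob exists
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        iommu_groups=(/sys/kernel/iommu_groups/*)
        # a populated IOMMU (or unsafe mode) plus a resolvable module wins
        if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == [Yy] ]]; then
            if modprobe --show-depends vfio_pci &> /dev/null; then
                echo vfio-pci
                return 0
            fi
        fi
        echo 'No valid driver found'
        return 1
    }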
00:03:26.249 11:26:56 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.249 11:26:56 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.249 11:26:56 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.249 11:26:56 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.249 11:26:56 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.249 11:26:56 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.249 11:26:56 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.249 11:26:56 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.249 11:26:56 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.250 11:26:56 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.250 11:26:56 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.250 11:26:56 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.250 11:26:56 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.250 11:26:56 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.250 11:26:56 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.250 11:26:56 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.250 11:26:56 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.250 11:26:56 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.250 11:26:56 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.250 11:26:56 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.250 11:26:56 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.250 11:26:56 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.250 11:26:56 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.250 11:26:56 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.250 11:26:56 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.250 11:26:56 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.250 11:26:56 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.250 11:26:56 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.250 11:26:56 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.250 11:26:56 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.250 11:26:56 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.250 11:26:56 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.250 11:26:56 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.250 11:26:56 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.250 11:26:56 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.250 11:26:56 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.250 11:26:56 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.250 11:26:56 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.250 11:26:56 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.250 11:26:56 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.250 11:26:56 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.250 11:26:56 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.250 11:26:56 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.250 11:26:56 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.250 11:26:56 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.250 11:26:56 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:31.527 11:27:01 -- setup/driver.sh@58 -- # [[ -> == \-\> 
]] 00:03:31.527 11:27:01 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:31.527 11:27:01 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:31.527 11:27:01 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:31.527 11:27:01 -- setup/driver.sh@65 -- # setup reset 00:03:31.527 11:27:01 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:31.527 11:27:01 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:35.790 00:03:35.790 real 0m12.533s 00:03:35.790 user 0m2.341s 00:03:35.790 sys 0m4.633s 00:03:35.790 11:27:06 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:35.790 11:27:06 -- common/autotest_common.sh@10 -- # set +x 00:03:35.790 ************************************ 00:03:35.790 END TEST guess_driver 00:03:35.790 ************************************ 00:03:35.790 00:03:35.790 real 0m16.927s 00:03:35.790 user 0m3.511s 00:03:35.790 sys 0m7.104s 00:03:35.790 11:27:06 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:35.790 11:27:06 -- common/autotest_common.sh@10 -- # set +x 00:03:35.790 ************************************ 00:03:35.790 END TEST driver 00:03:35.790 ************************************ 00:03:35.790 11:27:06 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:03:35.790 11:27:06 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:35.790 11:27:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:35.790 11:27:06 -- common/autotest_common.sh@10 -- # set +x 00:03:35.790 ************************************ 00:03:35.790 START TEST devices 00:03:35.790 ************************************ 00:03:35.790 11:27:06 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:03:35.790 * Looking for test storage... 
00:03:35.790 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:35.790 11:27:06 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:35.790 11:27:06 -- setup/devices.sh@192 -- # setup reset 00:03:35.790 11:27:06 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:35.790 11:27:06 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:39.986 11:27:09 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:39.987 11:27:09 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:03:39.987 11:27:09 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:03:39.987 11:27:09 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:03:39.987 11:27:09 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:39.987 11:27:09 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:03:39.987 11:27:09 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:03:39.987 11:27:09 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:39.987 11:27:09 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:39.987 11:27:09 -- setup/devices.sh@196 -- # blocks=() 00:03:39.987 11:27:09 -- setup/devices.sh@196 -- # declare -a blocks 00:03:39.987 11:27:09 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:39.987 11:27:09 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:39.987 11:27:09 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:39.987 11:27:09 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:39.987 11:27:09 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:39.987 11:27:09 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:39.987 11:27:09 -- setup/devices.sh@202 -- # pci=0000:5f:00.0 00:03:39.987 11:27:09 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\f\:\0\0\.\0* ]] 00:03:39.987 11:27:09 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:39.987 11:27:09 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:39.987 11:27:09 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:39.987 No valid GPT data, bailing 00:03:39.987 11:27:09 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:39.987 11:27:09 -- scripts/common.sh@391 -- # pt= 00:03:39.987 11:27:09 -- scripts/common.sh@392 -- # return 1 00:03:39.987 11:27:09 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:39.987 11:27:09 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:39.987 11:27:09 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:39.987 11:27:09 -- setup/common.sh@80 -- # echo 8001563222016 00:03:39.987 11:27:09 -- setup/devices.sh@204 -- # (( 8001563222016 >= min_disk_size )) 00:03:39.987 11:27:09 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:39.987 11:27:09 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5f:00.0 00:03:39.987 11:27:09 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:39.987 11:27:09 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:39.987 11:27:09 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:39.987 11:27:09 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:39.987 11:27:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:39.987 11:27:09 -- common/autotest_common.sh@10 -- # set +x 00:03:39.987 ************************************ 00:03:39.987 START TEST nvme_mount 00:03:39.987 ************************************ 00:03:39.987 11:27:10 -- 
common/autotest_common.sh@1121 -- # nvme_mount 00:03:39.987 11:27:10 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:39.987 11:27:10 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:39.987 11:27:10 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:39.987 11:27:10 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:39.987 11:27:10 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:39.987 11:27:10 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:39.987 11:27:10 -- setup/common.sh@40 -- # local part_no=1 00:03:39.987 11:27:10 -- setup/common.sh@41 -- # local size=1073741824 00:03:39.987 11:27:10 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:39.987 11:27:10 -- setup/common.sh@44 -- # parts=() 00:03:39.987 11:27:10 -- setup/common.sh@44 -- # local parts 00:03:39.987 11:27:10 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:39.987 11:27:10 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:39.987 11:27:10 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:39.987 11:27:10 -- setup/common.sh@46 -- # (( part++ )) 00:03:39.987 11:27:10 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:39.987 11:27:10 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:39.987 11:27:10 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:39.987 11:27:10 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:40.563 Creating new GPT entries in memory. 00:03:40.563 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:40.563 other utilities. 00:03:40.563 11:27:11 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:40.563 11:27:11 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:40.563 11:27:11 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:40.563 11:27:11 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:40.563 11:27:11 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:41.503 Creating new GPT entries in memory. 00:03:41.503 The operation has completed successfully. 
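partition_drive above converts the 1 GiB partition size into 512-byte sectors with (( size /= 512 )), zaps any existing label, and then adds partitions one at a time under flock while the sync_dev_uevents.sh helper waits for udev to surface each new node. The sector arithmetic and sgdisk calls, stripped of the SPDK plumbing:

    # Shape of the partitioning traced above; device and size are the
    # values from this run, and flock guards against concurrent writers.
    disk=/dev/nvme0n1
    size=$((1073741824 / 512))      # 1 GiB in 512-byte sectors = 2097152
    sgdisk "$disk" --zap-all        # destroy old GPT and MBR structures
    part_start=2048                 # first aligned usable sector
    part_end=$((part_start + size - 1))   # 2099199, matching the trace
    flock "$disk" sgdisk "$disk" --new=1:"$part_start":"$part_end"

The fresh partition is then formatted and mounted exactly as the trace shows next: mkfs.ext4 -qF /dev/nvme0n1p1 followed by a mount onto the nvme_mount test directory.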
00:03:41.503 11:27:12 -- setup/common.sh@57 -- # (( part++ )) 00:03:41.503 11:27:12 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:41.503 11:27:12 -- setup/common.sh@62 -- # wait 2868681 00:03:41.503 11:27:12 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:41.503 11:27:12 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:41.503 11:27:12 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:41.503 11:27:12 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:41.503 11:27:12 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:41.503 11:27:12 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:41.503 11:27:12 -- setup/devices.sh@105 -- # verify 0000:5f:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:41.504 11:27:12 -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:03:41.504 11:27:12 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:41.504 11:27:12 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:41.504 11:27:12 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:41.504 11:27:12 -- setup/devices.sh@53 -- # local found=0 00:03:41.504 11:27:12 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:41.504 11:27:12 -- setup/devices.sh@56 -- # : 00:03:41.504 11:27:12 -- setup/devices.sh@59 -- # local pci status 00:03:41.504 11:27:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.504 11:27:12 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:03:41.504 11:27:12 -- setup/devices.sh@47 -- # setup output config 00:03:41.504 11:27:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:41.504 11:27:12 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:44.794 11:27:14 -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:44.794 11:27:14 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:44.794 11:27:14 -- setup/devices.sh@63 -- # found=1 00:03:44.794 11:27:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.794 11:27:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:44.794 11:27:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.794 11:27:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:44.794 11:27:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.794 11:27:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:44.794 11:27:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.794 11:27:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:44.794 11:27:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.794 11:27:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:44.794 11:27:14 -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:03:44.794 11:27:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:44.794 11:27:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.794 11:27:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:44.794 11:27:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.794 11:27:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:44.794 11:27:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.794 11:27:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:44.794 11:27:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.794 11:27:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:44.794 11:27:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.794 11:27:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:44.794 11:27:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.794 11:27:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:44.794 11:27:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.794 11:27:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:44.794 11:27:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.794 11:27:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:44.794 11:27:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.794 11:27:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:44.794 11:27:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.794 11:27:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:44.794 11:27:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.794 11:27:15 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:44.794 11:27:15 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:44.794 11:27:15 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:44.794 11:27:15 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:44.794 11:27:15 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:44.794 11:27:15 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:44.794 11:27:15 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:44.794 11:27:15 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:44.794 11:27:15 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:44.794 11:27:15 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:44.794 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:44.794 11:27:15 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:44.794 11:27:15 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:44.794 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:44.794 /dev/nvme0n1: 8 bytes were erased at offset 0x74702555e00 (gpt): 45 46 49 20 50 41 52 54 00:03:44.794 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:44.794 /dev/nvme0n1: calling ioctl to re-read partition table: Success 
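cleanup_nvme runs between the two mount variants: it unmounts the test directory if it is still a mountpoint, then wipes signatures from the partition and from the whole disk, which is why wipefs reports the ext4 magic (53 ef), both GPT headers (45 46 49 20 50 41 52 54, i.e. "EFI PART"), and the protective MBR (55 aa) above. A bare-bones equivalent, with placeholder arguments:

    # Bare-bones version of the cleanup traced above.
    cleanup_nvme_sketch() {
        local mnt=$1 disk=$2            # e.g. .../nvme_mount /dev/nvme0n1
        # only unmount when something is actually mounted there
        if mountpoint -q "$mnt"; then
            umount "$mnt"
        fi
        # erase the filesystem signature on the partition, if present
        [[ -b ${disk}p1 ]] && wipefs --all "${disk}p1"
        # erase primary/backup GPT and the protective MBR on the disk
        [[ -b $disk ]] && wipefs --all "$disk"
    }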
00:03:44.794 11:27:15 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:44.794 11:27:15 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:44.794 11:27:15 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:44.794 11:27:15 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:44.794 11:27:15 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:44.794 11:27:15 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:44.794 11:27:15 -- setup/devices.sh@116 -- # verify 0000:5f:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:44.794 11:27:15 -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:03:44.794 11:27:15 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:44.794 11:27:15 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:44.794 11:27:15 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:44.794 11:27:15 -- setup/devices.sh@53 -- # local found=0 00:03:44.794 11:27:15 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:44.794 11:27:15 -- setup/devices.sh@56 -- # : 00:03:44.794 11:27:15 -- setup/devices.sh@59 -- # local pci status 00:03:44.794 11:27:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.794 11:27:15 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:03:44.794 11:27:15 -- setup/devices.sh@47 -- # setup output config 00:03:44.794 11:27:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.794 11:27:15 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:48.082 11:27:18 -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:48.082 11:27:18 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:48.082 11:27:18 -- setup/devices.sh@63 -- # found=1 00:03:48.082 11:27:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.082 11:27:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:48.082 11:27:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.082 11:27:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:48.082 11:27:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.082 11:27:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:48.082 11:27:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.082 11:27:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:48.082 11:27:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.082 11:27:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:48.082 11:27:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.082 11:27:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:48.083 11:27:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 
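The runs of [[ 0000:xx:04.x == ... ]] checks before and after this point are the verify helper replaying setup.sh config with PCI_ALLOWED pinned to the NVMe at 0000:5f:00.0: each output line is read as pci _ _ status, every other PCI function must not match the allowed address, and the allowed device must report the expected mount in its "Active devices" status for found to flip to 1. A loose paraphrase of that loop, run from an spdk checkout; the real status matching uses the escaped pattern tests shown in the trace:

    # Loose paraphrase of the verify loop traced above.
    allowed=0000:5f:00.0
    want='nvme0n1:nvme0n1'      # device:mount pair this stage expects
    found=0
    while read -r pci _ _ status; do
        if [[ $pci == "$allowed" && $status == *"$want"* ]]; then
            found=1
        fi
    done < <(PCI_ALLOWED=$allowed ./scripts/setup.sh config)
    (( found == 1 )) && echo 'allowed NVMe is holding the expected mount'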
00:03:48.083 11:27:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:48.083 11:27:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.083 11:27:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:48.083 11:27:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.083 11:27:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:48.083 11:27:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.083 11:27:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:48.083 11:27:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.083 11:27:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:48.083 11:27:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.083 11:27:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:48.083 11:27:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.083 11:27:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:48.083 11:27:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.083 11:27:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:48.083 11:27:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.083 11:27:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:48.083 11:27:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.083 11:27:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:48.083 11:27:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.083 11:27:18 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:48.083 11:27:18 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:48.083 11:27:18 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:48.083 11:27:18 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:48.083 11:27:18 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:48.083 11:27:18 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:48.083 11:27:18 -- setup/devices.sh@125 -- # verify 0000:5f:00.0 data@nvme0n1 '' '' 00:03:48.083 11:27:18 -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:03:48.083 11:27:18 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:48.083 11:27:18 -- setup/devices.sh@50 -- # local mount_point= 00:03:48.083 11:27:18 -- setup/devices.sh@51 -- # local test_file= 00:03:48.083 11:27:18 -- setup/devices.sh@53 -- # local found=0 00:03:48.083 11:27:18 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:48.083 11:27:18 -- setup/devices.sh@59 -- # local pci status 00:03:48.083 11:27:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.083 11:27:18 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:03:48.083 11:27:18 -- setup/devices.sh@47 -- # setup output config 00:03:48.083 11:27:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.083 11:27:18 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:51.375 11:27:21 -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:51.375 11:27:21 -- setup/devices.sh@62 -- # [[ Active 
devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:51.375 11:27:21 -- setup/devices.sh@63 -- # found=1 00:03:51.375 11:27:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.375 11:27:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:51.375 11:27:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.375 11:27:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:51.375 11:27:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.375 11:27:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:51.375 11:27:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.375 11:27:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:51.375 11:27:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.375 11:27:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:51.375 11:27:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.375 11:27:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:51.375 11:27:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.375 11:27:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:51.375 11:27:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.375 11:27:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:51.375 11:27:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.375 11:27:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:51.375 11:27:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.375 11:27:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:51.375 11:27:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.375 11:27:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:51.375 11:27:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.375 11:27:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:51.375 11:27:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.375 11:27:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:51.375 11:27:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.375 11:27:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:51.375 11:27:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.375 11:27:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:51.375 11:27:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.375 11:27:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:51.375 11:27:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.375 11:27:21 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:51.375 11:27:21 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:51.375 11:27:21 -- setup/devices.sh@68 -- # return 0 00:03:51.375 11:27:21 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:51.375 11:27:21 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:51.375 11:27:21 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:51.375 11:27:21 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:51.375 11:27:21 -- setup/devices.sh@28 -- # wipefs 
--all /dev/nvme0n1 00:03:51.375 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:51.375 00:03:51.375 real 0m11.661s 00:03:51.375 user 0m3.315s 00:03:51.375 sys 0m6.190s 00:03:51.375 11:27:21 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:51.375 11:27:21 -- common/autotest_common.sh@10 -- # set +x 00:03:51.375 ************************************ 00:03:51.375 END TEST nvme_mount 00:03:51.375 ************************************ 00:03:51.375 11:27:21 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:51.376 11:27:21 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:51.376 11:27:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:51.376 11:27:21 -- common/autotest_common.sh@10 -- # set +x 00:03:51.376 ************************************ 00:03:51.376 START TEST dm_mount 00:03:51.376 ************************************ 00:03:51.376 11:27:21 -- common/autotest_common.sh@1121 -- # dm_mount 00:03:51.376 11:27:21 -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:51.376 11:27:21 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:51.376 11:27:21 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:51.376 11:27:21 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:51.376 11:27:21 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:51.376 11:27:21 -- setup/common.sh@40 -- # local part_no=2 00:03:51.376 11:27:21 -- setup/common.sh@41 -- # local size=1073741824 00:03:51.376 11:27:21 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:51.376 11:27:21 -- setup/common.sh@44 -- # parts=() 00:03:51.376 11:27:21 -- setup/common.sh@44 -- # local parts 00:03:51.376 11:27:21 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:51.376 11:27:21 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:51.376 11:27:21 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:51.376 11:27:21 -- setup/common.sh@46 -- # (( part++ )) 00:03:51.376 11:27:21 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:51.376 11:27:21 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:51.376 11:27:21 -- setup/common.sh@46 -- # (( part++ )) 00:03:51.376 11:27:21 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:51.376 11:27:21 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:51.376 11:27:21 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:51.376 11:27:21 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:52.313 Creating new GPT entries in memory. 00:03:52.313 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:52.313 other utilities. 00:03:52.313 11:27:22 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:52.313 11:27:22 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:52.313 11:27:22 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:52.313 11:27:22 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:52.313 11:27:22 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:53.257 Creating new GPT entries in memory. 00:03:53.257 The operation has completed successfully. 00:03:53.257 11:27:23 -- setup/common.sh@57 -- # (( part++ )) 00:03:53.257 11:27:23 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:53.257 11:27:23 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:53.257 11:27:23 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:53.257 11:27:23 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:54.194 The operation has completed successfully. 00:03:54.194 11:27:24 -- setup/common.sh@57 -- # (( part++ )) 00:03:54.195 11:27:24 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:54.195 11:27:24 -- setup/common.sh@62 -- # wait 2872329 00:03:54.195 11:27:24 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:54.195 11:27:24 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:54.195 11:27:24 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:54.195 11:27:24 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:54.195 11:27:24 -- setup/devices.sh@160 -- # for t in {1..5} 00:03:54.195 11:27:24 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:54.195 11:27:24 -- setup/devices.sh@161 -- # break 00:03:54.195 11:27:24 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:54.195 11:27:24 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:54.195 11:27:24 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:54.195 11:27:24 -- setup/devices.sh@166 -- # dm=dm-0 00:03:54.195 11:27:24 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:54.195 11:27:24 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:54.195 11:27:24 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:54.195 11:27:24 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount size= 00:03:54.195 11:27:24 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:54.195 11:27:24 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:54.195 11:27:24 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:54.195 11:27:24 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:54.454 11:27:24 -- setup/devices.sh@174 -- # verify 0000:5f:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:54.454 11:27:24 -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:03:54.454 11:27:24 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:54.454 11:27:24 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:54.454 11:27:24 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:54.454 11:27:24 -- setup/devices.sh@53 -- # local found=0 00:03:54.454 11:27:24 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:54.454 11:27:24 -- setup/devices.sh@56 -- # : 00:03:54.454 11:27:24 -- setup/devices.sh@59 -- # local pci status 00:03:54.454 11:27:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.454 11:27:24 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:03:54.454 11:27:24 -- setup/devices.sh@47 -- # setup output config 
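dm_mount above carves two 1 GiB partitions, joins them with dmsetup create nvme_dm_test, and then resolves the friendly /dev/mapper name to its kernel dm-N node so it can verify that both member partitions list the mapper device under their holders/ directory. That resolution step on its own:

    # Resolving the mapper name to its dm-N node, as traced above.
    dm_name=nvme_dm_test                       # created via dmsetup earlier
    dm=$(readlink -f "/dev/mapper/$dm_name")   # -> /dev/dm-0 on this run
    dm=${dm##*/}                               # keep just "dm-0"
    # both member partitions should list the dm device as a holder
    for part in nvme0n1p1 nvme0n1p2; do
        [[ -e /sys/class/block/$part/holders/$dm ]] && echo "$part held by $dm"
    done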
00:03:54.454 11:27:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.454 11:27:24 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:56.991 11:27:27 -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:56.991 11:27:27 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:56.992 11:27:27 -- setup/devices.sh@63 -- # found=1 00:03:56.992 11:27:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.992 11:27:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:56.992 11:27:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.992 11:27:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:56.992 11:27:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.992 11:27:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:56.992 11:27:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.992 11:27:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:56.992 11:27:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.992 11:27:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:56.992 11:27:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.992 11:27:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:56.992 11:27:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.992 11:27:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:56.992 11:27:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.992 11:27:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:56.992 11:27:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.992 11:27:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:56.992 11:27:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.992 11:27:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:56.992 11:27:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.992 11:27:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:56.992 11:27:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.992 11:27:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:56.992 11:27:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.992 11:27:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:56.992 11:27:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.992 11:27:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:56.992 11:27:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.992 11:27:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:56.992 11:27:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.992 11:27:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:56.992 11:27:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.251 11:27:27 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:57.251 11:27:27 -- setup/devices.sh@68 -- # [[ -n 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:57.251 11:27:27 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:57.251 11:27:27 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:57.251 11:27:27 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:57.251 11:27:27 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:57.251 11:27:27 -- setup/devices.sh@184 -- # verify 0000:5f:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:57.251 11:27:27 -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:03:57.251 11:27:27 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:57.251 11:27:27 -- setup/devices.sh@50 -- # local mount_point= 00:03:57.251 11:27:27 -- setup/devices.sh@51 -- # local test_file= 00:03:57.251 11:27:27 -- setup/devices.sh@53 -- # local found=0 00:03:57.251 11:27:27 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:57.251 11:27:27 -- setup/devices.sh@59 -- # local pci status 00:03:57.251 11:27:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.251 11:27:27 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:03:57.251 11:27:27 -- setup/devices.sh@47 -- # setup output config 00:03:57.251 11:27:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.251 11:27:27 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:59.787 11:27:30 -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:59.787 11:27:30 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:59.787 11:27:30 -- setup/devices.sh@63 -- # found=1 00:03:59.787 11:27:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.787 11:27:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:59.787 11:27:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.787 11:27:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:59.787 11:27:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.787 11:27:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:59.787 11:27:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.787 11:27:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:59.787 11:27:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.787 11:27:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:59.787 11:27:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.787 11:27:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:59.787 11:27:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.787 11:27:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:59.787 11:27:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.787 11:27:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:59.788 11:27:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.788 11:27:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == 
\0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:59.788 11:27:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.788 11:27:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:59.788 11:27:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.788 11:27:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:59.788 11:27:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.788 11:27:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:59.788 11:27:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.788 11:27:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:59.788 11:27:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.788 11:27:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:59.788 11:27:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.788 11:27:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:59.788 11:27:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.788 11:27:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:59.788 11:27:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.048 11:27:30 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:00.048 11:27:30 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:00.048 11:27:30 -- setup/devices.sh@68 -- # return 0 00:04:00.048 11:27:30 -- setup/devices.sh@187 -- # cleanup_dm 00:04:00.048 11:27:30 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:00.048 11:27:30 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:00.048 11:27:30 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:00.048 11:27:30 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:00.048 11:27:30 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:00.048 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:00.048 11:27:30 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:00.048 11:27:30 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:00.048 00:04:00.048 real 0m8.880s 00:04:00.048 user 0m2.033s 00:04:00.048 sys 0m3.837s 00:04:00.048 11:27:30 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:00.048 11:27:30 -- common/autotest_common.sh@10 -- # set +x 00:04:00.048 ************************************ 00:04:00.048 END TEST dm_mount 00:04:00.048 ************************************ 00:04:00.048 11:27:30 -- setup/devices.sh@1 -- # cleanup 00:04:00.048 11:27:30 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:00.048 11:27:30 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:00.048 11:27:30 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:00.048 11:27:30 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:00.048 11:27:30 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:00.048 11:27:30 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:00.308 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:00.308 /dev/nvme0n1: 8 bytes were erased at offset 0x74702555e00 (gpt): 45 46 49 20 50 41 52 54 00:04:00.308 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:00.308 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:00.308 11:27:30 -- setup/devices.sh@12 -- 
# cleanup_dm 00:04:00.308 11:27:30 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:00.308 11:27:30 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:00.308 11:27:30 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:00.308 11:27:30 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:00.308 11:27:30 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:00.308 11:27:30 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:00.308 00:04:00.308 real 0m24.762s 00:04:00.308 user 0m6.773s 00:04:00.308 sys 0m12.733s 00:04:00.308 11:27:30 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:00.308 11:27:30 -- common/autotest_common.sh@10 -- # set +x 00:04:00.308 ************************************ 00:04:00.308 END TEST devices 00:04:00.308 ************************************ 00:04:00.308 00:04:00.308 real 1m32.203s 00:04:00.308 user 0m24.192s 00:04:00.308 sys 0m46.062s 00:04:00.308 11:27:30 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:00.308 11:27:31 -- common/autotest_common.sh@10 -- # set +x 00:04:00.308 ************************************ 00:04:00.308 END TEST setup.sh 00:04:00.308 ************************************ 00:04:00.308 11:27:31 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:04:02.844 Hugepages 00:04:02.844 node hugesize free / total 00:04:02.844 node0 1048576kB 0 / 0 00:04:02.844 node0 2048kB 2048 / 2048 00:04:02.844 node1 1048576kB 0 / 0 00:04:02.844 node1 2048kB 0 / 0 00:04:02.844 00:04:02.844 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:02.844 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:02.844 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:02.844 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:02.844 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:02.844 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:02.844 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:03.103 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:03.103 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:03.103 NVMe 0000:5f:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:03.103 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:03.103 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:03.103 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:03.103 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:03.103 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:03.103 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:03.103 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:03.103 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:03.103 11:27:33 -- spdk/autotest.sh@130 -- # uname -s 00:04:03.103 11:27:33 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:03.103 11:27:33 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:03.103 11:27:33 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:06.394 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:06.394 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:06.394 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:06.394 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:06.394 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:06.394 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:06.394 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:06.394 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:06.394 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:06.394 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:06.394 
0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:06.394 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:06.394 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:06.394 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:06.394 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:06.394 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:11.780 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:04:11.780 11:27:41 -- common/autotest_common.sh@1528 -- # sleep 1 00:04:12.348 11:27:42 -- common/autotest_common.sh@1529 -- # bdfs=() 00:04:12.348 11:27:42 -- common/autotest_common.sh@1529 -- # local bdfs 00:04:12.348 11:27:42 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:04:12.348 11:27:42 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:04:12.348 11:27:42 -- common/autotest_common.sh@1509 -- # bdfs=() 00:04:12.348 11:27:42 -- common/autotest_common.sh@1509 -- # local bdfs 00:04:12.348 11:27:42 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:12.348 11:27:42 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:12.348 11:27:42 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:04:12.348 11:27:42 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:04:12.348 11:27:42 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:5f:00.0 00:04:12.348 11:27:42 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:14.890 Waiting for block devices as requested 00:04:14.890 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:04:14.890 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:14.890 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:15.149 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:15.149 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:15.149 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:15.408 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:15.408 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:15.408 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:15.408 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:15.669 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:15.669 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:15.669 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:15.928 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:15.928 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:15.928 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:16.188 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:16.188 11:27:46 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:04:16.188 11:27:46 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:5f:00.0 00:04:16.188 11:27:46 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:04:16.188 11:27:46 -- common/autotest_common.sh@1498 -- # grep 0000:5f:00.0/nvme/nvme 00:04:16.188 11:27:46 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 00:04:16.188 11:27:46 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 ]] 00:04:16.188 11:27:46 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 00:04:16.188 11:27:46 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:04:16.188 11:27:46 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:04:16.188 
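get_nvme_ctrlr_from_bdf above maps the PCI address 0000:5f:00.0 to its character device by resolving the /sys/class/nvme symlinks; the OACS capability checks on that controller follow below. A standalone sketch of the same lookup — the loop construction is illustrative, but the sysfs layout matches the trace:

  bdf=0000:5f:00.0                      # target address from this run
  for ctrl in /sys/class/nvme/nvme*; do
    # each symlink resolves to .../<bdf>/nvme/nvmeN for its owning device
    if readlink -f "$ctrl" | grep -q "$bdf/nvme/nvme"; then
      echo "/dev/$(basename "$ctrl")"   # -> /dev/nvme0 on this node
    fi
  done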
11:27:46 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:04:16.188 11:27:46 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:04:16.188 11:27:46 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:04:16.188 11:27:46 -- common/autotest_common.sh@1541 -- # grep oacs 00:04:16.188 11:27:46 -- common/autotest_common.sh@1541 -- # oacs=' 0xe' 00:04:16.188 11:27:46 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:04:16.188 11:27:46 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:04:16.188 11:27:46 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:04:16.188 11:27:46 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:04:16.188 11:27:46 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:04:16.188 11:27:46 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:04:16.188 11:27:46 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:04:16.188 11:27:46 -- common/autotest_common.sh@1553 -- # continue 00:04:16.188 11:27:46 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:16.188 11:27:46 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:16.188 11:27:46 -- common/autotest_common.sh@10 -- # set +x 00:04:16.188 11:27:46 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:16.188 11:27:46 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:16.188 11:27:46 -- common/autotest_common.sh@10 -- # set +x 00:04:16.188 11:27:46 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:19.562 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:19.562 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:19.562 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:19.562 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:19.562 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:19.562 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:19.562 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:19.562 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:19.562 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:19.562 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:19.562 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:19.562 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:19.562 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:19.562 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:19.562 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:19.562 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:24.836 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:04:24.836 11:27:55 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:24.836 11:27:55 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:24.836 11:27:55 -- common/autotest_common.sh@10 -- # set +x 00:04:24.836 11:27:55 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:24.836 11:27:55 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:04:24.836 11:27:55 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:04:24.836 11:27:55 -- common/autotest_common.sh@1573 -- # bdfs=() 00:04:24.837 11:27:55 -- common/autotest_common.sh@1573 -- # local bdfs 00:04:24.837 11:27:55 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:04:24.837 11:27:55 -- common/autotest_common.sh@1509 -- # bdfs=() 00:04:24.837 11:27:55 -- common/autotest_common.sh@1509 -- # local bdfs 00:04:24.837 11:27:55 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:24.837 11:27:55 -- common/autotest_common.sh@1510 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:24.837 11:27:55 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:04:24.837 11:27:55 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:04:24.837 11:27:55 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:5f:00.0 00:04:24.837 11:27:55 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:04:24.837 11:27:55 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:5f:00.0/device 00:04:24.837 11:27:55 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:04:24.837 11:27:55 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:24.837 11:27:55 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:04:24.837 11:27:55 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:5f:00.0 00:04:24.837 11:27:55 -- common/autotest_common.sh@1588 -- # [[ -z 0000:5f:00.0 ]] 00:04:24.837 11:27:55 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=2880501 00:04:24.837 11:27:55 -- common/autotest_common.sh@1594 -- # waitforlisten 2880501 00:04:24.837 11:27:55 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:24.837 11:27:55 -- common/autotest_common.sh@827 -- # '[' -z 2880501 ']' 00:04:24.837 11:27:55 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.837 11:27:55 -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:24.837 11:27:55 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.837 11:27:55 -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:24.837 11:27:55 -- common/autotest_common.sh@10 -- # set +x 00:04:24.837 [2024-05-15 11:27:55.373308] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
00:04:24.837 [2024-05-15 11:27:55.373373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2880501 ] 00:04:24.837 EAL: No free 2048 kB hugepages reported on node 1 00:04:24.837 [2024-05-15 11:27:55.446465] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.837 [2024-05-15 11:27:55.530582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.773 11:27:56 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:25.773 11:27:56 -- common/autotest_common.sh@860 -- # return 0 00:04:25.773 11:27:56 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:04:25.773 11:27:56 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:04:25.773 11:27:56 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5f:00.0 00:04:29.062 nvme0n1 00:04:29.062 11:27:59 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:29.062 [2024-05-15 11:27:59.329275] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:29.062 request: 00:04:29.062 { 00:04:29.062 "nvme_ctrlr_name": "nvme0", 00:04:29.062 "password": "test", 00:04:29.062 "method": "bdev_nvme_opal_revert", 00:04:29.062 "req_id": 1 00:04:29.062 } 00:04:29.062 Got JSON-RPC error response 00:04:29.062 response: 00:04:29.062 { 00:04:29.062 "code": -32602, 00:04:29.062 "message": "Invalid parameters" 00:04:29.062 } 00:04:29.062 11:27:59 -- common/autotest_common.sh@1600 -- # true 00:04:29.062 11:27:59 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:04:29.062 11:27:59 -- common/autotest_common.sh@1604 -- # killprocess 2880501 00:04:29.062 11:27:59 -- common/autotest_common.sh@946 -- # '[' -z 2880501 ']' 00:04:29.062 11:27:59 -- common/autotest_common.sh@950 -- # kill -0 2880501 00:04:29.062 11:27:59 -- common/autotest_common.sh@951 -- # uname 00:04:29.062 11:27:59 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:29.062 11:27:59 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2880501 00:04:29.062 11:27:59 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:29.062 11:27:59 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:29.062 11:27:59 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2880501' 00:04:29.062 killing process with pid 2880501 00:04:29.062 11:27:59 -- common/autotest_common.sh@965 -- # kill 2880501 00:04:29.062 11:27:59 -- common/autotest_common.sh@970 -- # wait 2880501 00:04:37.187 11:28:06 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:37.187 11:28:06 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:37.187 11:28:06 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:37.187 11:28:06 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:37.187 11:28:06 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:37.187 11:28:06 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:37.187 11:28:06 -- common/autotest_common.sh@10 -- # set +x 00:04:37.187 11:28:06 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:04:37.187 11:28:06 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:37.187 11:28:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:37.187 11:28:06 
-- common/autotest_common.sh@10 -- # set +x 00:04:37.187 ************************************ 00:04:37.187 START TEST env 00:04:37.187 ************************************ 00:04:37.187 11:28:06 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:04:37.187 * Looking for test storage... 00:04:37.187 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:04:37.187 11:28:06 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:04:37.187 11:28:06 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:37.187 11:28:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:37.187 11:28:06 -- common/autotest_common.sh@10 -- # set +x 00:04:37.187 ************************************ 00:04:37.187 START TEST env_memory 00:04:37.187 ************************************ 00:04:37.187 11:28:06 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:04:37.187 00:04:37.187 00:04:37.187 CUnit - A unit testing framework for C - Version 2.1-3 00:04:37.187 http://cunit.sourceforge.net/ 00:04:37.187 00:04:37.187 00:04:37.187 Suite: memory 00:04:37.187 Test: alloc and free memory map ...[2024-05-15 11:28:06.867504] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:37.187 passed 00:04:37.187 Test: mem map translation ...[2024-05-15 11:28:06.887323] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:37.187 [2024-05-15 11:28:06.887343] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:37.187 [2024-05-15 11:28:06.887381] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:37.187 [2024-05-15 11:28:06.887392] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:37.187 passed 00:04:37.187 Test: mem map registration ...[2024-05-15 11:28:06.923500] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:37.187 [2024-05-15 11:28:06.923519] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:37.187 passed 00:04:37.187 Test: mem map adjacent registrations ...passed 00:04:37.187 00:04:37.187 Run Summary: Type Total Ran Passed Failed Inactive 00:04:37.187 suites 1 1 n/a 0 0 00:04:37.187 tests 4 4 4 0 0 00:04:37.187 asserts 152 152 152 0 n/a 00:04:37.187 00:04:37.187 Elapsed time = 0.138 seconds 00:04:37.187 00:04:37.187 real 0m0.150s 00:04:37.187 user 0m0.141s 00:04:37.187 sys 0m0.008s 00:04:37.187 11:28:06 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:37.187 11:28:06 -- common/autotest_common.sh@10 -- # set +x 00:04:37.187 ************************************ 00:04:37.187 END TEST env_memory 00:04:37.187 ************************************ 00:04:37.187 11:28:07 -- env/env.sh@11 -- # run_test env_vtophys 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:37.187 11:28:07 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:37.187 11:28:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:37.187 11:28:07 -- common/autotest_common.sh@10 -- # set +x 00:04:37.187 ************************************ 00:04:37.187 START TEST env_vtophys 00:04:37.187 ************************************ 00:04:37.187 11:28:07 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:37.187 EAL: lib.eal log level changed from notice to debug 00:04:37.187 EAL: Detected lcore 0 as core 0 on socket 0 00:04:37.187 EAL: Detected lcore 1 as core 1 on socket 0 00:04:37.187 EAL: Detected lcore 2 as core 2 on socket 0 00:04:37.187 EAL: Detected lcore 3 as core 3 on socket 0 00:04:37.187 EAL: Detected lcore 4 as core 4 on socket 0 00:04:37.187 EAL: Detected lcore 5 as core 8 on socket 0 00:04:37.187 EAL: Detected lcore 6 as core 9 on socket 0 00:04:37.187 EAL: Detected lcore 7 as core 10 on socket 0 00:04:37.187 EAL: Detected lcore 8 as core 11 on socket 0 00:04:37.187 EAL: Detected lcore 9 as core 16 on socket 0 00:04:37.187 EAL: Detected lcore 10 as core 17 on socket 0 00:04:37.187 EAL: Detected lcore 11 as core 18 on socket 0 00:04:37.187 EAL: Detected lcore 12 as core 19 on socket 0 00:04:37.187 EAL: Detected lcore 13 as core 20 on socket 0 00:04:37.187 EAL: Detected lcore 14 as core 24 on socket 0 00:04:37.187 EAL: Detected lcore 15 as core 25 on socket 0 00:04:37.187 EAL: Detected lcore 16 as core 26 on socket 0 00:04:37.187 EAL: Detected lcore 17 as core 27 on socket 0 00:04:37.187 EAL: Detected lcore 18 as core 0 on socket 1 00:04:37.187 EAL: Detected lcore 19 as core 1 on socket 1 00:04:37.187 EAL: Detected lcore 20 as core 2 on socket 1 00:04:37.187 EAL: Detected lcore 21 as core 3 on socket 1 00:04:37.187 EAL: Detected lcore 22 as core 4 on socket 1 00:04:37.187 EAL: Detected lcore 23 as core 8 on socket 1 00:04:37.187 EAL: Detected lcore 24 as core 9 on socket 1 00:04:37.187 EAL: Detected lcore 25 as core 10 on socket 1 00:04:37.187 EAL: Detected lcore 26 as core 11 on socket 1 00:04:37.187 EAL: Detected lcore 27 as core 16 on socket 1 00:04:37.187 EAL: Detected lcore 28 as core 17 on socket 1 00:04:37.187 EAL: Detected lcore 29 as core 18 on socket 1 00:04:37.187 EAL: Detected lcore 30 as core 19 on socket 1 00:04:37.187 EAL: Detected lcore 31 as core 20 on socket 1 00:04:37.187 EAL: Detected lcore 32 as core 24 on socket 1 00:04:37.187 EAL: Detected lcore 33 as core 25 on socket 1 00:04:37.187 EAL: Detected lcore 34 as core 26 on socket 1 00:04:37.187 EAL: Detected lcore 35 as core 27 on socket 1 00:04:37.187 EAL: Detected lcore 36 as core 0 on socket 0 00:04:37.187 EAL: Detected lcore 37 as core 1 on socket 0 00:04:37.187 EAL: Detected lcore 38 as core 2 on socket 0 00:04:37.187 EAL: Detected lcore 39 as core 3 on socket 0 00:04:37.187 EAL: Detected lcore 40 as core 4 on socket 0 00:04:37.187 EAL: Detected lcore 41 as core 8 on socket 0 00:04:37.187 EAL: Detected lcore 42 as core 9 on socket 0 00:04:37.187 EAL: Detected lcore 43 as core 10 on socket 0 00:04:37.187 EAL: Detected lcore 44 as core 11 on socket 0 00:04:37.187 EAL: Detected lcore 45 as core 16 on socket 0 00:04:37.187 EAL: Detected lcore 46 as core 17 on socket 0 00:04:37.187 EAL: Detected lcore 47 as core 18 on socket 0 00:04:37.187 EAL: Detected lcore 48 as core 19 on socket 0 00:04:37.187 EAL: Detected lcore 49 as core 20 on socket 0 00:04:37.187 
EAL: Detected lcore 50 as core 24 on socket 0 00:04:37.187 EAL: Detected lcore 51 as core 25 on socket 0 00:04:37.187 EAL: Detected lcore 52 as core 26 on socket 0 00:04:37.187 EAL: Detected lcore 53 as core 27 on socket 0 00:04:37.187 EAL: Detected lcore 54 as core 0 on socket 1 00:04:37.187 EAL: Detected lcore 55 as core 1 on socket 1 00:04:37.187 EAL: Detected lcore 56 as core 2 on socket 1 00:04:37.187 EAL: Detected lcore 57 as core 3 on socket 1 00:04:37.187 EAL: Detected lcore 58 as core 4 on socket 1 00:04:37.187 EAL: Detected lcore 59 as core 8 on socket 1 00:04:37.187 EAL: Detected lcore 60 as core 9 on socket 1 00:04:37.187 EAL: Detected lcore 61 as core 10 on socket 1 00:04:37.187 EAL: Detected lcore 62 as core 11 on socket 1 00:04:37.187 EAL: Detected lcore 63 as core 16 on socket 1 00:04:37.187 EAL: Detected lcore 64 as core 17 on socket 1 00:04:37.187 EAL: Detected lcore 65 as core 18 on socket 1 00:04:37.187 EAL: Detected lcore 66 as core 19 on socket 1 00:04:37.187 EAL: Detected lcore 67 as core 20 on socket 1 00:04:37.187 EAL: Detected lcore 68 as core 24 on socket 1 00:04:37.187 EAL: Detected lcore 69 as core 25 on socket 1 00:04:37.187 EAL: Detected lcore 70 as core 26 on socket 1 00:04:37.187 EAL: Detected lcore 71 as core 27 on socket 1 00:04:37.187 EAL: Maximum logical cores by configuration: 128 00:04:37.187 EAL: Detected CPU lcores: 72 00:04:37.187 EAL: Detected NUMA nodes: 2 00:04:37.187 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:37.187 EAL: Detected shared linkage of DPDK 00:04:37.187 EAL: No shared files mode enabled, IPC will be disabled 00:04:37.187 EAL: Bus pci wants IOVA as 'DC' 00:04:37.187 EAL: Buses did not request a specific IOVA mode. 00:04:37.187 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:37.187 EAL: Selected IOVA mode 'VA' 00:04:37.187 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.187 EAL: Probing VFIO support... 00:04:37.187 EAL: IOMMU type 1 (Type 1) is supported 00:04:37.187 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:37.187 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:37.187 EAL: VFIO support initialized 00:04:37.187 EAL: Ask a virtual area of 0x2e000 bytes 00:04:37.187 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:37.187 EAL: Setting up physically contiguous memory... 
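Each "Ask a virtual area of 0x400000000 bytes" below reserves 16 GiB of address space: 8192 segments x 2 MiB hugepages per memseg list, and with four lists per socket the EAL sets aside 64 GiB of VA per NUMA node. The VFIO/IOMMU probe above can be mirrored with a few host-side checks (illustrative only, not part of the suite):

  ls /sys/class/iommu/                    # non-empty when an IOMMU is active
  ls /sys/kernel/iommu_groups/ | wc -l    # populated groups mean VFIO can bind devices
  modinfo vfio-pci >/dev/null 2>&1 && echo 'vfio-pci available'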
00:04:37.187 EAL: Setting maximum number of open files to 524288 00:04:37.187 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:37.187 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:37.188 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:37.188 EAL: Ask a virtual area of 0x61000 bytes 00:04:37.188 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:37.188 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:37.188 EAL: Ask a virtual area of 0x400000000 bytes 00:04:37.188 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:37.188 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:37.188 EAL: Ask a virtual area of 0x61000 bytes 00:04:37.188 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:37.188 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:37.188 EAL: Ask a virtual area of 0x400000000 bytes 00:04:37.188 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:37.188 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:37.188 EAL: Ask a virtual area of 0x61000 bytes 00:04:37.188 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:37.188 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:37.188 EAL: Ask a virtual area of 0x400000000 bytes 00:04:37.188 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:37.188 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:37.188 EAL: Ask a virtual area of 0x61000 bytes 00:04:37.188 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:37.188 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:37.188 EAL: Ask a virtual area of 0x400000000 bytes 00:04:37.188 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:37.188 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:37.188 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:37.188 EAL: Ask a virtual area of 0x61000 bytes 00:04:37.188 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:37.188 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:37.188 EAL: Ask a virtual area of 0x400000000 bytes 00:04:37.188 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:37.188 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:37.188 EAL: Ask a virtual area of 0x61000 bytes 00:04:37.188 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:37.188 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:37.188 EAL: Ask a virtual area of 0x400000000 bytes 00:04:37.188 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:37.188 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:37.188 EAL: Ask a virtual area of 0x61000 bytes 00:04:37.188 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:37.188 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:37.188 EAL: Ask a virtual area of 0x400000000 bytes 00:04:37.188 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:37.188 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:37.188 EAL: Ask a virtual area of 0x61000 bytes 00:04:37.188 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:37.188 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:37.188 EAL: Ask a virtual area of 0x400000000 bytes 00:04:37.188 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:37.188 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:37.188 EAL: Hugepages will be freed exactly as allocated. 00:04:37.188 EAL: No shared files mode enabled, IPC is disabled 00:04:37.188 EAL: No shared files mode enabled, IPC is disabled 00:04:37.188 EAL: TSC frequency is ~2300000 KHz 00:04:37.188 EAL: Main lcore 0 is ready (tid=7fa19d7d8a00;cpuset=[0]) 00:04:37.188 EAL: Trying to obtain current memory policy. 00:04:37.188 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.188 EAL: Restoring previous memory policy: 0 00:04:37.188 EAL: request: mp_malloc_sync 00:04:37.188 EAL: No shared files mode enabled, IPC is disabled 00:04:37.188 EAL: Heap on socket 0 was expanded by 2MB 00:04:37.188 EAL: No shared files mode enabled, IPC is disabled 00:04:37.188 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:37.188 EAL: Mem event callback 'spdk:(nil)' registered 00:04:37.188 00:04:37.188 00:04:37.188 CUnit - A unit testing framework for C - Version 2.1-3 00:04:37.188 http://cunit.sourceforge.net/ 00:04:37.188 00:04:37.188 00:04:37.188 Suite: components_suite 00:04:37.188 Test: vtophys_malloc_test ...passed 00:04:37.188 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:37.188 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.188 EAL: Restoring previous memory policy: 4 00:04:37.188 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.188 EAL: request: mp_malloc_sync 00:04:37.188 EAL: No shared files mode enabled, IPC is disabled 00:04:37.188 EAL: Heap on socket 0 was expanded by 4MB 00:04:37.188 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.188 EAL: request: mp_malloc_sync 00:04:37.188 EAL: No shared files mode enabled, IPC is disabled 00:04:37.188 EAL: Heap on socket 0 was shrunk by 4MB 00:04:37.188 EAL: Trying to obtain current memory policy. 00:04:37.188 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.188 EAL: Restoring previous memory policy: 4 00:04:37.188 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.188 EAL: request: mp_malloc_sync 00:04:37.188 EAL: No shared files mode enabled, IPC is disabled 00:04:37.188 EAL: Heap on socket 0 was expanded by 6MB 00:04:37.188 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.188 EAL: request: mp_malloc_sync 00:04:37.188 EAL: No shared files mode enabled, IPC is disabled 00:04:37.188 EAL: Heap on socket 0 was shrunk by 6MB 00:04:37.188 EAL: Trying to obtain current memory policy. 00:04:37.188 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.188 EAL: Restoring previous memory policy: 4 00:04:37.188 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.188 EAL: request: mp_malloc_sync 00:04:37.188 EAL: No shared files mode enabled, IPC is disabled 00:04:37.188 EAL: Heap on socket 0 was expanded by 10MB 00:04:37.188 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.188 EAL: request: mp_malloc_sync 00:04:37.188 EAL: No shared files mode enabled, IPC is disabled 00:04:37.188 EAL: Heap on socket 0 was shrunk by 10MB 00:04:37.188 EAL: Trying to obtain current memory policy. 
00:04:37.188 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.188 EAL: Restoring previous memory policy: 4 00:04:37.188 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.188 EAL: request: mp_malloc_sync 00:04:37.188 EAL: No shared files mode enabled, IPC is disabled 00:04:37.188 EAL: Heap on socket 0 was expanded by 18MB 00:04:37.188 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.188 EAL: request: mp_malloc_sync 00:04:37.188 EAL: No shared files mode enabled, IPC is disabled 00:04:37.188 EAL: Heap on socket 0 was shrunk by 18MB 00:04:37.188 EAL: Trying to obtain current memory policy. 00:04:37.188 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.188 EAL: Restoring previous memory policy: 4 00:04:37.188 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.188 EAL: request: mp_malloc_sync 00:04:37.188 EAL: No shared files mode enabled, IPC is disabled 00:04:37.188 EAL: Heap on socket 0 was expanded by 34MB 00:04:37.188 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.188 EAL: request: mp_malloc_sync 00:04:37.188 EAL: No shared files mode enabled, IPC is disabled 00:04:37.188 EAL: Heap on socket 0 was shrunk by 34MB 00:04:37.188 EAL: Trying to obtain current memory policy. 00:04:37.188 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.188 EAL: Restoring previous memory policy: 4 00:04:37.188 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.188 EAL: request: mp_malloc_sync 00:04:37.188 EAL: No shared files mode enabled, IPC is disabled 00:04:37.188 EAL: Heap on socket 0 was expanded by 66MB 00:04:37.188 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.188 EAL: request: mp_malloc_sync 00:04:37.188 EAL: No shared files mode enabled, IPC is disabled 00:04:37.188 EAL: Heap on socket 0 was shrunk by 66MB 00:04:37.188 EAL: Trying to obtain current memory policy. 00:04:37.188 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.188 EAL: Restoring previous memory policy: 4 00:04:37.188 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.188 EAL: request: mp_malloc_sync 00:04:37.188 EAL: No shared files mode enabled, IPC is disabled 00:04:37.188 EAL: Heap on socket 0 was expanded by 130MB 00:04:37.188 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.188 EAL: request: mp_malloc_sync 00:04:37.188 EAL: No shared files mode enabled, IPC is disabled 00:04:37.188 EAL: Heap on socket 0 was shrunk by 130MB 00:04:37.188 EAL: Trying to obtain current memory policy. 00:04:37.188 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.188 EAL: Restoring previous memory policy: 4 00:04:37.188 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.188 EAL: request: mp_malloc_sync 00:04:37.188 EAL: No shared files mode enabled, IPC is disabled 00:04:37.188 EAL: Heap on socket 0 was expanded by 258MB 00:04:37.188 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.188 EAL: request: mp_malloc_sync 00:04:37.188 EAL: No shared files mode enabled, IPC is disabled 00:04:37.188 EAL: Heap on socket 0 was shrunk by 258MB 00:04:37.188 EAL: Trying to obtain current memory policy. 
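A side note on the expansion sizes in this loop: after the initial 2 MB, they follow 2^k + 2 MB — 4 = 2^1 + 2, 6 = 2^2 + 2, 10 = 2^3 + 2, up through 258 = 2^8 + 2 above, then 514 = 2^9 + 2 and 1026 = 2^10 + 2 below — consistent with each test allocation being a power-of-two buffer whose bookkeeping spills into one extra 2 MB hugepage.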
00:04:37.188 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.188 EAL: Restoring previous memory policy: 4 00:04:37.188 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.188 EAL: request: mp_malloc_sync 00:04:37.188 EAL: No shared files mode enabled, IPC is disabled 00:04:37.188 EAL: Heap on socket 0 was expanded by 514MB 00:04:37.188 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.188 EAL: request: mp_malloc_sync 00:04:37.188 EAL: No shared files mode enabled, IPC is disabled 00:04:37.188 EAL: Heap on socket 0 was shrunk by 514MB 00:04:37.188 EAL: Trying to obtain current memory policy. 00:04:37.188 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.448 EAL: Restoring previous memory policy: 4 00:04:37.448 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.448 EAL: request: mp_malloc_sync 00:04:37.448 EAL: No shared files mode enabled, IPC is disabled 00:04:37.448 EAL: Heap on socket 0 was expanded by 1026MB 00:04:37.448 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.707 EAL: request: mp_malloc_sync 00:04:37.707 EAL: No shared files mode enabled, IPC is disabled 00:04:37.707 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:37.707 passed 00:04:37.707 00:04:37.707 Run Summary: Type Total Ran Passed Failed Inactive 00:04:37.707 suites 1 1 n/a 0 0 00:04:37.707 tests 2 2 2 0 0 00:04:37.707 asserts 497 497 497 0 n/a 00:04:37.707 00:04:37.707 Elapsed time = 1.116 seconds 00:04:37.707 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.707 EAL: request: mp_malloc_sync 00:04:37.707 EAL: No shared files mode enabled, IPC is disabled 00:04:37.707 EAL: Heap on socket 0 was shrunk by 2MB 00:04:37.707 EAL: No shared files mode enabled, IPC is disabled 00:04:37.707 EAL: No shared files mode enabled, IPC is disabled 00:04:37.707 EAL: No shared files mode enabled, IPC is disabled 00:04:37.707 00:04:37.707 real 0m1.256s 00:04:37.707 user 0m0.709s 00:04:37.707 sys 0m0.511s 00:04:37.707 11:28:08 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:37.707 11:28:08 -- common/autotest_common.sh@10 -- # set +x 00:04:37.707 ************************************ 00:04:37.707 END TEST env_vtophys 00:04:37.707 ************************************ 00:04:37.707 11:28:08 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:04:37.707 11:28:08 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:37.707 11:28:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:37.707 11:28:08 -- common/autotest_common.sh@10 -- # set +x 00:04:37.707 ************************************ 00:04:37.707 START TEST env_pci 00:04:37.707 ************************************ 00:04:37.707 11:28:08 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:04:37.707 00:04:37.707 00:04:37.707 CUnit - A unit testing framework for C - Version 2.1-3 00:04:37.707 http://cunit.sourceforge.net/ 00:04:37.707 00:04:37.707 00:04:37.707 Suite: pci 00:04:37.707 Test: pci_hook ...[2024-05-15 11:28:08.428299] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2882236 has claimed it 00:04:37.707 EAL: Cannot find device (10000:00:01.0) 00:04:37.707 EAL: Failed to attach device on primary process 00:04:37.707 passed 00:04:37.707 00:04:37.707 Run Summary: Type Total Ran Passed Failed Inactive 00:04:37.708 suites 1 1 n/a 0 0 00:04:37.708 tests 1 1 1 0 0 00:04:37.708 asserts 
25 25 25 0 n/a 00:04:37.708 00:04:37.708 Elapsed time = 0.032 seconds 00:04:37.708 00:04:37.708 real 0m0.054s 00:04:37.708 user 0m0.015s 00:04:37.708 sys 0m0.039s 00:04:37.708 11:28:08 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:37.708 11:28:08 -- common/autotest_common.sh@10 -- # set +x 00:04:37.708 ************************************ 00:04:37.708 END TEST env_pci 00:04:37.708 ************************************ 00:04:37.967 11:28:08 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:37.967 11:28:08 -- env/env.sh@15 -- # uname 00:04:37.967 11:28:08 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:37.967 11:28:08 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:37.967 11:28:08 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:37.967 11:28:08 -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:04:37.967 11:28:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:37.967 11:28:08 -- common/autotest_common.sh@10 -- # set +x 00:04:37.967 ************************************ 00:04:37.967 START TEST env_dpdk_post_init 00:04:37.967 ************************************ 00:04:37.967 11:28:08 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:37.967 EAL: Detected CPU lcores: 72 00:04:37.967 EAL: Detected NUMA nodes: 2 00:04:37.967 EAL: Detected shared linkage of DPDK 00:04:37.967 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:37.967 EAL: Selected IOVA mode 'VA' 00:04:37.967 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.967 EAL: VFIO support initialized 00:04:37.967 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:37.967 EAL: Using IOMMU type 1 (Type 1) 00:04:37.967 EAL: Ignore mapping IO port bar(1) 00:04:37.967 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:37.967 EAL: Ignore mapping IO port bar(1) 00:04:37.967 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:37.967 EAL: Ignore mapping IO port bar(1) 00:04:37.967 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:38.226 EAL: Ignore mapping IO port bar(1) 00:04:38.226 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:38.226 EAL: Ignore mapping IO port bar(1) 00:04:38.226 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:38.226 EAL: Ignore mapping IO port bar(1) 00:04:38.226 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:38.226 EAL: Ignore mapping IO port bar(1) 00:04:38.226 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:38.226 EAL: Ignore mapping IO port bar(1) 00:04:38.226 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:38.796 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5f:00.0 (socket 0) 00:04:38.796 EAL: Ignore mapping IO port bar(1) 00:04:38.796 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:38.796 EAL: Ignore mapping IO port bar(1) 00:04:38.796 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:38.796 EAL: Ignore mapping IO port bar(1) 00:04:38.796 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:39.056 EAL: Ignore mapping 
IO port bar(1) 00:04:39.056 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:39.056 EAL: Ignore mapping IO port bar(1) 00:04:39.056 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:39.056 EAL: Ignore mapping IO port bar(1) 00:04:39.056 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:39.056 EAL: Ignore mapping IO port bar(1) 00:04:39.056 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:39.056 EAL: Ignore mapping IO port bar(1) 00:04:39.056 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:49.043 EAL: Releasing PCI mapped resource for 0000:5f:00.0 00:04:49.043 EAL: Calling pci_unmap_resource for 0000:5f:00.0 at 0x202001020000 00:04:49.043 Starting DPDK initialization... 00:04:49.043 Starting SPDK post initialization... 00:04:49.043 SPDK NVMe probe 00:04:49.043 Attaching to 0000:5f:00.0 00:04:49.043 Attached to 0000:5f:00.0 00:04:49.043 Cleaning up... 00:04:49.043 00:04:49.043 real 0m9.964s 00:04:49.043 user 0m7.761s 00:04:49.043 sys 0m1.254s 00:04:49.043 11:28:18 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:49.043 11:28:18 -- common/autotest_common.sh@10 -- # set +x 00:04:49.043 ************************************ 00:04:49.043 END TEST env_dpdk_post_init 00:04:49.043 ************************************ 00:04:49.043 11:28:18 -- env/env.sh@26 -- # uname 00:04:49.043 11:28:18 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:49.043 11:28:18 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:49.043 11:28:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:49.043 11:28:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:49.043 11:28:18 -- common/autotest_common.sh@10 -- # set +x 00:04:49.043 ************************************ 00:04:49.043 START TEST env_mem_callbacks 00:04:49.043 ************************************ 00:04:49.043 11:28:18 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:49.043 EAL: Detected CPU lcores: 72 00:04:49.043 EAL: Detected NUMA nodes: 2 00:04:49.043 EAL: Detected shared linkage of DPDK 00:04:49.043 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:49.043 EAL: Selected IOVA mode 'VA' 00:04:49.043 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.043 EAL: VFIO support initialized 00:04:49.043 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:49.043 00:04:49.043 00:04:49.043 CUnit - A unit testing framework for C - Version 2.1-3 00:04:49.043 http://cunit.sourceforge.net/ 00:04:49.043 00:04:49.043 00:04:49.043 Suite: memory 00:04:49.043 Test: test ... 
00:04:49.043 register 0x200000200000 2097152 00:04:49.043 malloc 3145728 00:04:49.043 register 0x200000400000 4194304 00:04:49.043 buf 0x200000500000 len 3145728 PASSED 00:04:49.043 malloc 64 00:04:49.043 buf 0x2000004fff40 len 64 PASSED 00:04:49.043 malloc 4194304 00:04:49.044 register 0x200000800000 6291456 00:04:49.044 buf 0x200000a00000 len 4194304 PASSED 00:04:49.044 free 0x200000500000 3145728 00:04:49.044 free 0x2000004fff40 64 00:04:49.044 unregister 0x200000400000 4194304 PASSED 00:04:49.044 free 0x200000a00000 4194304 00:04:49.044 unregister 0x200000800000 6291456 PASSED 00:04:49.044 malloc 8388608 00:04:49.044 register 0x200000400000 10485760 00:04:49.044 buf 0x200000600000 len 8388608 PASSED 00:04:49.044 free 0x200000600000 8388608 00:04:49.044 unregister 0x200000400000 10485760 PASSED 00:04:49.044 passed 00:04:49.044 00:04:49.044 Run Summary: Type Total Ran Passed Failed Inactive 00:04:49.044 suites 1 1 n/a 0 0 00:04:49.044 tests 1 1 1 0 0 00:04:49.044 asserts 15 15 15 0 n/a 00:04:49.044 00:04:49.044 Elapsed time = 0.006 seconds 00:04:49.044 00:04:49.044 real 0m0.071s 00:04:49.044 user 0m0.028s 00:04:49.044 sys 0m0.042s 00:04:49.044 11:28:18 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:49.044 11:28:18 -- common/autotest_common.sh@10 -- # set +x 00:04:49.044 ************************************ 00:04:49.044 END TEST env_mem_callbacks 00:04:49.044 ************************************ 00:04:49.044 00:04:49.044 real 0m12.046s 00:04:49.044 user 0m8.838s 00:04:49.044 sys 0m2.230s 00:04:49.044 11:28:18 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:49.044 11:28:18 -- common/autotest_common.sh@10 -- # set +x 00:04:49.044 ************************************ 00:04:49.044 END TEST env 00:04:49.044 ************************************ 00:04:49.044 11:28:18 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:04:49.044 11:28:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:49.044 11:28:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:49.044 11:28:18 -- common/autotest_common.sh@10 -- # set +x 00:04:49.044 ************************************ 00:04:49.044 START TEST rpc 00:04:49.044 ************************************ 00:04:49.044 11:28:18 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:04:49.044 * Looking for test storage... 00:04:49.044 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:49.044 11:28:18 -- rpc/rpc.sh@65 -- # spdk_pid=2883807 00:04:49.044 11:28:18 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:49.044 11:28:18 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.044 11:28:18 -- rpc/rpc.sh@67 -- # waitforlisten 2883807 00:04:49.044 11:28:18 -- common/autotest_common.sh@827 -- # '[' -z 2883807 ']' 00:04:49.044 11:28:18 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.044 11:28:18 -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:49.044 11:28:18 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
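The rpc_integrity test that follows drives this spdk_tgt instance over the default UNIX socket. The same calls can be issued by hand with rpc.py (path abbreviated to the repo-relative script; the socket defaults to /var/tmp/spdk.sock):

  scripts/rpc.py bdev_get_bdevs | jq length    # -> 0 on a fresh target
  scripts/rpc.py bdev_malloc_create 8 512      # 8 MB malloc bdev, 512 B blocks -> Malloc0
  scripts/rpc.py bdev_get_bdevs | jq length    # -> 1
  scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  scripts/rpc.py bdev_get_bdevs | jq length    # -> 2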
00:04:49.044 11:28:18 -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:49.044 11:28:18 -- common/autotest_common.sh@10 -- # set +x 00:04:49.044 [2024-05-15 11:28:18.992474] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:04:49.044 [2024-05-15 11:28:18.992536] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2883807 ] 00:04:49.044 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.044 [2024-05-15 11:28:19.064841] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.044 [2024-05-15 11:28:19.153402] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:49.044 [2024-05-15 11:28:19.153444] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2883807' to capture a snapshot of events at runtime. 00:04:49.044 [2024-05-15 11:28:19.153454] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:49.044 [2024-05-15 11:28:19.153462] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:49.044 [2024-05-15 11:28:19.153469] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2883807 for offline analysis/debug. 00:04:49.044 [2024-05-15 11:28:19.153500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.044 11:28:19 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:49.044 11:28:19 -- common/autotest_common.sh@860 -- # return 0 00:04:49.044 11:28:19 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:49.044 11:28:19 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:49.044 11:28:19 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:49.044 11:28:19 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:49.044 11:28:19 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:49.044 11:28:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:49.044 11:28:19 -- common/autotest_common.sh@10 -- # set +x 00:04:49.309 ************************************ 00:04:49.309 START TEST rpc_integrity 00:04:49.309 ************************************ 00:04:49.309 11:28:19 -- common/autotest_common.sh@1121 -- # rpc_integrity 00:04:49.309 11:28:19 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:49.309 11:28:19 -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:49.309 11:28:19 -- common/autotest_common.sh@10 -- # set +x 00:04:49.309 11:28:19 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:49.309 11:28:19 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:49.309 11:28:19 -- rpc/rpc.sh@13 -- # jq length 00:04:49.309 11:28:19 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:49.309 11:28:19 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:49.309 11:28:19 -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:49.309 11:28:19 -- 
common/autotest_common.sh@10 -- # set +x 00:04:49.309 11:28:19 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:49.309 11:28:19 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:49.309 11:28:19 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:49.309 11:28:19 -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:49.309 11:28:19 -- common/autotest_common.sh@10 -- # set +x 00:04:49.309 11:28:19 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:49.309 11:28:19 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:49.309 { 00:04:49.309 "name": "Malloc0", 00:04:49.309 "aliases": [ 00:04:49.309 "464cd6ea-1e2a-402e-b037-123087ee815f" 00:04:49.309 ], 00:04:49.309 "product_name": "Malloc disk", 00:04:49.309 "block_size": 512, 00:04:49.309 "num_blocks": 16384, 00:04:49.309 "uuid": "464cd6ea-1e2a-402e-b037-123087ee815f", 00:04:49.309 "assigned_rate_limits": { 00:04:49.309 "rw_ios_per_sec": 0, 00:04:49.309 "rw_mbytes_per_sec": 0, 00:04:49.309 "r_mbytes_per_sec": 0, 00:04:49.309 "w_mbytes_per_sec": 0 00:04:49.309 }, 00:04:49.309 "claimed": false, 00:04:49.309 "zoned": false, 00:04:49.309 "supported_io_types": { 00:04:49.309 "read": true, 00:04:49.309 "write": true, 00:04:49.309 "unmap": true, 00:04:49.309 "write_zeroes": true, 00:04:49.309 "flush": true, 00:04:49.309 "reset": true, 00:04:49.309 "compare": false, 00:04:49.309 "compare_and_write": false, 00:04:49.309 "abort": true, 00:04:49.309 "nvme_admin": false, 00:04:49.309 "nvme_io": false 00:04:49.309 }, 00:04:49.309 "memory_domains": [ 00:04:49.309 { 00:04:49.309 "dma_device_id": "system", 00:04:49.309 "dma_device_type": 1 00:04:49.309 }, 00:04:49.309 { 00:04:49.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.309 "dma_device_type": 2 00:04:49.309 } 00:04:49.309 ], 00:04:49.309 "driver_specific": {} 00:04:49.309 } 00:04:49.309 ]' 00:04:49.309 11:28:19 -- rpc/rpc.sh@17 -- # jq length 00:04:49.309 11:28:19 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:49.309 11:28:19 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:49.309 11:28:19 -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:49.309 11:28:19 -- common/autotest_common.sh@10 -- # set +x 00:04:49.309 [2024-05-15 11:28:19.985220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:49.309 [2024-05-15 11:28:19.985254] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:49.309 [2024-05-15 11:28:19.985268] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17aaf70 00:04:49.309 [2024-05-15 11:28:19.985277] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:49.309 [2024-05-15 11:28:19.986374] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:49.309 [2024-05-15 11:28:19.986397] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:49.309 Passthru0 00:04:49.309 11:28:19 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:49.309 11:28:19 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:49.309 11:28:19 -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:49.309 11:28:19 -- common/autotest_common.sh@10 -- # set +x 00:04:49.309 11:28:20 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:49.309 11:28:20 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:49.309 { 00:04:49.309 "name": "Malloc0", 00:04:49.309 "aliases": [ 00:04:49.309 "464cd6ea-1e2a-402e-b037-123087ee815f" 00:04:49.309 ], 00:04:49.309 "product_name": "Malloc disk", 00:04:49.309 "block_size": 512, 00:04:49.309 
"num_blocks": 16384, 00:04:49.309 "uuid": "464cd6ea-1e2a-402e-b037-123087ee815f", 00:04:49.309 "assigned_rate_limits": { 00:04:49.309 "rw_ios_per_sec": 0, 00:04:49.309 "rw_mbytes_per_sec": 0, 00:04:49.309 "r_mbytes_per_sec": 0, 00:04:49.309 "w_mbytes_per_sec": 0 00:04:49.309 }, 00:04:49.309 "claimed": true, 00:04:49.309 "claim_type": "exclusive_write", 00:04:49.309 "zoned": false, 00:04:49.309 "supported_io_types": { 00:04:49.309 "read": true, 00:04:49.309 "write": true, 00:04:49.309 "unmap": true, 00:04:49.309 "write_zeroes": true, 00:04:49.309 "flush": true, 00:04:49.309 "reset": true, 00:04:49.309 "compare": false, 00:04:49.309 "compare_and_write": false, 00:04:49.309 "abort": true, 00:04:49.309 "nvme_admin": false, 00:04:49.309 "nvme_io": false 00:04:49.309 }, 00:04:49.309 "memory_domains": [ 00:04:49.309 { 00:04:49.309 "dma_device_id": "system", 00:04:49.309 "dma_device_type": 1 00:04:49.309 }, 00:04:49.309 { 00:04:49.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.309 "dma_device_type": 2 00:04:49.309 } 00:04:49.309 ], 00:04:49.309 "driver_specific": {} 00:04:49.309 }, 00:04:49.309 { 00:04:49.309 "name": "Passthru0", 00:04:49.309 "aliases": [ 00:04:49.309 "119c5241-0361-5b6c-9499-e07f2c2f65f6" 00:04:49.309 ], 00:04:49.309 "product_name": "passthru", 00:04:49.309 "block_size": 512, 00:04:49.309 "num_blocks": 16384, 00:04:49.309 "uuid": "119c5241-0361-5b6c-9499-e07f2c2f65f6", 00:04:49.309 "assigned_rate_limits": { 00:04:49.309 "rw_ios_per_sec": 0, 00:04:49.309 "rw_mbytes_per_sec": 0, 00:04:49.309 "r_mbytes_per_sec": 0, 00:04:49.309 "w_mbytes_per_sec": 0 00:04:49.309 }, 00:04:49.309 "claimed": false, 00:04:49.309 "zoned": false, 00:04:49.309 "supported_io_types": { 00:04:49.309 "read": true, 00:04:49.309 "write": true, 00:04:49.309 "unmap": true, 00:04:49.309 "write_zeroes": true, 00:04:49.309 "flush": true, 00:04:49.309 "reset": true, 00:04:49.309 "compare": false, 00:04:49.309 "compare_and_write": false, 00:04:49.309 "abort": true, 00:04:49.309 "nvme_admin": false, 00:04:49.309 "nvme_io": false 00:04:49.309 }, 00:04:49.309 "memory_domains": [ 00:04:49.309 { 00:04:49.309 "dma_device_id": "system", 00:04:49.309 "dma_device_type": 1 00:04:49.309 }, 00:04:49.309 { 00:04:49.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.309 "dma_device_type": 2 00:04:49.309 } 00:04:49.309 ], 00:04:49.309 "driver_specific": { 00:04:49.309 "passthru": { 00:04:49.309 "name": "Passthru0", 00:04:49.309 "base_bdev_name": "Malloc0" 00:04:49.309 } 00:04:49.309 } 00:04:49.309 } 00:04:49.309 ]' 00:04:49.309 11:28:20 -- rpc/rpc.sh@21 -- # jq length 00:04:49.309 11:28:20 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:49.309 11:28:20 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:49.309 11:28:20 -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:49.309 11:28:20 -- common/autotest_common.sh@10 -- # set +x 00:04:49.569 11:28:20 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:49.569 11:28:20 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:49.569 11:28:20 -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:49.569 11:28:20 -- common/autotest_common.sh@10 -- # set +x 00:04:49.569 11:28:20 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:49.569 11:28:20 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:49.569 11:28:20 -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:49.569 11:28:20 -- common/autotest_common.sh@10 -- # set +x 00:04:49.569 11:28:20 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:49.569 11:28:20 -- rpc/rpc.sh@25 -- 
# bdevs='[]' 00:04:49.569 11:28:20 -- rpc/rpc.sh@26 -- # jq length 00:04:49.569 11:28:20 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:49.569 00:04:49.569 real 0m0.298s 00:04:49.569 user 0m0.192s 00:04:49.569 sys 0m0.042s 00:04:49.569 11:28:20 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:49.569 11:28:20 -- common/autotest_common.sh@10 -- # set +x 00:04:49.569 ************************************ 00:04:49.569 END TEST rpc_integrity 00:04:49.569 ************************************ 00:04:49.569 11:28:20 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:49.569 11:28:20 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:49.569 11:28:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:49.569 11:28:20 -- common/autotest_common.sh@10 -- # set +x 00:04:49.569 ************************************ 00:04:49.569 START TEST rpc_plugins 00:04:49.569 ************************************ 00:04:49.569 11:28:20 -- common/autotest_common.sh@1121 -- # rpc_plugins 00:04:49.569 11:28:20 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:49.569 11:28:20 -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:49.569 11:28:20 -- common/autotest_common.sh@10 -- # set +x 00:04:49.569 11:28:20 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:49.569 11:28:20 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:49.569 11:28:20 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:49.569 11:28:20 -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:49.569 11:28:20 -- common/autotest_common.sh@10 -- # set +x 00:04:49.569 11:28:20 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:49.569 11:28:20 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:49.569 { 00:04:49.569 "name": "Malloc1", 00:04:49.569 "aliases": [ 00:04:49.569 "05c8adff-92fa-4a67-8a55-b170c46a67d9" 00:04:49.569 ], 00:04:49.569 "product_name": "Malloc disk", 00:04:49.569 "block_size": 4096, 00:04:49.569 "num_blocks": 256, 00:04:49.569 "uuid": "05c8adff-92fa-4a67-8a55-b170c46a67d9", 00:04:49.569 "assigned_rate_limits": { 00:04:49.569 "rw_ios_per_sec": 0, 00:04:49.569 "rw_mbytes_per_sec": 0, 00:04:49.569 "r_mbytes_per_sec": 0, 00:04:49.569 "w_mbytes_per_sec": 0 00:04:49.569 }, 00:04:49.569 "claimed": false, 00:04:49.569 "zoned": false, 00:04:49.569 "supported_io_types": { 00:04:49.569 "read": true, 00:04:49.569 "write": true, 00:04:49.569 "unmap": true, 00:04:49.569 "write_zeroes": true, 00:04:49.569 "flush": true, 00:04:49.569 "reset": true, 00:04:49.569 "compare": false, 00:04:49.569 "compare_and_write": false, 00:04:49.569 "abort": true, 00:04:49.569 "nvme_admin": false, 00:04:49.569 "nvme_io": false 00:04:49.569 }, 00:04:49.569 "memory_domains": [ 00:04:49.569 { 00:04:49.569 "dma_device_id": "system", 00:04:49.569 "dma_device_type": 1 00:04:49.569 }, 00:04:49.569 { 00:04:49.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.569 "dma_device_type": 2 00:04:49.569 } 00:04:49.569 ], 00:04:49.569 "driver_specific": {} 00:04:49.569 } 00:04:49.569 ]' 00:04:49.569 11:28:20 -- rpc/rpc.sh@32 -- # jq length 00:04:49.569 11:28:20 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:49.569 11:28:20 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:49.569 11:28:20 -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:49.569 11:28:20 -- common/autotest_common.sh@10 -- # set +x 00:04:49.569 11:28:20 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:49.569 11:28:20 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:49.569 11:28:20 -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:04:49.569 11:28:20 -- common/autotest_common.sh@10 -- # set +x 00:04:49.829 11:28:20 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:49.829 11:28:20 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:49.829 11:28:20 -- rpc/rpc.sh@36 -- # jq length 00:04:49.829 11:28:20 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:49.829 00:04:49.829 real 0m0.151s 00:04:49.829 user 0m0.088s 00:04:49.829 sys 0m0.027s 00:04:49.829 11:28:20 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:49.829 11:28:20 -- common/autotest_common.sh@10 -- # set +x 00:04:49.829 ************************************ 00:04:49.829 END TEST rpc_plugins 00:04:49.829 ************************************ 00:04:49.829 11:28:20 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:49.829 11:28:20 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:49.829 11:28:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:49.829 11:28:20 -- common/autotest_common.sh@10 -- # set +x 00:04:49.829 ************************************ 00:04:49.829 START TEST rpc_trace_cmd_test 00:04:49.829 ************************************ 00:04:49.829 11:28:20 -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:04:49.829 11:28:20 -- rpc/rpc.sh@40 -- # local info 00:04:49.829 11:28:20 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:49.829 11:28:20 -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:49.829 11:28:20 -- common/autotest_common.sh@10 -- # set +x 00:04:49.829 11:28:20 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:49.829 11:28:20 -- rpc/rpc.sh@42 -- # info='{ 00:04:49.829 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2883807", 00:04:49.829 "tpoint_group_mask": "0x8", 00:04:49.829 "iscsi_conn": { 00:04:49.829 "mask": "0x2", 00:04:49.829 "tpoint_mask": "0x0" 00:04:49.829 }, 00:04:49.829 "scsi": { 00:04:49.829 "mask": "0x4", 00:04:49.829 "tpoint_mask": "0x0" 00:04:49.829 }, 00:04:49.829 "bdev": { 00:04:49.829 "mask": "0x8", 00:04:49.829 "tpoint_mask": "0xffffffffffffffff" 00:04:49.829 }, 00:04:49.829 "nvmf_rdma": { 00:04:49.829 "mask": "0x10", 00:04:49.829 "tpoint_mask": "0x0" 00:04:49.829 }, 00:04:49.830 "nvmf_tcp": { 00:04:49.830 "mask": "0x20", 00:04:49.830 "tpoint_mask": "0x0" 00:04:49.830 }, 00:04:49.830 "ftl": { 00:04:49.830 "mask": "0x40", 00:04:49.830 "tpoint_mask": "0x0" 00:04:49.830 }, 00:04:49.830 "blobfs": { 00:04:49.830 "mask": "0x80", 00:04:49.830 "tpoint_mask": "0x0" 00:04:49.830 }, 00:04:49.830 "dsa": { 00:04:49.830 "mask": "0x200", 00:04:49.830 "tpoint_mask": "0x0" 00:04:49.830 }, 00:04:49.830 "thread": { 00:04:49.830 "mask": "0x400", 00:04:49.830 "tpoint_mask": "0x0" 00:04:49.830 }, 00:04:49.830 "nvme_pcie": { 00:04:49.830 "mask": "0x800", 00:04:49.830 "tpoint_mask": "0x0" 00:04:49.830 }, 00:04:49.830 "iaa": { 00:04:49.830 "mask": "0x1000", 00:04:49.830 "tpoint_mask": "0x0" 00:04:49.830 }, 00:04:49.830 "nvme_tcp": { 00:04:49.830 "mask": "0x2000", 00:04:49.830 "tpoint_mask": "0x0" 00:04:49.830 }, 00:04:49.830 "bdev_nvme": { 00:04:49.830 "mask": "0x4000", 00:04:49.830 "tpoint_mask": "0x0" 00:04:49.830 }, 00:04:49.830 "sock": { 00:04:49.830 "mask": "0x8000", 00:04:49.830 "tpoint_mask": "0x0" 00:04:49.830 } 00:04:49.830 }' 00:04:49.830 11:28:20 -- rpc/rpc.sh@43 -- # jq length 00:04:49.830 11:28:20 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:49.830 11:28:20 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:49.830 11:28:20 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:49.830 11:28:20 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 
00:04:50.089 11:28:20 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:50.089 11:28:20 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:50.089 11:28:20 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:50.089 11:28:20 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:50.089 11:28:20 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:50.089 00:04:50.089 real 0m0.202s 00:04:50.089 user 0m0.159s 00:04:50.089 sys 0m0.037s 00:04:50.089 11:28:20 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:50.089 11:28:20 -- common/autotest_common.sh@10 -- # set +x 00:04:50.089 ************************************ 00:04:50.089 END TEST rpc_trace_cmd_test 00:04:50.089 ************************************ 00:04:50.089 11:28:20 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:50.089 11:28:20 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:50.089 11:28:20 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:50.089 11:28:20 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:50.089 11:28:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:50.089 11:28:20 -- common/autotest_common.sh@10 -- # set +x 00:04:50.089 ************************************ 00:04:50.089 START TEST rpc_daemon_integrity 00:04:50.089 ************************************ 00:04:50.089 11:28:20 -- common/autotest_common.sh@1121 -- # rpc_integrity 00:04:50.089 11:28:20 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:50.089 11:28:20 -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.089 11:28:20 -- common/autotest_common.sh@10 -- # set +x 00:04:50.090 11:28:20 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.090 11:28:20 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:50.090 11:28:20 -- rpc/rpc.sh@13 -- # jq length 00:04:50.090 11:28:20 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:50.090 11:28:20 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:50.090 11:28:20 -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.090 11:28:20 -- common/autotest_common.sh@10 -- # set +x 00:04:50.090 11:28:20 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.090 11:28:20 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:50.090 11:28:20 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:50.090 11:28:20 -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.090 11:28:20 -- common/autotest_common.sh@10 -- # set +x 00:04:50.090 11:28:20 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.090 11:28:20 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:50.090 { 00:04:50.090 "name": "Malloc2", 00:04:50.090 "aliases": [ 00:04:50.090 "e903fbc0-9949-4aee-bbdb-980e5f1dfeb1" 00:04:50.090 ], 00:04:50.090 "product_name": "Malloc disk", 00:04:50.090 "block_size": 512, 00:04:50.090 "num_blocks": 16384, 00:04:50.090 "uuid": "e903fbc0-9949-4aee-bbdb-980e5f1dfeb1", 00:04:50.090 "assigned_rate_limits": { 00:04:50.090 "rw_ios_per_sec": 0, 00:04:50.090 "rw_mbytes_per_sec": 0, 00:04:50.090 "r_mbytes_per_sec": 0, 00:04:50.090 "w_mbytes_per_sec": 0 00:04:50.090 }, 00:04:50.090 "claimed": false, 00:04:50.090 "zoned": false, 00:04:50.090 "supported_io_types": { 00:04:50.090 "read": true, 00:04:50.090 "write": true, 00:04:50.090 "unmap": true, 00:04:50.090 "write_zeroes": true, 00:04:50.090 "flush": true, 00:04:50.090 "reset": true, 00:04:50.090 "compare": false, 00:04:50.090 "compare_and_write": false, 00:04:50.090 "abort": true, 00:04:50.090 "nvme_admin": false, 00:04:50.090 "nvme_io": false 00:04:50.090 }, 00:04:50.090 "memory_domains": [ 00:04:50.090 { 00:04:50.090 "dma_device_id": "system", 00:04:50.090 
"dma_device_type": 1 00:04:50.090 }, 00:04:50.090 { 00:04:50.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.090 "dma_device_type": 2 00:04:50.090 } 00:04:50.090 ], 00:04:50.090 "driver_specific": {} 00:04:50.090 } 00:04:50.090 ]' 00:04:50.090 11:28:20 -- rpc/rpc.sh@17 -- # jq length 00:04:50.350 11:28:20 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:50.350 11:28:20 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:50.350 11:28:20 -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.350 11:28:20 -- common/autotest_common.sh@10 -- # set +x 00:04:50.350 [2024-05-15 11:28:20.887660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:50.350 [2024-05-15 11:28:20.887693] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:50.350 [2024-05-15 11:28:20.887709] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17ac290 00:04:50.350 [2024-05-15 11:28:20.887717] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:50.350 [2024-05-15 11:28:20.888724] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:50.350 [2024-05-15 11:28:20.888749] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:50.350 Passthru0 00:04:50.350 11:28:20 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.350 11:28:20 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:50.350 11:28:20 -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.350 11:28:20 -- common/autotest_common.sh@10 -- # set +x 00:04:50.350 11:28:20 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.350 11:28:20 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:50.350 { 00:04:50.350 "name": "Malloc2", 00:04:50.350 "aliases": [ 00:04:50.350 "e903fbc0-9949-4aee-bbdb-980e5f1dfeb1" 00:04:50.350 ], 00:04:50.350 "product_name": "Malloc disk", 00:04:50.350 "block_size": 512, 00:04:50.350 "num_blocks": 16384, 00:04:50.350 "uuid": "e903fbc0-9949-4aee-bbdb-980e5f1dfeb1", 00:04:50.350 "assigned_rate_limits": { 00:04:50.350 "rw_ios_per_sec": 0, 00:04:50.350 "rw_mbytes_per_sec": 0, 00:04:50.350 "r_mbytes_per_sec": 0, 00:04:50.350 "w_mbytes_per_sec": 0 00:04:50.350 }, 00:04:50.350 "claimed": true, 00:04:50.350 "claim_type": "exclusive_write", 00:04:50.350 "zoned": false, 00:04:50.350 "supported_io_types": { 00:04:50.350 "read": true, 00:04:50.350 "write": true, 00:04:50.350 "unmap": true, 00:04:50.350 "write_zeroes": true, 00:04:50.350 "flush": true, 00:04:50.350 "reset": true, 00:04:50.350 "compare": false, 00:04:50.350 "compare_and_write": false, 00:04:50.350 "abort": true, 00:04:50.350 "nvme_admin": false, 00:04:50.350 "nvme_io": false 00:04:50.350 }, 00:04:50.350 "memory_domains": [ 00:04:50.350 { 00:04:50.350 "dma_device_id": "system", 00:04:50.350 "dma_device_type": 1 00:04:50.350 }, 00:04:50.350 { 00:04:50.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.350 "dma_device_type": 2 00:04:50.350 } 00:04:50.350 ], 00:04:50.350 "driver_specific": {} 00:04:50.350 }, 00:04:50.350 { 00:04:50.350 "name": "Passthru0", 00:04:50.350 "aliases": [ 00:04:50.350 "9d15bdec-8978-592a-b43d-87149ac60126" 00:04:50.350 ], 00:04:50.350 "product_name": "passthru", 00:04:50.350 "block_size": 512, 00:04:50.350 "num_blocks": 16384, 00:04:50.350 "uuid": "9d15bdec-8978-592a-b43d-87149ac60126", 00:04:50.350 "assigned_rate_limits": { 00:04:50.350 "rw_ios_per_sec": 0, 00:04:50.350 "rw_mbytes_per_sec": 0, 00:04:50.350 "r_mbytes_per_sec": 0, 00:04:50.350 
"w_mbytes_per_sec": 0 00:04:50.350 }, 00:04:50.350 "claimed": false, 00:04:50.350 "zoned": false, 00:04:50.350 "supported_io_types": { 00:04:50.350 "read": true, 00:04:50.350 "write": true, 00:04:50.350 "unmap": true, 00:04:50.350 "write_zeroes": true, 00:04:50.350 "flush": true, 00:04:50.350 "reset": true, 00:04:50.350 "compare": false, 00:04:50.350 "compare_and_write": false, 00:04:50.350 "abort": true, 00:04:50.350 "nvme_admin": false, 00:04:50.350 "nvme_io": false 00:04:50.350 }, 00:04:50.350 "memory_domains": [ 00:04:50.350 { 00:04:50.350 "dma_device_id": "system", 00:04:50.350 "dma_device_type": 1 00:04:50.350 }, 00:04:50.350 { 00:04:50.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.350 "dma_device_type": 2 00:04:50.350 } 00:04:50.350 ], 00:04:50.350 "driver_specific": { 00:04:50.350 "passthru": { 00:04:50.350 "name": "Passthru0", 00:04:50.350 "base_bdev_name": "Malloc2" 00:04:50.350 } 00:04:50.350 } 00:04:50.350 } 00:04:50.350 ]' 00:04:50.350 11:28:20 -- rpc/rpc.sh@21 -- # jq length 00:04:50.350 11:28:20 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:50.350 11:28:20 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:50.350 11:28:20 -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.350 11:28:20 -- common/autotest_common.sh@10 -- # set +x 00:04:50.350 11:28:20 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.350 11:28:20 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:50.350 11:28:20 -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.350 11:28:20 -- common/autotest_common.sh@10 -- # set +x 00:04:50.350 11:28:20 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.350 11:28:20 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:50.350 11:28:20 -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.350 11:28:20 -- common/autotest_common.sh@10 -- # set +x 00:04:50.350 11:28:20 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.350 11:28:20 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:50.350 11:28:20 -- rpc/rpc.sh@26 -- # jq length 00:04:50.350 11:28:21 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:50.350 00:04:50.350 real 0m0.289s 00:04:50.350 user 0m0.170s 00:04:50.350 sys 0m0.060s 00:04:50.350 11:28:21 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:50.350 11:28:21 -- common/autotest_common.sh@10 -- # set +x 00:04:50.350 ************************************ 00:04:50.350 END TEST rpc_daemon_integrity 00:04:50.350 ************************************ 00:04:50.350 11:28:21 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:50.350 11:28:21 -- rpc/rpc.sh@84 -- # killprocess 2883807 00:04:50.350 11:28:21 -- common/autotest_common.sh@946 -- # '[' -z 2883807 ']' 00:04:50.350 11:28:21 -- common/autotest_common.sh@950 -- # kill -0 2883807 00:04:50.350 11:28:21 -- common/autotest_common.sh@951 -- # uname 00:04:50.350 11:28:21 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:50.350 11:28:21 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2883807 00:04:50.610 11:28:21 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:50.610 11:28:21 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:50.610 11:28:21 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2883807' 00:04:50.610 killing process with pid 2883807 00:04:50.610 11:28:21 -- common/autotest_common.sh@965 -- # kill 2883807 00:04:50.610 11:28:21 -- common/autotest_common.sh@970 -- # wait 2883807 00:04:50.870 00:04:50.870 real 0m2.671s 00:04:50.870 user 0m3.333s 
00:04:50.870 sys 0m0.834s 00:04:50.870 11:28:21 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:50.870 11:28:21 -- common/autotest_common.sh@10 -- # set +x 00:04:50.870 ************************************ 00:04:50.870 END TEST rpc 00:04:50.870 ************************************ 00:04:50.870 11:28:21 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:50.870 11:28:21 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:50.870 11:28:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:50.870 11:28:21 -- common/autotest_common.sh@10 -- # set +x 00:04:50.870 ************************************ 00:04:50.870 START TEST skip_rpc 00:04:50.870 ************************************ 00:04:50.870 11:28:21 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:51.130 * Looking for test storage... 00:04:51.130 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:51.130 11:28:21 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:51.130 11:28:21 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:51.130 11:28:21 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:51.130 11:28:21 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:51.130 11:28:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:51.130 11:28:21 -- common/autotest_common.sh@10 -- # set +x 00:04:51.130 ************************************ 00:04:51.130 START TEST skip_rpc 00:04:51.130 ************************************ 00:04:51.130 11:28:21 -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:04:51.130 11:28:21 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2884353 00:04:51.130 11:28:21 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:51.130 11:28:21 -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:51.130 11:28:21 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:51.130 [2024-05-15 11:28:21.784928] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
00:04:51.130 [2024-05-15 11:28:21.784980] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2884353 ] 00:04:51.130 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.130 [2024-05-15 11:28:21.855261] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.389 [2024-05-15 11:28:21.938522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.665 11:28:26 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:56.665 11:28:26 -- common/autotest_common.sh@648 -- # local es=0 00:04:56.665 11:28:26 -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:56.665 11:28:26 -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:56.665 11:28:26 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:56.665 11:28:26 -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:56.665 11:28:26 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:56.665 11:28:26 -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:56.665 11:28:26 -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.665 11:28:26 -- common/autotest_common.sh@10 -- # set +x 00:04:56.665 11:28:26 -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:56.665 11:28:26 -- common/autotest_common.sh@651 -- # es=1 00:04:56.665 11:28:26 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:56.665 11:28:26 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:56.665 11:28:26 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:56.665 11:28:26 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:56.665 11:28:26 -- rpc/skip_rpc.sh@23 -- # killprocess 2884353 00:04:56.665 11:28:26 -- common/autotest_common.sh@946 -- # '[' -z 2884353 ']' 00:04:56.665 11:28:26 -- common/autotest_common.sh@950 -- # kill -0 2884353 00:04:56.665 11:28:26 -- common/autotest_common.sh@951 -- # uname 00:04:56.665 11:28:26 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:56.665 11:28:26 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2884353 00:04:56.665 11:28:26 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:56.665 11:28:26 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:56.665 11:28:26 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2884353' 00:04:56.665 killing process with pid 2884353 00:04:56.665 11:28:26 -- common/autotest_common.sh@965 -- # kill 2884353 00:04:56.665 11:28:26 -- common/autotest_common.sh@970 -- # wait 2884353 00:04:56.665 00:04:56.665 real 0m5.447s 00:04:56.665 user 0m5.176s 00:04:56.665 sys 0m0.304s 00:04:56.665 11:28:27 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:56.665 11:28:27 -- common/autotest_common.sh@10 -- # set +x 00:04:56.665 ************************************ 00:04:56.665 END TEST skip_rpc 00:04:56.665 ************************************ 00:04:56.665 11:28:27 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:56.665 11:28:27 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:56.665 11:28:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:56.665 11:28:27 -- common/autotest_common.sh@10 -- # set +x 00:04:56.665 ************************************ 00:04:56.665 START TEST skip_rpc_with_json 00:04:56.665 ************************************ 
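[annotation] The plain skip_rpc case that just ended above is a negative test: spdk_tgt runs with --no-rpc-server, and the NOT wrapper passes only if an RPC attempt fails. A rough equivalent, under the same illustrative paths as the earlier sketch:

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  sleep 5
  # no RPC server is listening, so this call must fail; invert the status
  if ./scripts/rpc.py -t 1 spdk_get_version; then echo FAIL; else echo PASS; fi
  kill %1; wait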
00:04:56.665 11:28:27 -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:04:56.665 11:28:27 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:56.665 11:28:27 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2885108 00:04:56.665 11:28:27 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:56.665 11:28:27 -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:56.665 11:28:27 -- rpc/skip_rpc.sh@31 -- # waitforlisten 2885108 00:04:56.665 11:28:27 -- common/autotest_common.sh@827 -- # '[' -z 2885108 ']' 00:04:56.665 11:28:27 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.665 11:28:27 -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:56.665 11:28:27 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.665 11:28:27 -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:56.665 11:28:27 -- common/autotest_common.sh@10 -- # set +x 00:04:56.665 [2024-05-15 11:28:27.332226] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:04:56.665 [2024-05-15 11:28:27.332286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2885108 ] 00:04:56.665 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.665 [2024-05-15 11:28:27.406928] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.924 [2024-05-15 11:28:27.501571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.492 11:28:28 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:57.492 11:28:28 -- common/autotest_common.sh@860 -- # return 0 00:04:57.492 11:28:28 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:57.492 11:28:28 -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.492 11:28:28 -- common/autotest_common.sh@10 -- # set +x 00:04:57.492 [2024-05-15 11:28:28.147122] nvmf_rpc.c:2531:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:57.492 request: 00:04:57.492 { 00:04:57.492 "trtype": "tcp", 00:04:57.492 "method": "nvmf_get_transports", 00:04:57.492 "req_id": 1 00:04:57.492 } 00:04:57.492 Got JSON-RPC error response 00:04:57.492 response: 00:04:57.492 { 00:04:57.492 "code": -19, 00:04:57.492 "message": "No such device" 00:04:57.492 } 00:04:57.492 11:28:28 -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:57.492 11:28:28 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:57.492 11:28:28 -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.492 11:28:28 -- common/autotest_common.sh@10 -- # set +x 00:04:57.492 [2024-05-15 11:28:28.159220] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:57.492 11:28:28 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.492 11:28:28 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:57.492 11:28:28 -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.492 11:28:28 -- common/autotest_common.sh@10 -- # set +x 00:04:57.752 11:28:28 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.752 11:28:28 -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:57.752 { 
00:04:57.752 "subsystems": [ 00:04:57.752 { 00:04:57.752 "subsystem": "keyring", 00:04:57.752 "config": [] 00:04:57.752 }, 00:04:57.752 { 00:04:57.752 "subsystem": "iobuf", 00:04:57.752 "config": [ 00:04:57.752 { 00:04:57.752 "method": "iobuf_set_options", 00:04:57.752 "params": { 00:04:57.752 "small_pool_count": 8192, 00:04:57.752 "large_pool_count": 1024, 00:04:57.752 "small_bufsize": 8192, 00:04:57.752 "large_bufsize": 135168 00:04:57.752 } 00:04:57.752 } 00:04:57.752 ] 00:04:57.752 }, 00:04:57.752 { 00:04:57.752 "subsystem": "sock", 00:04:57.752 "config": [ 00:04:57.752 { 00:04:57.752 "method": "sock_impl_set_options", 00:04:57.752 "params": { 00:04:57.752 "impl_name": "posix", 00:04:57.752 "recv_buf_size": 2097152, 00:04:57.752 "send_buf_size": 2097152, 00:04:57.752 "enable_recv_pipe": true, 00:04:57.752 "enable_quickack": false, 00:04:57.752 "enable_placement_id": 0, 00:04:57.752 "enable_zerocopy_send_server": true, 00:04:57.752 "enable_zerocopy_send_client": false, 00:04:57.752 "zerocopy_threshold": 0, 00:04:57.752 "tls_version": 0, 00:04:57.752 "enable_ktls": false 00:04:57.752 } 00:04:57.752 }, 00:04:57.752 { 00:04:57.752 "method": "sock_impl_set_options", 00:04:57.752 "params": { 00:04:57.752 "impl_name": "ssl", 00:04:57.752 "recv_buf_size": 4096, 00:04:57.752 "send_buf_size": 4096, 00:04:57.752 "enable_recv_pipe": true, 00:04:57.752 "enable_quickack": false, 00:04:57.752 "enable_placement_id": 0, 00:04:57.752 "enable_zerocopy_send_server": true, 00:04:57.752 "enable_zerocopy_send_client": false, 00:04:57.752 "zerocopy_threshold": 0, 00:04:57.752 "tls_version": 0, 00:04:57.752 "enable_ktls": false 00:04:57.752 } 00:04:57.752 } 00:04:57.752 ] 00:04:57.752 }, 00:04:57.752 { 00:04:57.752 "subsystem": "vmd", 00:04:57.752 "config": [] 00:04:57.752 }, 00:04:57.752 { 00:04:57.752 "subsystem": "accel", 00:04:57.752 "config": [ 00:04:57.752 { 00:04:57.752 "method": "accel_set_options", 00:04:57.752 "params": { 00:04:57.752 "small_cache_size": 128, 00:04:57.752 "large_cache_size": 16, 00:04:57.752 "task_count": 2048, 00:04:57.752 "sequence_count": 2048, 00:04:57.752 "buf_count": 2048 00:04:57.752 } 00:04:57.752 } 00:04:57.752 ] 00:04:57.752 }, 00:04:57.752 { 00:04:57.752 "subsystem": "bdev", 00:04:57.752 "config": [ 00:04:57.752 { 00:04:57.752 "method": "bdev_set_options", 00:04:57.752 "params": { 00:04:57.752 "bdev_io_pool_size": 65535, 00:04:57.752 "bdev_io_cache_size": 256, 00:04:57.752 "bdev_auto_examine": true, 00:04:57.752 "iobuf_small_cache_size": 128, 00:04:57.752 "iobuf_large_cache_size": 16 00:04:57.752 } 00:04:57.752 }, 00:04:57.752 { 00:04:57.752 "method": "bdev_raid_set_options", 00:04:57.752 "params": { 00:04:57.752 "process_window_size_kb": 1024 00:04:57.752 } 00:04:57.752 }, 00:04:57.752 { 00:04:57.752 "method": "bdev_iscsi_set_options", 00:04:57.752 "params": { 00:04:57.752 "timeout_sec": 30 00:04:57.752 } 00:04:57.752 }, 00:04:57.752 { 00:04:57.752 "method": "bdev_nvme_set_options", 00:04:57.752 "params": { 00:04:57.752 "action_on_timeout": "none", 00:04:57.752 "timeout_us": 0, 00:04:57.752 "timeout_admin_us": 0, 00:04:57.752 "keep_alive_timeout_ms": 10000, 00:04:57.752 "arbitration_burst": 0, 00:04:57.752 "low_priority_weight": 0, 00:04:57.752 "medium_priority_weight": 0, 00:04:57.752 "high_priority_weight": 0, 00:04:57.752 "nvme_adminq_poll_period_us": 10000, 00:04:57.752 "nvme_ioq_poll_period_us": 0, 00:04:57.752 "io_queue_requests": 0, 00:04:57.752 "delay_cmd_submit": true, 00:04:57.752 "transport_retry_count": 4, 00:04:57.752 "bdev_retry_count": 3, 00:04:57.752 
"transport_ack_timeout": 0, 00:04:57.752 "ctrlr_loss_timeout_sec": 0, 00:04:57.752 "reconnect_delay_sec": 0, 00:04:57.752 "fast_io_fail_timeout_sec": 0, 00:04:57.752 "disable_auto_failback": false, 00:04:57.752 "generate_uuids": false, 00:04:57.752 "transport_tos": 0, 00:04:57.752 "nvme_error_stat": false, 00:04:57.752 "rdma_srq_size": 0, 00:04:57.752 "io_path_stat": false, 00:04:57.752 "allow_accel_sequence": false, 00:04:57.752 "rdma_max_cq_size": 0, 00:04:57.752 "rdma_cm_event_timeout_ms": 0, 00:04:57.752 "dhchap_digests": [ 00:04:57.752 "sha256", 00:04:57.752 "sha384", 00:04:57.752 "sha512" 00:04:57.752 ], 00:04:57.752 "dhchap_dhgroups": [ 00:04:57.752 "null", 00:04:57.752 "ffdhe2048", 00:04:57.752 "ffdhe3072", 00:04:57.752 "ffdhe4096", 00:04:57.752 "ffdhe6144", 00:04:57.752 "ffdhe8192" 00:04:57.752 ] 00:04:57.752 } 00:04:57.752 }, 00:04:57.752 { 00:04:57.752 "method": "bdev_nvme_set_hotplug", 00:04:57.752 "params": { 00:04:57.752 "period_us": 100000, 00:04:57.752 "enable": false 00:04:57.752 } 00:04:57.752 }, 00:04:57.752 { 00:04:57.752 "method": "bdev_wait_for_examine" 00:04:57.752 } 00:04:57.752 ] 00:04:57.752 }, 00:04:57.752 { 00:04:57.752 "subsystem": "scsi", 00:04:57.752 "config": null 00:04:57.752 }, 00:04:57.752 { 00:04:57.752 "subsystem": "scheduler", 00:04:57.752 "config": [ 00:04:57.752 { 00:04:57.752 "method": "framework_set_scheduler", 00:04:57.752 "params": { 00:04:57.752 "name": "static" 00:04:57.752 } 00:04:57.752 } 00:04:57.752 ] 00:04:57.752 }, 00:04:57.752 { 00:04:57.752 "subsystem": "vhost_scsi", 00:04:57.752 "config": [] 00:04:57.752 }, 00:04:57.752 { 00:04:57.752 "subsystem": "vhost_blk", 00:04:57.752 "config": [] 00:04:57.752 }, 00:04:57.752 { 00:04:57.752 "subsystem": "ublk", 00:04:57.752 "config": [] 00:04:57.752 }, 00:04:57.752 { 00:04:57.752 "subsystem": "nbd", 00:04:57.752 "config": [] 00:04:57.752 }, 00:04:57.752 { 00:04:57.752 "subsystem": "nvmf", 00:04:57.752 "config": [ 00:04:57.752 { 00:04:57.752 "method": "nvmf_set_config", 00:04:57.752 "params": { 00:04:57.752 "discovery_filter": "match_any", 00:04:57.752 "admin_cmd_passthru": { 00:04:57.752 "identify_ctrlr": false 00:04:57.752 } 00:04:57.752 } 00:04:57.752 }, 00:04:57.752 { 00:04:57.752 "method": "nvmf_set_max_subsystems", 00:04:57.752 "params": { 00:04:57.752 "max_subsystems": 1024 00:04:57.752 } 00:04:57.752 }, 00:04:57.752 { 00:04:57.752 "method": "nvmf_set_crdt", 00:04:57.752 "params": { 00:04:57.752 "crdt1": 0, 00:04:57.752 "crdt2": 0, 00:04:57.752 "crdt3": 0 00:04:57.752 } 00:04:57.752 }, 00:04:57.752 { 00:04:57.752 "method": "nvmf_create_transport", 00:04:57.752 "params": { 00:04:57.752 "trtype": "TCP", 00:04:57.752 "max_queue_depth": 128, 00:04:57.752 "max_io_qpairs_per_ctrlr": 127, 00:04:57.752 "in_capsule_data_size": 4096, 00:04:57.752 "max_io_size": 131072, 00:04:57.752 "io_unit_size": 131072, 00:04:57.752 "max_aq_depth": 128, 00:04:57.752 "num_shared_buffers": 511, 00:04:57.752 "buf_cache_size": 4294967295, 00:04:57.752 "dif_insert_or_strip": false, 00:04:57.752 "zcopy": false, 00:04:57.752 "c2h_success": true, 00:04:57.752 "sock_priority": 0, 00:04:57.752 "abort_timeout_sec": 1, 00:04:57.752 "ack_timeout": 0, 00:04:57.753 "data_wr_pool_size": 0 00:04:57.753 } 00:04:57.753 } 00:04:57.753 ] 00:04:57.753 }, 00:04:57.753 { 00:04:57.753 "subsystem": "iscsi", 00:04:57.753 "config": [ 00:04:57.753 { 00:04:57.753 "method": "iscsi_set_options", 00:04:57.753 "params": { 00:04:57.753 "node_base": "iqn.2016-06.io.spdk", 00:04:57.753 "max_sessions": 128, 00:04:57.753 "max_connections_per_session": 2, 
00:04:57.753 "max_queue_depth": 64, 00:04:57.753 "default_time2wait": 2, 00:04:57.753 "default_time2retain": 20, 00:04:57.753 "first_burst_length": 8192, 00:04:57.753 "immediate_data": true, 00:04:57.753 "allow_duplicated_isid": false, 00:04:57.753 "error_recovery_level": 0, 00:04:57.753 "nop_timeout": 60, 00:04:57.753 "nop_in_interval": 30, 00:04:57.753 "disable_chap": false, 00:04:57.753 "require_chap": false, 00:04:57.753 "mutual_chap": false, 00:04:57.753 "chap_group": 0, 00:04:57.753 "max_large_datain_per_connection": 64, 00:04:57.753 "max_r2t_per_connection": 4, 00:04:57.753 "pdu_pool_size": 36864, 00:04:57.753 "immediate_data_pool_size": 16384, 00:04:57.753 "data_out_pool_size": 2048 00:04:57.753 } 00:04:57.753 } 00:04:57.753 ] 00:04:57.753 } 00:04:57.753 ] 00:04:57.753 } 00:04:57.753 11:28:28 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:57.753 11:28:28 -- rpc/skip_rpc.sh@40 -- # killprocess 2885108 00:04:57.753 11:28:28 -- common/autotest_common.sh@946 -- # '[' -z 2885108 ']' 00:04:57.753 11:28:28 -- common/autotest_common.sh@950 -- # kill -0 2885108 00:04:57.753 11:28:28 -- common/autotest_common.sh@951 -- # uname 00:04:57.753 11:28:28 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:57.753 11:28:28 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2885108 00:04:57.753 11:28:28 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:57.753 11:28:28 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:57.753 11:28:28 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2885108' 00:04:57.753 killing process with pid 2885108 00:04:57.753 11:28:28 -- common/autotest_common.sh@965 -- # kill 2885108 00:04:57.753 11:28:28 -- common/autotest_common.sh@970 -- # wait 2885108 00:04:58.011 11:28:28 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2885303 00:04:58.011 11:28:28 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:58.011 11:28:28 -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:03.284 11:28:33 -- rpc/skip_rpc.sh@50 -- # killprocess 2885303 00:05:03.284 11:28:33 -- common/autotest_common.sh@946 -- # '[' -z 2885303 ']' 00:05:03.284 11:28:33 -- common/autotest_common.sh@950 -- # kill -0 2885303 00:05:03.284 11:28:33 -- common/autotest_common.sh@951 -- # uname 00:05:03.284 11:28:33 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:03.284 11:28:33 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2885303 00:05:03.284 11:28:33 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:03.284 11:28:33 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:03.284 11:28:33 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2885303' 00:05:03.284 killing process with pid 2885303 00:05:03.284 11:28:33 -- common/autotest_common.sh@965 -- # kill 2885303 00:05:03.284 11:28:33 -- common/autotest_common.sh@970 -- # wait 2885303 00:05:03.543 11:28:34 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:03.543 11:28:34 -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:03.543 00:05:03.543 real 0m6.893s 00:05:03.543 user 0m6.630s 00:05:03.543 sys 0m0.698s 00:05:03.543 11:28:34 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:03.543 11:28:34 -- common/autotest_common.sh@10 -- # set +x 
00:05:03.543 ************************************ 00:05:03.543 END TEST skip_rpc_with_json 00:05:03.543 ************************************ 00:05:03.543 11:28:34 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:03.543 11:28:34 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:03.543 11:28:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:03.543 11:28:34 -- common/autotest_common.sh@10 -- # set +x 00:05:03.543 ************************************ 00:05:03.543 START TEST skip_rpc_with_delay 00:05:03.543 ************************************ 00:05:03.543 11:28:34 -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:05:03.543 11:28:34 -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:03.543 11:28:34 -- common/autotest_common.sh@648 -- # local es=0 00:05:03.543 11:28:34 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:03.543 11:28:34 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:03.543 11:28:34 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:03.543 11:28:34 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:03.543 11:28:34 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:03.543 11:28:34 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:03.543 11:28:34 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:03.543 11:28:34 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:03.543 11:28:34 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:03.543 11:28:34 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:03.801 [2024-05-15 11:28:34.312329] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
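[annotation] skip_rpc_with_delay, starting here, feeds spdk_tgt a contradictory flag pair and counts the startup error just printed as a pass: --wait-for-rpc defers initialization until an RPC arrives, which can never happen under --no-rpc-server. Roughly:

  # must exit non-zero with the app.c error shown above
  if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo FAIL: target started despite conflicting flags
  else
    echo PASS
  fi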
00:05:03.801 [2024-05-15 11:28:34.312408] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:03.801 11:28:34 -- common/autotest_common.sh@651 -- # es=1 00:05:03.801 11:28:34 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:03.801 11:28:34 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:03.801 11:28:34 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:03.801 00:05:03.801 real 0m0.070s 00:05:03.801 user 0m0.044s 00:05:03.801 sys 0m0.026s 00:05:03.801 11:28:34 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:03.801 11:28:34 -- common/autotest_common.sh@10 -- # set +x 00:05:03.801 ************************************ 00:05:03.801 END TEST skip_rpc_with_delay 00:05:03.801 ************************************ 00:05:03.801 11:28:34 -- rpc/skip_rpc.sh@77 -- # uname 00:05:03.801 11:28:34 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:03.801 11:28:34 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:03.801 11:28:34 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:03.801 11:28:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:03.801 11:28:34 -- common/autotest_common.sh@10 -- # set +x 00:05:03.801 ************************************ 00:05:03.801 START TEST exit_on_failed_rpc_init 00:05:03.801 ************************************ 00:05:03.801 11:28:34 -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:05:03.801 11:28:34 -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:03.801 11:28:34 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2886081 00:05:03.801 11:28:34 -- rpc/skip_rpc.sh@63 -- # waitforlisten 2886081 00:05:03.801 11:28:34 -- common/autotest_common.sh@827 -- # '[' -z 2886081 ']' 00:05:03.801 11:28:34 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.801 11:28:34 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:03.801 11:28:34 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.801 11:28:34 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:03.801 11:28:34 -- common/autotest_common.sh@10 -- # set +x 00:05:03.801 [2024-05-15 11:28:34.451383] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
00:05:03.801 [2024-05-15 11:28:34.451433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2886081 ] 00:05:03.801 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.801 [2024-05-15 11:28:34.516242] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.060 [2024-05-15 11:28:34.607684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.629 11:28:35 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:04.629 11:28:35 -- common/autotest_common.sh@860 -- # return 0 00:05:04.629 11:28:35 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:04.629 11:28:35 -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:04.629 11:28:35 -- common/autotest_common.sh@648 -- # local es=0 00:05:04.629 11:28:35 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:04.629 11:28:35 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.629 11:28:35 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:04.629 11:28:35 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.629 11:28:35 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:04.629 11:28:35 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.629 11:28:35 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:04.629 11:28:35 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.629 11:28:35 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:04.629 11:28:35 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:04.629 [2024-05-15 11:28:35.309948] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:05:04.629 [2024-05-15 11:28:35.310005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2886262 ] 00:05:04.629 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.629 [2024-05-15 11:28:35.379192] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.888 [2024-05-15 11:28:35.464201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.888 [2024-05-15 11:28:35.464280] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
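[annotation] exit_on_failed_rpc_init runs two targets against the default RPC socket; the second instance fails to bind with the rpc.c "in use" error above, stops itself, and the test checks its non-zero exit. A hand-run sketch under the same illustrative paths:

  ./build/bin/spdk_tgt -m 0x1 &                 # first instance owns /var/tmp/spdk.sock
  until ./scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do sleep 0.5; done
  # second instance, different core mask, same default socket: must fail to init RPC
  if ./build/bin/spdk_tgt -m 0x2; then echo FAIL; else echo PASS; fi
  kill %1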
00:05:04.889 [2024-05-15 11:28:35.464292] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:04.889 [2024-05-15 11:28:35.464301] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:04.889 11:28:35 -- common/autotest_common.sh@651 -- # es=234 00:05:04.889 11:28:35 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:04.889 11:28:35 -- common/autotest_common.sh@660 -- # es=106 00:05:04.889 11:28:35 -- common/autotest_common.sh@661 -- # case "$es" in 00:05:04.889 11:28:35 -- common/autotest_common.sh@668 -- # es=1 00:05:04.889 11:28:35 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:04.889 11:28:35 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:04.889 11:28:35 -- rpc/skip_rpc.sh@70 -- # killprocess 2886081 00:05:04.889 11:28:35 -- common/autotest_common.sh@946 -- # '[' -z 2886081 ']' 00:05:04.889 11:28:35 -- common/autotest_common.sh@950 -- # kill -0 2886081 00:05:04.889 11:28:35 -- common/autotest_common.sh@951 -- # uname 00:05:04.889 11:28:35 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:04.889 11:28:35 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2886081 00:05:04.889 11:28:35 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:04.889 11:28:35 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:04.889 11:28:35 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2886081' 00:05:04.889 killing process with pid 2886081 00:05:04.889 11:28:35 -- common/autotest_common.sh@965 -- # kill 2886081 00:05:04.889 11:28:35 -- common/autotest_common.sh@970 -- # wait 2886081 00:05:05.459 00:05:05.459 real 0m1.566s 00:05:05.459 user 0m1.782s 00:05:05.459 sys 0m0.460s 00:05:05.459 11:28:35 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:05.459 11:28:35 -- common/autotest_common.sh@10 -- # set +x 00:05:05.459 ************************************ 00:05:05.459 END TEST exit_on_failed_rpc_init 00:05:05.459 ************************************ 00:05:05.459 11:28:36 -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:05.459 00:05:05.459 real 0m14.436s 00:05:05.459 user 0m13.808s 00:05:05.459 sys 0m1.779s 00:05:05.459 11:28:36 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:05.459 11:28:36 -- common/autotest_common.sh@10 -- # set +x 00:05:05.459 ************************************ 00:05:05.459 END TEST skip_rpc 00:05:05.459 ************************************ 00:05:05.459 11:28:36 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:05.459 11:28:36 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:05.459 11:28:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:05.459 11:28:36 -- common/autotest_common.sh@10 -- # set +x 00:05:05.459 ************************************ 00:05:05.459 START TEST rpc_client 00:05:05.459 ************************************ 00:05:05.459 11:28:36 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:05.459 * Looking for test storage... 
00:05:05.459 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:05:05.459 11:28:36 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:05.459 OK 00:05:05.459 11:28:36 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:05.459 00:05:05.459 real 0m0.100s 00:05:05.459 user 0m0.034s 00:05:05.459 sys 0m0.073s 00:05:05.459 11:28:36 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:05.459 11:28:36 -- common/autotest_common.sh@10 -- # set +x 00:05:05.459 ************************************ 00:05:05.459 END TEST rpc_client 00:05:05.459 ************************************ 00:05:05.718 11:28:36 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:05.718 11:28:36 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:05.718 11:28:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:05.718 11:28:36 -- common/autotest_common.sh@10 -- # set +x 00:05:05.718 ************************************ 00:05:05.718 START TEST json_config 00:05:05.718 ************************************ 00:05:05.718 11:28:36 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:05.718 11:28:36 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:05.718 11:28:36 -- nvmf/common.sh@7 -- # uname -s 00:05:05.718 11:28:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:05.718 11:28:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:05.718 11:28:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:05.718 11:28:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:05.718 11:28:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:05.718 11:28:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:05.718 11:28:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:05.718 11:28:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:05.718 11:28:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:05.718 11:28:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:05.718 11:28:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:05:05.718 11:28:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:05:05.718 11:28:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:05.718 11:28:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:05.718 11:28:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:05.718 11:28:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:05.719 11:28:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:05.719 11:28:36 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:05.719 11:28:36 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:05.719 11:28:36 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:05.719 11:28:36 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.719 11:28:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.719 11:28:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.719 11:28:36 -- paths/export.sh@5 -- # export PATH 00:05:05.719 11:28:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.719 11:28:36 -- nvmf/common.sh@47 -- # : 0 00:05:05.719 11:28:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:05.719 11:28:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:05.719 11:28:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:05.719 11:28:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:05.719 11:28:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:05.719 11:28:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:05.719 11:28:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:05.719 11:28:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:05.719 11:28:36 -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:05:05.719 11:28:36 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:05.719 11:28:36 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:05.719 11:28:36 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:05.719 11:28:36 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:05.719 11:28:36 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:05.719 11:28:36 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:05.719 11:28:36 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:05.719 11:28:36 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:05.719 11:28:36 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:05.719 11:28:36 -- 
json_config/json_config.sh@33 -- # declare -A app_params 00:05:05.719 11:28:36 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:05:05.719 11:28:36 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:05.719 11:28:36 -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:05.719 11:28:36 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:05.719 11:28:36 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:05.719 INFO: JSON configuration test init 00:05:05.719 11:28:36 -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:05.719 11:28:36 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:05.719 11:28:36 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:05.719 11:28:36 -- common/autotest_common.sh@10 -- # set +x 00:05:05.719 11:28:36 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:05.719 11:28:36 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:05.719 11:28:36 -- common/autotest_common.sh@10 -- # set +x 00:05:05.719 11:28:36 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:05.719 11:28:36 -- json_config/common.sh@9 -- # local app=target 00:05:05.719 11:28:36 -- json_config/common.sh@10 -- # shift 00:05:05.719 11:28:36 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:05.719 11:28:36 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:05.719 11:28:36 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:05.719 11:28:36 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:05.719 11:28:36 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:05.719 11:28:36 -- json_config/common.sh@22 -- # app_pid["$app"]=2886550 00:05:05.719 11:28:36 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:05.719 Waiting for target to run... 00:05:05.719 11:28:36 -- json_config/common.sh@25 -- # waitforlisten 2886550 /var/tmp/spdk_tgt.sock 00:05:05.719 11:28:36 -- common/autotest_common.sh@827 -- # '[' -z 2886550 ']' 00:05:05.719 11:28:36 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:05.719 11:28:36 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:05.719 11:28:36 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:05.719 11:28:36 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:05.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:05.719 11:28:36 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:05.719 11:28:36 -- common/autotest_common.sh@10 -- # set +x 00:05:05.719 [2024-05-15 11:28:36.451767] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
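The target launch traced above boils down to spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc, with waitforlisten then blocking until the socket answers. A rough stand-in for that wait, assuming rpc.py from the SPDK tree (rpc_get_methods and framework_start_init are standard SPDK RPCs, but treat the whole block as a sketch):

    RPC="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &

    for _ in $(seq 1 100); do               # waitforlisten polls much like this
        $RPC rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done

    # Under --wait-for-rpc the app idles in a pre-init state; the json_config
    # test pushes its configuration RPCs in that window before starting it.
    $RPC framework_start_init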
00:05:05.719 [2024-05-15 11:28:36.451822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2886550 ] 00:05:05.978 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.238 [2024-05-15 11:28:36.749594] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.238 [2024-05-15 11:28:36.825137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.498 11:28:37 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:06.498 11:28:37 -- common/autotest_common.sh@860 -- # return 0 00:05:06.498 11:28:37 -- json_config/common.sh@26 -- # echo '' 00:05:06.498 00:05:06.498 11:28:37 -- json_config/json_config.sh@269 -- # create_accel_config 00:05:06.498 11:28:37 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:06.498 11:28:37 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:06.499 11:28:37 -- common/autotest_common.sh@10 -- # set +x 00:05:06.499 11:28:37 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:06.499 11:28:37 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:06.499 11:28:37 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:06.499 11:28:37 -- common/autotest_common.sh@10 -- # set +x 00:05:06.758 11:28:37 -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:06.758 11:28:37 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:06.758 11:28:37 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:10.049 11:28:40 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:10.049 11:28:40 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:10.049 11:28:40 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:10.049 11:28:40 -- common/autotest_common.sh@10 -- # set +x 00:05:10.049 11:28:40 -- json_config/json_config.sh@45 -- # local ret=0 00:05:10.049 11:28:40 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:10.049 11:28:40 -- json_config/json_config.sh@46 -- # local enabled_types 00:05:10.049 11:28:40 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:10.049 11:28:40 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:10.049 11:28:40 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:10.049 11:28:40 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:10.049 11:28:40 -- json_config/json_config.sh@48 -- # local get_types 00:05:10.049 11:28:40 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:10.049 11:28:40 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:10.049 11:28:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:10.049 11:28:40 -- common/autotest_common.sh@10 -- # set +x 00:05:10.049 11:28:40 -- json_config/json_config.sh@55 -- # return 0 00:05:10.049 11:28:40 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:10.049 11:28:40 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:10.049 11:28:40 -- json_config/json_config.sh@286 -- # 
[[ 0 -eq 1 ]] 00:05:10.049 11:28:40 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:10.049 11:28:40 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:10.049 11:28:40 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:10.049 11:28:40 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:10.049 11:28:40 -- common/autotest_common.sh@10 -- # set +x 00:05:10.049 11:28:40 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:10.050 11:28:40 -- json_config/json_config.sh@233 -- # [[ rdma == \r\d\m\a ]] 00:05:10.050 11:28:40 -- json_config/json_config.sh@234 -- # TEST_TRANSPORT=rdma 00:05:10.050 11:28:40 -- json_config/json_config.sh@234 -- # nvmftestinit 00:05:10.050 11:28:40 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:05:10.050 11:28:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:10.050 11:28:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:05:10.050 11:28:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:05:10.050 11:28:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:05:10.050 11:28:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:10.050 11:28:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:05:10.050 11:28:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:10.050 11:28:40 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:05:10.050 11:28:40 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:05:10.050 11:28:40 -- nvmf/common.sh@285 -- # xtrace_disable 00:05:10.050 11:28:40 -- common/autotest_common.sh@10 -- # set +x 00:05:16.624 11:28:46 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:05:16.624 11:28:46 -- nvmf/common.sh@291 -- # pci_devs=() 00:05:16.624 11:28:46 -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:16.624 11:28:46 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:16.624 11:28:46 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:16.624 11:28:46 -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:16.624 11:28:46 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:16.624 11:28:46 -- nvmf/common.sh@295 -- # net_devs=() 00:05:16.624 11:28:46 -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:16.624 11:28:46 -- nvmf/common.sh@296 -- # e810=() 00:05:16.624 11:28:46 -- nvmf/common.sh@296 -- # local -ga e810 00:05:16.624 11:28:46 -- nvmf/common.sh@297 -- # x722=() 00:05:16.624 11:28:46 -- nvmf/common.sh@297 -- # local -ga x722 00:05:16.624 11:28:46 -- nvmf/common.sh@298 -- # mlx=() 00:05:16.624 11:28:46 -- nvmf/common.sh@298 -- # local -ga mlx 00:05:16.624 11:28:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:16.624 11:28:46 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:16.624 11:28:46 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:16.624 11:28:46 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:16.624 11:28:46 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:16.624 11:28:46 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:16.624 11:28:46 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:16.624 11:28:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:16.624 11:28:46 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:16.624 11:28:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 
00:05:16.624 11:28:46 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:16.624 11:28:46 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:16.624 11:28:46 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:05:16.624 11:28:46 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:05:16.624 11:28:46 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:05:16.624 11:28:46 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:05:16.624 11:28:46 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:05:16.624 11:28:46 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:16.624 11:28:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:16.624 11:28:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:05:16.624 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:05:16.624 11:28:46 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:05:16.624 11:28:46 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:05:16.624 11:28:46 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:16.624 11:28:46 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:16.624 11:28:46 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:05:16.624 11:28:46 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:05:16.624 11:28:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:16.624 11:28:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:05:16.624 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:05:16.624 11:28:46 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:05:16.624 11:28:46 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:05:16.624 11:28:46 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:16.624 11:28:46 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:16.624 11:28:46 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:05:16.624 11:28:46 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:05:16.624 11:28:46 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:16.624 11:28:46 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:05:16.624 11:28:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:16.624 11:28:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:16.624 11:28:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:05:16.624 11:28:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:16.624 11:28:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:05:16.624 Found net devices under 0000:18:00.0: mlx_0_0 00:05:16.624 11:28:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:05:16.624 11:28:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:16.624 11:28:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:16.624 11:28:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:05:16.624 11:28:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:16.624 11:28:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:05:16.624 Found net devices under 0000:18:00.1: mlx_0_1 00:05:16.624 11:28:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:05:16.624 11:28:46 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:05:16.624 11:28:46 -- nvmf/common.sh@403 -- # is_hw=yes 00:05:16.624 11:28:46 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:05:16.624 11:28:46 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:05:16.624 11:28:46 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:05:16.624 11:28:46 
-- nvmf/common.sh@409 -- # rdma_device_init 00:05:16.624 11:28:46 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:05:16.624 11:28:46 -- nvmf/common.sh@58 -- # uname 00:05:16.624 11:28:46 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:05:16.624 11:28:46 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:05:16.624 11:28:46 -- nvmf/common.sh@63 -- # modprobe ib_core 00:05:16.624 11:28:46 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:05:16.624 11:28:46 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:05:16.624 11:28:46 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:05:16.624 11:28:46 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:05:16.624 11:28:46 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:05:16.624 11:28:46 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:05:16.624 11:28:46 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:16.624 11:28:46 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:05:16.624 11:28:46 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:16.624 11:28:46 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:05:16.624 11:28:46 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:05:16.624 11:28:46 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:16.624 11:28:46 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:05:16.624 11:28:46 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:16.624 11:28:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:16.624 11:28:46 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:16.624 11:28:46 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:05:16.624 11:28:46 -- nvmf/common.sh@105 -- # continue 2 00:05:16.624 11:28:46 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:16.624 11:28:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:16.624 11:28:46 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:16.624 11:28:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:16.624 11:28:46 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:16.625 11:28:46 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:05:16.625 11:28:46 -- nvmf/common.sh@105 -- # continue 2 00:05:16.625 11:28:46 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:05:16.625 11:28:46 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:05:16.625 11:28:46 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:05:16.625 11:28:46 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:16.625 11:28:46 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:05:16.625 11:28:46 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:16.625 11:28:46 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:05:16.625 11:28:46 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:05:16.625 11:28:46 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:05:16.625 28: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:16.625 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:05:16.625 altname enp24s0f0np0 00:05:16.625 altname ens785f0np0 00:05:16.625 inet 192.168.100.8/24 scope global mlx_0_0 00:05:16.625 valid_lft forever preferred_lft forever 00:05:16.625 11:28:46 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:05:16.625 11:28:46 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:05:16.625 11:28:46 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:05:16.625 11:28:46 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:16.625 11:28:46 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:05:16.625 
11:28:46 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:16.625 11:28:46 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:05:16.625 11:28:46 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:05:16.625 11:28:46 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:05:16.625 29: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:16.625 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:05:16.625 altname enp24s0f1np1 00:05:16.625 altname ens785f1np1 00:05:16.625 inet 192.168.100.9/24 scope global mlx_0_1 00:05:16.625 valid_lft forever preferred_lft forever 00:05:16.625 11:28:46 -- nvmf/common.sh@411 -- # return 0 00:05:16.625 11:28:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:05:16.625 11:28:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:16.625 11:28:46 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:05:16.625 11:28:46 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:05:16.625 11:28:46 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:05:16.625 11:28:46 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:16.625 11:28:46 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:05:16.625 11:28:46 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:05:16.625 11:28:46 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:16.625 11:28:46 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:05:16.625 11:28:46 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:16.625 11:28:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:16.625 11:28:46 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:16.625 11:28:46 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:05:16.625 11:28:46 -- nvmf/common.sh@105 -- # continue 2 00:05:16.625 11:28:46 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:16.625 11:28:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:16.625 11:28:46 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:16.625 11:28:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:16.625 11:28:46 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:16.625 11:28:46 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:05:16.625 11:28:46 -- nvmf/common.sh@105 -- # continue 2 00:05:16.625 11:28:46 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:05:16.625 11:28:46 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:05:16.625 11:28:46 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:05:16.625 11:28:46 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:05:16.625 11:28:46 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:16.625 11:28:46 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:16.625 11:28:46 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:05:16.625 11:28:46 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:05:16.625 11:28:46 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:05:16.625 11:28:46 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:16.625 11:28:46 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:05:16.625 11:28:46 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:16.625 11:28:46 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:05:16.625 192.168.100.9' 00:05:16.625 11:28:46 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:05:16.625 192.168.100.9' 00:05:16.625 11:28:46 -- nvmf/common.sh@446 -- # head -n 1 00:05:16.625 11:28:46 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:16.625 11:28:46 -- 
nvmf/common.sh@447 -- # tail -n +2 00:05:16.625 11:28:46 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:05:16.625 192.168.100.9' 00:05:16.625 11:28:46 -- nvmf/common.sh@447 -- # head -n 1 00:05:16.625 11:28:46 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:16.625 11:28:46 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:05:16.625 11:28:46 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:16.625 11:28:46 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:05:16.625 11:28:46 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:05:16.625 11:28:46 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:05:16.625 11:28:46 -- json_config/json_config.sh@237 -- # [[ -z 192.168.100.8 ]] 00:05:16.625 11:28:46 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:16.625 11:28:46 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:16.625 MallocForNvmf0 00:05:16.625 11:28:47 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:16.625 11:28:47 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:16.625 MallocForNvmf1 00:05:16.625 11:28:47 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:05:16.625 11:28:47 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:05:16.883 [2024-05-15 11:28:47.516185] rdma.c:2712:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:05:16.883 [2024-05-15 11:28:47.544359] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1546490/0x1573100) succeed. 00:05:16.883 [2024-05-15 11:28:47.556664] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1548680/0x15d30c0) succeed. 
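Stripped of the harness plumbing, the storage side just configured above is three RPCs; the -c 0 request is why the transport warns that in-capsule data was raised to its 256-byte minimum:

    RPC="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"    # socket path from the log
    $RPC bdev_malloc_create 8 512  --name MallocForNvmf0   # 8 MiB, 512 B blocks
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MiB, 1 KiB blocks
    $RPC nvmf_create_transport -t rdma -u 8192 -c 0
    # -u: I/O unit size; -c: requested in-capsule data size (bumped to 256 above)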
00:05:16.883 11:28:47 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:16.883 11:28:47 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:17.141 11:28:47 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:17.141 11:28:47 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:17.398 11:28:47 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:17.398 11:28:47 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:17.398 11:28:48 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:17.398 11:28:48 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:17.657 [2024-05-15 11:28:48.257498] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:05:17.657 [2024-05-15 11:28:48.257827] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:17.657 11:28:48 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:17.657 11:28:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:17.657 11:28:48 -- common/autotest_common.sh@10 -- # set +x 00:05:17.657 11:28:48 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:17.657 11:28:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:17.657 11:28:48 -- common/autotest_common.sh@10 -- # set +x 00:05:17.657 11:28:48 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:17.657 11:28:48 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:17.657 11:28:48 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:17.916 MallocBdevForConfigChangeCheck 00:05:17.916 11:28:48 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:17.916 11:28:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:17.916 11:28:48 -- common/autotest_common.sh@10 -- # set +x 00:05:17.916 11:28:48 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:17.916 11:28:48 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:18.175 11:28:48 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:18.175 INFO: shutting down applications... 
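Collected from the xtrace above, the subsystem build-out plus the pre-shutdown snapshot reduce to the following; MallocBdevForConfigChangeCheck exists only so a later config diff has a known change to detect:

    RPC="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    NQN=nqn.2016-06.io.spdk:cnode1

    $RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001   # -a: any host
    $RPC nvmf_subsystem_add_ns "$NQN" MallocForNvmf0
    $RPC nvmf_subsystem_add_ns "$NQN" MallocForNvmf1
    $RPC nvmf_subsystem_add_listener "$NQN" -t rdma -a 192.168.100.8 -s 4420

    $RPC bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
    $RPC save_config > spdk_tgt_config.json    # snapshot used by the relaunch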
00:05:18.175 11:28:48 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:18.175 11:28:48 -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:18.175 11:28:48 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:18.175 11:28:48 -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:26.411 Calling clear_iscsi_subsystem 00:05:26.411 Calling clear_nvmf_subsystem 00:05:26.411 Calling clear_nbd_subsystem 00:05:26.411 Calling clear_ublk_subsystem 00:05:26.411 Calling clear_vhost_blk_subsystem 00:05:26.411 Calling clear_vhost_scsi_subsystem 00:05:26.411 Calling clear_bdev_subsystem 00:05:26.411 11:28:55 -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:05:26.411 11:28:55 -- json_config/json_config.sh@343 -- # count=100 00:05:26.411 11:28:55 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:26.411 11:28:55 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:26.411 11:28:55 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:26.411 11:28:55 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:26.411 11:28:56 -- json_config/json_config.sh@345 -- # break 00:05:26.411 11:28:56 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:26.411 11:28:56 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:26.411 11:28:56 -- json_config/common.sh@31 -- # local app=target 00:05:26.411 11:28:56 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:26.411 11:28:56 -- json_config/common.sh@35 -- # [[ -n 2886550 ]] 00:05:26.411 11:28:56 -- json_config/common.sh@38 -- # kill -SIGINT 2886550 00:05:26.411 [2024-05-15 11:28:56.281659] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:05:26.411 11:28:56 -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:26.411 11:28:56 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:26.411 11:28:56 -- json_config/common.sh@41 -- # kill -0 2886550 00:05:26.411 11:28:56 -- json_config/common.sh@45 -- # sleep 0.5 00:05:26.411 [2024-05-15 11:28:56.397411] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:05:26.411 11:28:56 -- json_config/common.sh@40 -- # (( i++ )) 00:05:26.411 11:28:56 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:26.411 11:28:56 -- json_config/common.sh@41 -- # kill -0 2886550 00:05:26.411 11:28:56 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:26.411 11:28:56 -- json_config/common.sh@43 -- # break 00:05:26.411 11:28:56 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:26.411 11:28:56 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:26.411 SPDK target shutdown done 00:05:26.411 11:28:56 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:26.411 INFO: relaunching applications... 
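The teardown above is json_config_test_shutdown_app: SIGINT the target, then poll kill -0 for up to thirty half-second intervals. As a standalone helper (the escalation path on timeout is an assumption; this log never reaches it):

    shutdown_app() {
        local pid=$1
        kill -SIGINT "$pid"
        for ((i = 0; i < 30; i++)); do        # same bounds as the xtrace above
            if ! kill -0 "$pid" 2>/dev/null; then
                echo "SPDK target shutdown done"
                return 0
            fi
            sleep 0.5
        done
        kill -9 "$pid" 2>/dev/null            # assumed fallback, not from the log
        return 1
    }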
00:05:26.411 11:28:56 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:26.411 11:28:56 -- json_config/common.sh@9 -- # local app=target 00:05:26.411 11:28:56 -- json_config/common.sh@10 -- # shift 00:05:26.411 11:28:56 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:26.411 11:28:56 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:26.411 11:28:56 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:26.411 11:28:56 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:26.411 11:28:56 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:26.411 11:28:56 -- json_config/common.sh@22 -- # app_pid["$app"]=2891335 00:05:26.411 11:28:56 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:26.411 Waiting for target to run... 00:05:26.411 11:28:56 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:26.411 11:28:56 -- json_config/common.sh@25 -- # waitforlisten 2891335 /var/tmp/spdk_tgt.sock 00:05:26.411 11:28:56 -- common/autotest_common.sh@827 -- # '[' -z 2891335 ']' 00:05:26.411 11:28:56 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:26.411 11:28:56 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:26.411 11:28:56 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:26.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:26.411 11:28:56 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:26.411 11:28:56 -- common/autotest_common.sh@10 -- # set +x 00:05:26.411 [2024-05-15 11:28:56.846551] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:05:26.411 [2024-05-15 11:28:56.846618] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2891335 ] 00:05:26.411 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.411 [2024-05-15 11:28:57.139207] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.671 [2024-05-15 11:28:57.218139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.957 [2024-05-15 11:29:00.247706] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xff8020/0x1024480) succeed. 00:05:29.957 [2024-05-15 11:29:00.259740] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xffa210/0x1084440) succeed. 
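The relaunch above swaps RPC replay for a single flag: the snapshot saved earlier now drives a fresh target, which is why the IB devices and the 4420 listener come back without any further rpc.py traffic. Reduced to one line (config path made relative here):

    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json ./spdk_tgt_config.json &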
00:05:29.957 [2024-05-15 11:29:00.311236] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:05:29.957 [2024-05-15 11:29:00.311527] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:30.525 11:29:01 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:30.525 11:29:01 -- common/autotest_common.sh@860 -- # return 0 00:05:30.525 11:29:01 -- json_config/common.sh@26 -- # echo '' 00:05:30.525 00:05:30.525 11:29:01 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:30.525 11:29:01 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:30.525 INFO: Checking if target configuration is the same... 00:05:30.525 11:29:01 -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:30.525 11:29:01 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:30.525 11:29:01 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:30.525 + '[' 2 -ne 2 ']' 00:05:30.525 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:30.525 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:05:30.525 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:30.525 +++ basename /dev/fd/62 00:05:30.525 ++ mktemp /tmp/62.XXX 00:05:30.525 + tmp_file_1=/tmp/62.myr 00:05:30.525 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:30.525 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:30.525 + tmp_file_2=/tmp/spdk_tgt_config.json.7Cw 00:05:30.525 + ret=0 00:05:30.525 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:30.784 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:30.784 + diff -u /tmp/62.myr /tmp/spdk_tgt_config.json.7Cw 00:05:30.784 + echo 'INFO: JSON config files are the same' 00:05:30.784 INFO: JSON config files are the same 00:05:30.784 + rm /tmp/62.myr /tmp/spdk_tgt_config.json.7Cw 00:05:30.784 + exit 0 00:05:30.784 11:29:01 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:30.784 11:29:01 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:30.784 INFO: changing configuration and checking if this can be detected... 
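The equality check above is json_diff.sh at work: normalize both the live config and the saved file with config_filter.py -method sort, then plain diff -u. Roughly equivalent (config_filter.py reading stdin is inferred from the trace, so treat it as an assumption):

    RPC="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    FILTER=./test/json_config/config_filter.py

    $RPC save_config | $FILTER -method sort > /tmp/live.sorted
    $FILTER -method sort < spdk_tgt_config.json > /tmp/saved.sorted

    if diff -u /tmp/saved.sorted /tmp/live.sorted; then
        echo "INFO: JSON config files are the same"
    else
        echo "configs differ" >&2
        exit 1
    fi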
00:05:30.784 11:29:01 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:30.784 11:29:01 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:31.044 11:29:01 -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:31.044 11:29:01 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:31.044 11:29:01 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:31.044 + '[' 2 -ne 2 ']' 00:05:31.044 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:31.044 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:05:31.044 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:31.044 +++ basename /dev/fd/62 00:05:31.044 ++ mktemp /tmp/62.XXX 00:05:31.044 + tmp_file_1=/tmp/62.WmL 00:05:31.044 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:31.044 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:31.044 + tmp_file_2=/tmp/spdk_tgt_config.json.PWA 00:05:31.044 + ret=0 00:05:31.044 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:31.303 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:31.303 + diff -u /tmp/62.WmL /tmp/spdk_tgt_config.json.PWA 00:05:31.303 + ret=1 00:05:31.303 + echo '=== Start of file: /tmp/62.WmL ===' 00:05:31.303 + cat /tmp/62.WmL 00:05:31.303 + echo '=== End of file: /tmp/62.WmL ===' 00:05:31.303 + echo '' 00:05:31.303 + echo '=== Start of file: /tmp/spdk_tgt_config.json.PWA ===' 00:05:31.303 + cat /tmp/spdk_tgt_config.json.PWA 00:05:31.303 + echo '=== End of file: /tmp/spdk_tgt_config.json.PWA ===' 00:05:31.303 + echo '' 00:05:31.303 + rm /tmp/62.WmL /tmp/spdk_tgt_config.json.PWA 00:05:31.303 + exit 1 00:05:31.303 11:29:01 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:31.303 INFO: configuration change detected. 
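The second pass flips the expectation: after bdev_malloc_delete removes the sentinel, the same normalize-and-diff must fail (the ret=1 above), otherwise a real configuration change could slip through unnoticed. Continuing the previous sketch's naming:

    RPC="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    FILTER=./test/json_config/config_filter.py

    $RPC bdev_malloc_delete MallocBdevForConfigChangeCheck
    if $RPC save_config | $FILTER -method sort | diff -u /tmp/saved.sorted - >/dev/null; then
        echo "ERROR: configuration change was not detected" >&2
        exit 1
    fi
    echo "INFO: configuration change detected."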
00:05:31.303 11:29:01 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:31.303 11:29:01 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:31.303 11:29:01 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:31.303 11:29:01 -- common/autotest_common.sh@10 -- # set +x 00:05:31.303 11:29:01 -- json_config/json_config.sh@307 -- # local ret=0 00:05:31.303 11:29:01 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:31.303 11:29:01 -- json_config/json_config.sh@317 -- # [[ -n 2891335 ]] 00:05:31.303 11:29:01 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:31.303 11:29:01 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:31.303 11:29:01 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:31.303 11:29:01 -- common/autotest_common.sh@10 -- # set +x 00:05:31.303 11:29:01 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:31.303 11:29:01 -- json_config/json_config.sh@193 -- # uname -s 00:05:31.303 11:29:01 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:31.303 11:29:01 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:31.303 11:29:01 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:31.303 11:29:01 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:31.303 11:29:01 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:31.303 11:29:01 -- common/autotest_common.sh@10 -- # set +x 00:05:31.303 11:29:02 -- json_config/json_config.sh@323 -- # killprocess 2891335 00:05:31.303 11:29:02 -- common/autotest_common.sh@946 -- # '[' -z 2891335 ']' 00:05:31.303 11:29:02 -- common/autotest_common.sh@950 -- # kill -0 2891335 00:05:31.303 11:29:02 -- common/autotest_common.sh@951 -- # uname 00:05:31.303 11:29:02 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:31.303 11:29:02 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2891335 00:05:31.303 11:29:02 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:31.303 11:29:02 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:31.303 11:29:02 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2891335' 00:05:31.303 killing process with pid 2891335 00:05:31.303 11:29:02 -- common/autotest_common.sh@965 -- # kill 2891335 00:05:31.303 [2024-05-15 11:29:02.062126] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:05:31.303 11:29:02 -- common/autotest_common.sh@970 -- # wait 2891335 00:05:31.562 [2024-05-15 11:29:02.176583] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:05:39.690 11:29:09 -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:39.690 11:29:09 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:39.690 11:29:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:39.690 11:29:09 -- common/autotest_common.sh@10 -- # set +x 00:05:39.690 11:29:09 -- json_config/json_config.sh@328 -- # return 0 00:05:39.690 11:29:09 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:39.690 INFO: Success 00:05:39.690 11:29:09 -- json_config/json_config.sh@1 -- # nvmftestfini 00:05:39.691 11:29:09 -- nvmf/common.sh@477 -- # 
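killprocess, used for every target pid in this log, refuses to signal blindly. Condensed from the autotest_common.sh lines traced above (the sudo branch is simplified; the real helper presumably handles sudo-wrapped targets rather than bailing out):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                    # still running?
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for spdk_tgt
            [ "$name" = sudo ] && return 1            # simplified sudo guard
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                   # reap, as the trace shows
    }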
nvmfcleanup 00:05:39.691 11:29:09 -- nvmf/common.sh@117 -- # sync 00:05:39.691 11:29:09 -- nvmf/common.sh@119 -- # '[' '' == tcp ']' 00:05:39.691 11:29:09 -- nvmf/common.sh@119 -- # '[' '' == rdma ']' 00:05:39.691 11:29:09 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:05:39.691 11:29:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:05:39.691 11:29:09 -- nvmf/common.sh@484 -- # [[ '' == \t\c\p ]] 00:05:39.691 00:05:39.691 real 0m33.008s 00:05:39.691 user 0m35.495s 00:05:39.691 sys 0m6.956s 00:05:39.691 11:29:09 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:39.691 11:29:09 -- common/autotest_common.sh@10 -- # set +x 00:05:39.691 ************************************ 00:05:39.691 END TEST json_config 00:05:39.691 ************************************ 00:05:39.691 11:29:09 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:39.691 11:29:09 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:39.691 11:29:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:39.691 11:29:09 -- common/autotest_common.sh@10 -- # set +x 00:05:39.691 ************************************ 00:05:39.691 START TEST json_config_extra_key 00:05:39.691 ************************************ 00:05:39.691 11:29:09 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:39.691 11:29:09 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:39.691 11:29:09 -- nvmf/common.sh@7 -- # uname -s 00:05:39.691 11:29:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:39.691 11:29:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:39.691 11:29:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:39.691 11:29:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:39.691 11:29:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:39.691 11:29:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:39.691 11:29:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:39.691 11:29:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:39.691 11:29:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:39.691 11:29:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:39.691 11:29:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:05:39.691 11:29:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:05:39.691 11:29:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:39.691 11:29:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:39.691 11:29:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:39.691 11:29:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:39.691 11:29:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:39.691 11:29:09 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:39.691 11:29:09 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:39.691 11:29:09 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:39.691 11:29:09 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.691 11:29:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.691 11:29:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.691 11:29:09 -- paths/export.sh@5 -- # export PATH 00:05:39.691 11:29:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.691 11:29:09 -- nvmf/common.sh@47 -- # : 0 00:05:39.691 11:29:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:39.691 11:29:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:39.691 11:29:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:39.691 11:29:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:39.691 11:29:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:39.691 11:29:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:39.691 11:29:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:39.691 11:29:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:39.691 11:29:09 -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:05:39.691 11:29:09 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:39.691 11:29:09 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:39.691 11:29:09 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:39.691 11:29:09 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:39.691 11:29:09 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:39.691 11:29:09 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:39.691 11:29:09 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:39.691 11:29:09 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:39.691 11:29:09 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:39.691 
11:29:09 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:39.691 INFO: launching applications... 00:05:39.691 11:29:09 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:39.691 11:29:09 -- json_config/common.sh@9 -- # local app=target 00:05:39.691 11:29:09 -- json_config/common.sh@10 -- # shift 00:05:39.691 11:29:09 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:39.691 11:29:09 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:39.691 11:29:09 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:39.691 11:29:09 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:39.691 11:29:09 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:39.691 11:29:09 -- json_config/common.sh@22 -- # app_pid["$app"]=2893106 00:05:39.691 11:29:09 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:39.691 Waiting for target to run... 00:05:39.691 11:29:09 -- json_config/common.sh@25 -- # waitforlisten 2893106 /var/tmp/spdk_tgt.sock 00:05:39.691 11:29:09 -- common/autotest_common.sh@827 -- # '[' -z 2893106 ']' 00:05:39.691 11:29:09 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:39.691 11:29:09 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:39.691 11:29:09 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:39.691 11:29:09 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:39.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:39.691 11:29:09 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:39.691 11:29:09 -- common/autotest_common.sh@10 -- # set +x 00:05:39.691 [2024-05-15 11:29:09.533377] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:05:39.691 [2024-05-15 11:29:09.533443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2893106 ] 00:05:39.691 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.691 [2024-05-15 11:29:09.841747] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.691 [2024-05-15 11:29:09.917867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.691 11:29:10 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:39.691 11:29:10 -- common/autotest_common.sh@860 -- # return 0 00:05:39.691 11:29:10 -- json_config/common.sh@26 -- # echo '' 00:05:39.691 00:05:39.691 11:29:10 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:39.691 INFO: shutting down applications... 
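json_config_extra_key skips --wait-for-rpc entirely: the target boots straight from test/json_config/extra_key.json, and success is simply a clean start and shutdown. The file's contents never appear in this log; purely as an illustration, a config in the shape save_config emits (field names assumed from that format, not from extra_key.json) could look like:

    config='{
      "subsystems": [ {
        "subsystem": "bdev",
        "config": [ {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 16384, "block_size": 512 }
        } ]
      } ]
    }'
    printf '%s\n' "$config" > /tmp/min_config.json
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /tmp/min_config.json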
00:05:39.691 11:29:10 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:39.691 11:29:10 -- json_config/common.sh@31 -- # local app=target 00:05:39.691 11:29:10 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:39.691 11:29:10 -- json_config/common.sh@35 -- # [[ -n 2893106 ]] 00:05:39.691 11:29:10 -- json_config/common.sh@38 -- # kill -SIGINT 2893106 00:05:39.691 11:29:10 -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:39.691 11:29:10 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:39.691 11:29:10 -- json_config/common.sh@41 -- # kill -0 2893106 00:05:39.692 11:29:10 -- json_config/common.sh@45 -- # sleep 0.5 00:05:40.261 11:29:10 -- json_config/common.sh@40 -- # (( i++ )) 00:05:40.261 11:29:10 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:40.261 11:29:10 -- json_config/common.sh@41 -- # kill -0 2893106 00:05:40.261 11:29:10 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:40.261 11:29:10 -- json_config/common.sh@43 -- # break 00:05:40.261 11:29:10 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:40.261 11:29:10 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:40.261 SPDK target shutdown done 00:05:40.261 11:29:10 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:40.261 Success 00:05:40.261 00:05:40.261 real 0m1.453s 00:05:40.261 user 0m1.251s 00:05:40.261 sys 0m0.417s 00:05:40.261 11:29:10 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.261 11:29:10 -- common/autotest_common.sh@10 -- # set +x 00:05:40.261 ************************************ 00:05:40.261 END TEST json_config_extra_key 00:05:40.261 ************************************ 00:05:40.261 11:29:10 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:40.261 11:29:10 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:40.261 11:29:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.261 11:29:10 -- common/autotest_common.sh@10 -- # set +x 00:05:40.261 ************************************ 00:05:40.261 START TEST alias_rpc 00:05:40.261 ************************************ 00:05:40.261 11:29:10 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:40.261 * Looking for test storage... 00:05:40.520 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:05:40.520 11:29:11 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:40.520 11:29:11 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2893336 00:05:40.520 11:29:11 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2893336 00:05:40.520 11:29:11 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.520 11:29:11 -- common/autotest_common.sh@827 -- # '[' -z 2893336 ']' 00:05:40.520 11:29:11 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.520 11:29:11 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:40.520 11:29:11 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
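The shutdown sequence that just closed the json_config test pairs kill -SIGINT with a bounded poll: up to 30 checks of kill -0, half a second apart, before declaring the target down. Reduced to its skeleton (error handling trimmed; the loop bounds match the i < 30 and sleep 0.5 visible above):

    json_config_test_shutdown_app() {
        local app=$1 i
        # SIGINT asks spdk_tgt for a clean exit; kill -0 sends no signal at
        # all, it only tests whether the pid still exists.
        kill -SIGINT "${app_pid[$app]}"
        for ((i = 0; i < 30; i++)); do
            kill -0 "${app_pid[$app]}" 2> /dev/null || break
            sleep 0.5
        done
        app_pid[$app]=
    }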
00:05:40.520 11:29:11 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:40.520 11:29:11 -- common/autotest_common.sh@10 -- # set +x 00:05:40.520 [2024-05-15 11:29:11.077822] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:05:40.520 [2024-05-15 11:29:11.077886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2893336 ] 00:05:40.520 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.520 [2024-05-15 11:29:11.150750] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.520 [2024-05-15 11:29:11.241127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.458 11:29:11 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:41.458 11:29:11 -- common/autotest_common.sh@860 -- # return 0 00:05:41.458 11:29:11 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:41.458 11:29:12 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2893336 00:05:41.458 11:29:12 -- common/autotest_common.sh@946 -- # '[' -z 2893336 ']' 00:05:41.458 11:29:12 -- common/autotest_common.sh@950 -- # kill -0 2893336 00:05:41.458 11:29:12 -- common/autotest_common.sh@951 -- # uname 00:05:41.458 11:29:12 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:41.458 11:29:12 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2893336 00:05:41.458 11:29:12 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:41.458 11:29:12 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:41.458 11:29:12 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2893336' 00:05:41.458 killing process with pid 2893336 00:05:41.458 11:29:12 -- common/autotest_common.sh@965 -- # kill 2893336 00:05:41.458 11:29:12 -- common/autotest_common.sh@970 -- # wait 2893336 00:05:42.026 00:05:42.026 real 0m1.582s 00:05:42.026 user 0m1.672s 00:05:42.026 sys 0m0.457s 00:05:42.026 11:29:12 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:42.026 11:29:12 -- common/autotest_common.sh@10 -- # set +x 00:05:42.026 ************************************ 00:05:42.026 END TEST alias_rpc 00:05:42.026 ************************************ 00:05:42.026 11:29:12 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:05:42.026 11:29:12 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:42.026 11:29:12 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:42.026 11:29:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:42.026 11:29:12 -- common/autotest_common.sh@10 -- # set +x 00:05:42.026 ************************************ 00:05:42.026 START TEST spdkcli_tcp 00:05:42.026 ************************************ 00:05:42.026 11:29:12 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:42.026 * Looking for test storage... 
00:05:42.026 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:05:42.026 11:29:12 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:05:42.026 11:29:12 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:42.026 11:29:12 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:05:42.026 11:29:12 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:42.026 11:29:12 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:42.026 11:29:12 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:42.026 11:29:12 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:42.026 11:29:12 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:42.026 11:29:12 -- common/autotest_common.sh@10 -- # set +x 00:05:42.026 11:29:12 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2893705 00:05:42.026 11:29:12 -- spdkcli/tcp.sh@27 -- # waitforlisten 2893705 00:05:42.027 11:29:12 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:42.027 11:29:12 -- common/autotest_common.sh@827 -- # '[' -z 2893705 ']' 00:05:42.027 11:29:12 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.027 11:29:12 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:42.027 11:29:12 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.027 11:29:12 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:42.027 11:29:12 -- common/autotest_common.sh@10 -- # set +x 00:05:42.027 [2024-05-15 11:29:12.731205] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
00:05:42.027 [2024-05-15 11:29:12.731272] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2893705 ] 00:05:42.027 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.285 [2024-05-15 11:29:12.804349] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.285 [2024-05-15 11:29:12.896255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.285 [2024-05-15 11:29:12.896257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.853 11:29:13 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:42.853 11:29:13 -- common/autotest_common.sh@860 -- # return 0 00:05:42.853 11:29:13 -- spdkcli/tcp.sh@31 -- # socat_pid=2893761 00:05:42.853 11:29:13 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:42.853 11:29:13 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:43.112 [ 00:05:43.112 "bdev_malloc_delete", 00:05:43.112 "bdev_malloc_create", 00:05:43.112 "bdev_null_resize", 00:05:43.112 "bdev_null_delete", 00:05:43.112 "bdev_null_create", 00:05:43.112 "bdev_nvme_cuse_unregister", 00:05:43.112 "bdev_nvme_cuse_register", 00:05:43.112 "bdev_opal_new_user", 00:05:43.112 "bdev_opal_set_lock_state", 00:05:43.112 "bdev_opal_delete", 00:05:43.112 "bdev_opal_get_info", 00:05:43.112 "bdev_opal_create", 00:05:43.112 "bdev_nvme_opal_revert", 00:05:43.112 "bdev_nvme_opal_init", 00:05:43.112 "bdev_nvme_send_cmd", 00:05:43.112 "bdev_nvme_get_path_iostat", 00:05:43.112 "bdev_nvme_get_mdns_discovery_info", 00:05:43.112 "bdev_nvme_stop_mdns_discovery", 00:05:43.112 "bdev_nvme_start_mdns_discovery", 00:05:43.112 "bdev_nvme_set_multipath_policy", 00:05:43.112 "bdev_nvme_set_preferred_path", 00:05:43.112 "bdev_nvme_get_io_paths", 00:05:43.112 "bdev_nvme_remove_error_injection", 00:05:43.112 "bdev_nvme_add_error_injection", 00:05:43.112 "bdev_nvme_get_discovery_info", 00:05:43.112 "bdev_nvme_stop_discovery", 00:05:43.112 "bdev_nvme_start_discovery", 00:05:43.112 "bdev_nvme_get_controller_health_info", 00:05:43.112 "bdev_nvme_disable_controller", 00:05:43.112 "bdev_nvme_enable_controller", 00:05:43.112 "bdev_nvme_reset_controller", 00:05:43.112 "bdev_nvme_get_transport_statistics", 00:05:43.112 "bdev_nvme_apply_firmware", 00:05:43.112 "bdev_nvme_detach_controller", 00:05:43.112 "bdev_nvme_get_controllers", 00:05:43.112 "bdev_nvme_attach_controller", 00:05:43.112 "bdev_nvme_set_hotplug", 00:05:43.112 "bdev_nvme_set_options", 00:05:43.112 "bdev_passthru_delete", 00:05:43.112 "bdev_passthru_create", 00:05:43.112 "bdev_lvol_grow_lvstore", 00:05:43.112 "bdev_lvol_get_lvols", 00:05:43.112 "bdev_lvol_get_lvstores", 00:05:43.112 "bdev_lvol_delete", 00:05:43.112 "bdev_lvol_set_read_only", 00:05:43.112 "bdev_lvol_resize", 00:05:43.112 "bdev_lvol_decouple_parent", 00:05:43.112 "bdev_lvol_inflate", 00:05:43.112 "bdev_lvol_rename", 00:05:43.112 "bdev_lvol_clone_bdev", 00:05:43.112 "bdev_lvol_clone", 00:05:43.112 "bdev_lvol_snapshot", 00:05:43.112 "bdev_lvol_create", 00:05:43.112 "bdev_lvol_delete_lvstore", 00:05:43.112 "bdev_lvol_rename_lvstore", 00:05:43.112 "bdev_lvol_create_lvstore", 00:05:43.112 "bdev_raid_set_options", 00:05:43.112 "bdev_raid_remove_base_bdev", 00:05:43.112 "bdev_raid_add_base_bdev", 00:05:43.112 "bdev_raid_delete", 00:05:43.112 "bdev_raid_create", 
00:05:43.112 "bdev_raid_get_bdevs", 00:05:43.112 "bdev_error_inject_error", 00:05:43.112 "bdev_error_delete", 00:05:43.112 "bdev_error_create", 00:05:43.112 "bdev_split_delete", 00:05:43.112 "bdev_split_create", 00:05:43.112 "bdev_delay_delete", 00:05:43.112 "bdev_delay_create", 00:05:43.112 "bdev_delay_update_latency", 00:05:43.112 "bdev_zone_block_delete", 00:05:43.112 "bdev_zone_block_create", 00:05:43.112 "blobfs_create", 00:05:43.113 "blobfs_detect", 00:05:43.113 "blobfs_set_cache_size", 00:05:43.113 "bdev_aio_delete", 00:05:43.113 "bdev_aio_rescan", 00:05:43.113 "bdev_aio_create", 00:05:43.113 "bdev_ftl_set_property", 00:05:43.113 "bdev_ftl_get_properties", 00:05:43.113 "bdev_ftl_get_stats", 00:05:43.113 "bdev_ftl_unmap", 00:05:43.113 "bdev_ftl_unload", 00:05:43.113 "bdev_ftl_delete", 00:05:43.113 "bdev_ftl_load", 00:05:43.113 "bdev_ftl_create", 00:05:43.113 "bdev_virtio_attach_controller", 00:05:43.113 "bdev_virtio_scsi_get_devices", 00:05:43.113 "bdev_virtio_detach_controller", 00:05:43.113 "bdev_virtio_blk_set_hotplug", 00:05:43.113 "bdev_iscsi_delete", 00:05:43.113 "bdev_iscsi_create", 00:05:43.113 "bdev_iscsi_set_options", 00:05:43.113 "accel_error_inject_error", 00:05:43.113 "ioat_scan_accel_module", 00:05:43.113 "dsa_scan_accel_module", 00:05:43.113 "iaa_scan_accel_module", 00:05:43.113 "keyring_file_remove_key", 00:05:43.113 "keyring_file_add_key", 00:05:43.113 "iscsi_get_histogram", 00:05:43.113 "iscsi_enable_histogram", 00:05:43.113 "iscsi_set_options", 00:05:43.113 "iscsi_get_auth_groups", 00:05:43.113 "iscsi_auth_group_remove_secret", 00:05:43.113 "iscsi_auth_group_add_secret", 00:05:43.113 "iscsi_delete_auth_group", 00:05:43.113 "iscsi_create_auth_group", 00:05:43.113 "iscsi_set_discovery_auth", 00:05:43.113 "iscsi_get_options", 00:05:43.113 "iscsi_target_node_request_logout", 00:05:43.113 "iscsi_target_node_set_redirect", 00:05:43.113 "iscsi_target_node_set_auth", 00:05:43.113 "iscsi_target_node_add_lun", 00:05:43.113 "iscsi_get_stats", 00:05:43.113 "iscsi_get_connections", 00:05:43.113 "iscsi_portal_group_set_auth", 00:05:43.113 "iscsi_start_portal_group", 00:05:43.113 "iscsi_delete_portal_group", 00:05:43.113 "iscsi_create_portal_group", 00:05:43.113 "iscsi_get_portal_groups", 00:05:43.113 "iscsi_delete_target_node", 00:05:43.113 "iscsi_target_node_remove_pg_ig_maps", 00:05:43.113 "iscsi_target_node_add_pg_ig_maps", 00:05:43.113 "iscsi_create_target_node", 00:05:43.113 "iscsi_get_target_nodes", 00:05:43.113 "iscsi_delete_initiator_group", 00:05:43.113 "iscsi_initiator_group_remove_initiators", 00:05:43.113 "iscsi_initiator_group_add_initiators", 00:05:43.113 "iscsi_create_initiator_group", 00:05:43.113 "iscsi_get_initiator_groups", 00:05:43.113 "nvmf_set_crdt", 00:05:43.113 "nvmf_set_config", 00:05:43.113 "nvmf_set_max_subsystems", 00:05:43.113 "nvmf_subsystem_get_listeners", 00:05:43.113 "nvmf_subsystem_get_qpairs", 00:05:43.113 "nvmf_subsystem_get_controllers", 00:05:43.113 "nvmf_get_stats", 00:05:43.113 "nvmf_get_transports", 00:05:43.113 "nvmf_create_transport", 00:05:43.113 "nvmf_get_targets", 00:05:43.113 "nvmf_delete_target", 00:05:43.113 "nvmf_create_target", 00:05:43.113 "nvmf_subsystem_allow_any_host", 00:05:43.113 "nvmf_subsystem_remove_host", 00:05:43.113 "nvmf_subsystem_add_host", 00:05:43.113 "nvmf_ns_remove_host", 00:05:43.113 "nvmf_ns_add_host", 00:05:43.113 "nvmf_subsystem_remove_ns", 00:05:43.113 "nvmf_subsystem_add_ns", 00:05:43.113 "nvmf_subsystem_listener_set_ana_state", 00:05:43.113 "nvmf_discovery_get_referrals", 00:05:43.113 
"nvmf_discovery_remove_referral", 00:05:43.113 "nvmf_discovery_add_referral", 00:05:43.113 "nvmf_subsystem_remove_listener", 00:05:43.113 "nvmf_subsystem_add_listener", 00:05:43.113 "nvmf_delete_subsystem", 00:05:43.113 "nvmf_create_subsystem", 00:05:43.113 "nvmf_get_subsystems", 00:05:43.113 "env_dpdk_get_mem_stats", 00:05:43.113 "nbd_get_disks", 00:05:43.113 "nbd_stop_disk", 00:05:43.113 "nbd_start_disk", 00:05:43.113 "ublk_recover_disk", 00:05:43.113 "ublk_get_disks", 00:05:43.113 "ublk_stop_disk", 00:05:43.113 "ublk_start_disk", 00:05:43.113 "ublk_destroy_target", 00:05:43.113 "ublk_create_target", 00:05:43.113 "virtio_blk_create_transport", 00:05:43.113 "virtio_blk_get_transports", 00:05:43.113 "vhost_controller_set_coalescing", 00:05:43.113 "vhost_get_controllers", 00:05:43.113 "vhost_delete_controller", 00:05:43.113 "vhost_create_blk_controller", 00:05:43.113 "vhost_scsi_controller_remove_target", 00:05:43.113 "vhost_scsi_controller_add_target", 00:05:43.113 "vhost_start_scsi_controller", 00:05:43.113 "vhost_create_scsi_controller", 00:05:43.113 "thread_set_cpumask", 00:05:43.113 "framework_get_scheduler", 00:05:43.113 "framework_set_scheduler", 00:05:43.113 "framework_get_reactors", 00:05:43.113 "thread_get_io_channels", 00:05:43.113 "thread_get_pollers", 00:05:43.113 "thread_get_stats", 00:05:43.113 "framework_monitor_context_switch", 00:05:43.113 "spdk_kill_instance", 00:05:43.113 "log_enable_timestamps", 00:05:43.113 "log_get_flags", 00:05:43.113 "log_clear_flag", 00:05:43.113 "log_set_flag", 00:05:43.113 "log_get_level", 00:05:43.113 "log_set_level", 00:05:43.113 "log_get_print_level", 00:05:43.113 "log_set_print_level", 00:05:43.113 "framework_enable_cpumask_locks", 00:05:43.113 "framework_disable_cpumask_locks", 00:05:43.113 "framework_wait_init", 00:05:43.113 "framework_start_init", 00:05:43.113 "scsi_get_devices", 00:05:43.113 "bdev_get_histogram", 00:05:43.113 "bdev_enable_histogram", 00:05:43.113 "bdev_set_qos_limit", 00:05:43.113 "bdev_set_qd_sampling_period", 00:05:43.113 "bdev_get_bdevs", 00:05:43.113 "bdev_reset_iostat", 00:05:43.113 "bdev_get_iostat", 00:05:43.113 "bdev_examine", 00:05:43.113 "bdev_wait_for_examine", 00:05:43.113 "bdev_set_options", 00:05:43.113 "notify_get_notifications", 00:05:43.113 "notify_get_types", 00:05:43.113 "accel_get_stats", 00:05:43.113 "accel_set_options", 00:05:43.113 "accel_set_driver", 00:05:43.113 "accel_crypto_key_destroy", 00:05:43.113 "accel_crypto_keys_get", 00:05:43.113 "accel_crypto_key_create", 00:05:43.113 "accel_assign_opc", 00:05:43.113 "accel_get_module_info", 00:05:43.113 "accel_get_opc_assignments", 00:05:43.113 "vmd_rescan", 00:05:43.113 "vmd_remove_device", 00:05:43.113 "vmd_enable", 00:05:43.113 "sock_get_default_impl", 00:05:43.113 "sock_set_default_impl", 00:05:43.113 "sock_impl_set_options", 00:05:43.113 "sock_impl_get_options", 00:05:43.113 "iobuf_get_stats", 00:05:43.113 "iobuf_set_options", 00:05:43.113 "framework_get_pci_devices", 00:05:43.113 "framework_get_config", 00:05:43.113 "framework_get_subsystems", 00:05:43.113 "trace_get_info", 00:05:43.113 "trace_get_tpoint_group_mask", 00:05:43.113 "trace_disable_tpoint_group", 00:05:43.113 "trace_enable_tpoint_group", 00:05:43.113 "trace_clear_tpoint_mask", 00:05:43.113 "trace_set_tpoint_mask", 00:05:43.113 "keyring_get_keys", 00:05:43.113 "spdk_get_version", 00:05:43.113 "rpc_get_methods" 00:05:43.113 ] 00:05:43.113 11:29:13 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:43.113 11:29:13 -- common/autotest_common.sh@726 -- # xtrace_disable 
00:05:43.113 11:29:13 -- common/autotest_common.sh@10 -- # set +x 00:05:43.113 11:29:13 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:43.113 11:29:13 -- spdkcli/tcp.sh@38 -- # killprocess 2893705 00:05:43.113 11:29:13 -- common/autotest_common.sh@946 -- # '[' -z 2893705 ']' 00:05:43.113 11:29:13 -- common/autotest_common.sh@950 -- # kill -0 2893705 00:05:43.113 11:29:13 -- common/autotest_common.sh@951 -- # uname 00:05:43.113 11:29:13 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:43.113 11:29:13 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2893705 00:05:43.113 11:29:13 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:43.113 11:29:13 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:43.113 11:29:13 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2893705' 00:05:43.113 killing process with pid 2893705 00:05:43.113 11:29:13 -- common/autotest_common.sh@965 -- # kill 2893705 00:05:43.113 11:29:13 -- common/autotest_common.sh@970 -- # wait 2893705 00:05:43.682 00:05:43.682 real 0m1.599s 00:05:43.682 user 0m2.857s 00:05:43.682 sys 0m0.522s 00:05:43.682 11:29:14 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:43.682 11:29:14 -- common/autotest_common.sh@10 -- # set +x 00:05:43.682 ************************************ 00:05:43.682 END TEST spdkcli_tcp 00:05:43.682 ************************************ 00:05:43.682 11:29:14 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:43.682 11:29:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:43.682 11:29:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:43.682 11:29:14 -- common/autotest_common.sh@10 -- # set +x 00:05:43.682 ************************************ 00:05:43.682 START TEST dpdk_mem_utility 00:05:43.682 ************************************ 00:05:43.682 11:29:14 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:43.682 * Looking for test storage... 00:05:43.682 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:05:43.682 11:29:14 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:43.682 11:29:14 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2894004 00:05:43.682 11:29:14 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2894004 00:05:43.682 11:29:14 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.682 11:29:14 -- common/autotest_common.sh@827 -- # '[' -z 2894004 ']' 00:05:43.682 11:29:14 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.682 11:29:14 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:43.682 11:29:14 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.682 11:29:14 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:43.682 11:29:14 -- common/autotest_common.sh@10 -- # set +x 00:05:43.682 [2024-05-15 11:29:14.426089] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
00:05:43.682 [2024-05-15 11:29:14.426156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2894004 ] 00:05:43.940 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.940 [2024-05-15 11:29:14.496314] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.940 [2024-05-15 11:29:14.587754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.508 11:29:15 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:44.508 11:29:15 -- common/autotest_common.sh@860 -- # return 0 00:05:44.508 11:29:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:44.508 11:29:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:44.508 11:29:15 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.508 11:29:15 -- common/autotest_common.sh@10 -- # set +x 00:05:44.508 { 00:05:44.508 "filename": "/tmp/spdk_mem_dump.txt" 00:05:44.508 } 00:05:44.508 11:29:15 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.508 11:29:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:44.768 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:44.768 1 heaps totaling size 814.000000 MiB 00:05:44.768 size: 814.000000 MiB heap id: 0 00:05:44.768 end heaps---------- 00:05:44.768 8 mempools totaling size 598.116089 MiB 00:05:44.768 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:44.768 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:44.768 size: 84.521057 MiB name: bdev_io_2894004 00:05:44.768 size: 51.011292 MiB name: evtpool_2894004 00:05:44.768 size: 50.003479 MiB name: msgpool_2894004 00:05:44.768 size: 21.763794 MiB name: PDU_Pool 00:05:44.768 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:44.768 size: 0.026123 MiB name: Session_Pool 00:05:44.768 end mempools------- 00:05:44.768 6 memzones totaling size 4.142822 MiB 00:05:44.768 size: 1.000366 MiB name: RG_ring_0_2894004 00:05:44.768 size: 1.000366 MiB name: RG_ring_1_2894004 00:05:44.768 size: 1.000366 MiB name: RG_ring_4_2894004 00:05:44.768 size: 1.000366 MiB name: RG_ring_5_2894004 00:05:44.768 size: 0.125366 MiB name: RG_ring_2_2894004 00:05:44.768 size: 0.015991 MiB name: RG_ring_3_2894004 00:05:44.768 end memzones------- 00:05:44.768 11:29:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:44.768 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:44.768 list of free elements. 
size: 12.519348 MiB 00:05:44.768 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:44.768 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:44.768 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:44.768 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:44.768 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:44.768 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:44.768 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:44.768 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:44.768 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:44.768 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:44.768 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:44.768 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:44.768 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:44.768 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:44.768 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:44.768 list of standard malloc elements. size: 199.218079 MiB 00:05:44.768 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:44.768 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:44.768 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:44.768 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:44.768 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:44.768 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:44.768 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:44.768 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:44.768 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:44.768 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:44.768 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:44.768 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:44.768 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:44.768 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:44.768 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:44.768 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:44.768 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:44.768 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:44.768 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:44.768 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:44.768 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:44.768 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:44.768 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:44.768 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:44.768 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:44.768 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:44.768 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:44.768 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:44.768 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:44.768 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:44.768 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:44.768 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:44.768 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:44.768 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:44.768 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:44.768 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:44.768 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:44.768 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:44.768 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:44.768 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:44.768 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:44.768 list of memzone associated elements. size: 602.262573 MiB 00:05:44.768 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:44.768 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:44.768 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:44.768 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:44.768 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:44.768 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2894004_0 00:05:44.768 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:44.768 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2894004_0 00:05:44.768 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:44.768 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2894004_0 00:05:44.768 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:44.768 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:44.768 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:44.768 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:44.768 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:44.768 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2894004 00:05:44.768 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:44.768 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2894004 00:05:44.768 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:44.768 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2894004 00:05:44.768 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:44.768 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:44.768 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:44.768 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:44.768 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:44.768 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:44.768 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:44.768 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:44.768 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:44.768 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2894004 00:05:44.768 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:44.768 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2894004 00:05:44.769 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:44.769 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2894004 00:05:44.769 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:44.769 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2894004 00:05:44.769 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:44.769 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2894004 00:05:44.769 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:44.769 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:44.769 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:44.769 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:44.769 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:44.769 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:44.769 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:44.769 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2894004 00:05:44.769 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:44.769 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:44.769 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:44.769 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:44.769 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:44.769 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2894004 00:05:44.769 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:44.769 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:44.769 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:44.769 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2894004 00:05:44.769 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:44.769 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2894004 00:05:44.769 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:44.769 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:44.769 11:29:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:44.769 11:29:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2894004 00:05:44.769 11:29:15 -- common/autotest_common.sh@946 -- # '[' -z 2894004 ']' 00:05:44.769 11:29:15 -- common/autotest_common.sh@950 -- # kill -0 2894004 00:05:44.769 11:29:15 -- common/autotest_common.sh@951 -- # uname 00:05:44.769 11:29:15 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:44.769 11:29:15 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2894004 00:05:44.769 11:29:15 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:44.769 11:29:15 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:44.769 11:29:15 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2894004' 00:05:44.769 killing process with pid 2894004 00:05:44.769 11:29:15 -- common/autotest_common.sh@965 -- # kill 2894004 00:05:44.769 11:29:15 -- common/autotest_common.sh@970 -- # wait 2894004 00:05:45.028 00:05:45.028 real 0m1.496s 00:05:45.028 user 0m1.488s 00:05:45.028 sys 0m0.490s 00:05:45.028 11:29:15 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:45.028 11:29:15 -- common/autotest_common.sh@10 -- # set +x 00:05:45.028 ************************************ 00:05:45.028 END TEST dpdk_mem_utility 00:05:45.028 ************************************ 00:05:45.287 11:29:15 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:45.287 11:29:15 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:45.287 11:29:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:45.287 11:29:15 -- common/autotest_common.sh@10 -- # set +x 00:05:45.287 
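The memory walk above decomposes into three steps: env_dpdk_get_mem_stats makes the target dump its DPDK memory state to /tmp/spdk_mem_dump.txt (the filename echoed back in the JSON reply), dpdk_mem_info.py summarizes heaps, mempools, and memzones from that dump, and a second pass with -m 0 expands heap 0 into the free-element, malloc-element, and memzone lists printed above. As plain commands (default RPC socket assumed):

    scripts/rpc.py env_dpdk_get_mem_stats    # target writes /tmp/spdk_mem_dump.txt
    scripts/dpdk_mem_info.py                 # heap / mempool / memzone summary
    scripts/dpdk_mem_info.py -m 0            # element-level detail for heap id 0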
************************************ 00:05:45.287 START TEST event 00:05:45.287 ************************************ 00:05:45.287 11:29:15 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:45.287 * Looking for test storage... 00:05:45.287 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:05:45.287 11:29:15 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:45.287 11:29:15 -- bdev/nbd_common.sh@6 -- # set -e 00:05:45.287 11:29:15 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:45.287 11:29:15 -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:45.287 11:29:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:45.287 11:29:15 -- common/autotest_common.sh@10 -- # set +x 00:05:45.287 ************************************ 00:05:45.287 START TEST event_perf 00:05:45.287 ************************************ 00:05:45.287 11:29:15 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:45.287 Running I/O for 1 seconds...[2024-05-15 11:29:16.020014] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:05:45.287 [2024-05-15 11:29:16.020113] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2894246 ] 00:05:45.546 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.546 [2024-05-15 11:29:16.094848] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:45.546 [2024-05-15 11:29:16.180686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.546 [2024-05-15 11:29:16.180773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:45.546 [2024-05-15 11:29:16.180853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:45.546 [2024-05-15 11:29:16.180854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.923 Running I/O for 1 seconds... 00:05:46.923 lcore 0: 207056 00:05:46.923 lcore 1: 207054 00:05:46.923 lcore 2: 207055 00:05:46.923 lcore 3: 207055 00:05:46.923 done. 00:05:46.923 00:05:46.923 real 0m1.282s 00:05:46.923 user 0m4.182s 00:05:46.923 sys 0m0.095s 00:05:46.923 11:29:17 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:46.923 11:29:17 -- common/autotest_common.sh@10 -- # set +x 00:05:46.923 ************************************ 00:05:46.923 END TEST event_perf 00:05:46.923 ************************************ 00:05:46.923 11:29:17 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:46.923 11:29:17 -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:46.923 11:29:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:46.923 11:29:17 -- common/autotest_common.sh@10 -- # set +x 00:05:46.923 ************************************ 00:05:46.923 START TEST event_reactor 00:05:46.923 ************************************ 00:05:46.923 11:29:17 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:46.923 [2024-05-15 11:29:17.391297] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
00:05:46.923 [2024-05-15 11:29:17.391372] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2894456 ] 00:05:46.923 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.923 [2024-05-15 11:29:17.465372] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.923 [2024-05-15 11:29:17.550516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.301 test_start 00:05:48.301 oneshot 00:05:48.301 tick 100 00:05:48.301 tick 100 00:05:48.301 tick 250 00:05:48.301 tick 100 00:05:48.301 tick 100 00:05:48.301 tick 100 00:05:48.301 tick 250 00:05:48.301 tick 500 00:05:48.301 tick 100 00:05:48.301 tick 100 00:05:48.301 tick 250 00:05:48.301 tick 100 00:05:48.301 tick 100 00:05:48.301 test_end 00:05:48.301 00:05:48.301 real 0m1.276s 00:05:48.301 user 0m1.169s 00:05:48.301 sys 0m0.102s 00:05:48.301 11:29:18 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:48.301 11:29:18 -- common/autotest_common.sh@10 -- # set +x 00:05:48.301 ************************************ 00:05:48.301 END TEST event_reactor 00:05:48.301 ************************************ 00:05:48.301 11:29:18 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:48.301 11:29:18 -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:48.301 11:29:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:48.301 11:29:18 -- common/autotest_common.sh@10 -- # set +x 00:05:48.301 ************************************ 00:05:48.301 START TEST event_reactor_perf 00:05:48.301 ************************************ 00:05:48.301 11:29:18 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:48.301 [2024-05-15 11:29:18.733340] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
00:05:48.301 [2024-05-15 11:29:18.733384] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2894656 ] 00:05:48.301 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.301 [2024-05-15 11:29:18.801823] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.301 [2024-05-15 11:29:18.887802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.239 test_start 00:05:49.239 test_end 00:05:49.239 Performance: 509487 events per second 00:05:49.239 00:05:49.239 real 0m1.261s 00:05:49.239 user 0m1.182s 00:05:49.239 sys 0m0.074s 00:05:49.239 11:29:19 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:49.239 11:29:19 -- common/autotest_common.sh@10 -- # set +x 00:05:49.239 ************************************ 00:05:49.239 END TEST event_reactor_perf 00:05:49.239 ************************************ 00:05:49.499 11:29:20 -- event/event.sh@49 -- # uname -s 00:05:49.499 11:29:20 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:49.499 11:29:20 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:49.499 11:29:20 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:49.499 11:29:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:49.499 11:29:20 -- common/autotest_common.sh@10 -- # set +x 00:05:49.499 ************************************ 00:05:49.499 START TEST event_scheduler 00:05:49.500 ************************************ 00:05:49.500 11:29:20 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:49.500 * Looking for test storage... 00:05:49.500 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:05:49.500 11:29:20 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:49.500 11:29:20 -- scheduler/scheduler.sh@35 -- # scheduler_pid=2894883 00:05:49.500 11:29:20 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:49.500 11:29:20 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.500 11:29:20 -- scheduler/scheduler.sh@37 -- # waitforlisten 2894883 00:05:49.500 11:29:20 -- common/autotest_common.sh@827 -- # '[' -z 2894883 ']' 00:05:49.500 11:29:20 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.500 11:29:20 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:49.500 11:29:20 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.500 11:29:20 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:49.500 11:29:20 -- common/autotest_common.sh@10 -- # set +x 00:05:49.500 [2024-05-15 11:29:20.223022] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
00:05:49.500 [2024-05-15 11:29:20.223085] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2894883 ] 00:05:49.500 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.758 [2024-05-15 11:29:20.293589] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:49.758 [2024-05-15 11:29:20.387080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.758 [2024-05-15 11:29:20.387121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.758 [2024-05-15 11:29:20.387200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:49.758 [2024-05-15 11:29:20.387201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.326 11:29:21 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:50.326 11:29:21 -- common/autotest_common.sh@860 -- # return 0 00:05:50.326 11:29:21 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:50.326 11:29:21 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.326 11:29:21 -- common/autotest_common.sh@10 -- # set +x 00:05:50.326 POWER: Env isn't set yet! 00:05:50.326 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:50.326 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:50.326 POWER: Cannot set governor of lcore 0 to userspace 00:05:50.326 POWER: Attempting to initialise PSTAT power management... 00:05:50.326 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:50.326 POWER: Initialized successfully for lcore 0 power management 00:05:50.326 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:50.326 POWER: Initialized successfully for lcore 1 power management 00:05:50.326 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:50.326 POWER: Initialized successfully for lcore 2 power management 00:05:50.326 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:50.326 POWER: Initialized successfully for lcore 3 power management 00:05:50.326 11:29:21 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.326 11:29:21 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:50.327 11:29:21 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.327 11:29:21 -- common/autotest_common.sh@10 -- # set +x 00:05:50.586 [2024-05-15 11:29:21.157068] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
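The POWER lines above are the visible side effect of the dynamic scheduler: before framework_start_init, each lcore's cpufreq governor is switched to 'performance' (the first userspace attempt on lcore 0 fails, then the PSTAT path succeeds), and the governors are flipped back to 'powersave' when the app exits. The RPC sequence behind it, in the test's own rpc_cmd idiom (framework_get_scheduler is not run here, but it appears in the rpc_get_methods list earlier and would confirm the switch):

    rpc_cmd framework_set_scheduler dynamic   # triggers the governor changes above
    rpc_cmd framework_start_init              # reactors start under the new scheduler
    rpc_cmd framework_get_scheduler           # optional readback of the active scheduler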
00:05:50.586 11:29:21 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.586 11:29:21 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:50.586 11:29:21 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:50.586 11:29:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:50.586 11:29:21 -- common/autotest_common.sh@10 -- # set +x 00:05:50.586 ************************************ 00:05:50.586 START TEST scheduler_create_thread 00:05:50.586 ************************************ 00:05:50.586 11:29:21 -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:05:50.586 11:29:21 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:50.586 11:29:21 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.586 11:29:21 -- common/autotest_common.sh@10 -- # set +x 00:05:50.586 2 00:05:50.586 11:29:21 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.586 11:29:21 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:50.586 11:29:21 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.586 11:29:21 -- common/autotest_common.sh@10 -- # set +x 00:05:50.586 3 00:05:50.586 11:29:21 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.586 11:29:21 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:50.586 11:29:21 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.586 11:29:21 -- common/autotest_common.sh@10 -- # set +x 00:05:50.586 4 00:05:50.586 11:29:21 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.586 11:29:21 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:50.586 11:29:21 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.586 11:29:21 -- common/autotest_common.sh@10 -- # set +x 00:05:50.586 5 00:05:50.586 11:29:21 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.586 11:29:21 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:50.586 11:29:21 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.586 11:29:21 -- common/autotest_common.sh@10 -- # set +x 00:05:50.586 6 00:05:50.586 11:29:21 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.586 11:29:21 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:50.586 11:29:21 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.586 11:29:21 -- common/autotest_common.sh@10 -- # set +x 00:05:50.586 7 00:05:50.586 11:29:21 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.586 11:29:21 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:50.586 11:29:21 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.586 11:29:21 -- common/autotest_common.sh@10 -- # set +x 00:05:50.586 8 00:05:50.586 11:29:21 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.586 11:29:21 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:50.586 11:29:21 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.586 11:29:21 -- common/autotest_common.sh@10 -- # set +x 00:05:50.586 9 00:05:50.586 
11:29:21 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.586 11:29:21 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:50.586 11:29:21 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.586 11:29:21 -- common/autotest_common.sh@10 -- # set +x 00:05:50.586 10 00:05:50.586 11:29:21 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.586 11:29:21 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:50.586 11:29:21 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.586 11:29:21 -- common/autotest_common.sh@10 -- # set +x 00:05:50.586 11:29:21 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.587 11:29:21 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:50.587 11:29:21 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:50.587 11:29:21 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.587 11:29:21 -- common/autotest_common.sh@10 -- # set +x 00:05:51.525 11:29:22 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.525 11:29:22 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:51.525 11:29:22 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.525 11:29:22 -- common/autotest_common.sh@10 -- # set +x 00:05:52.904 11:29:23 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.904 11:29:23 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:52.904 11:29:23 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:52.904 11:29:23 -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.904 11:29:23 -- common/autotest_common.sh@10 -- # set +x 00:05:53.844 11:29:24 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.844 00:05:53.844 real 0m3.383s 00:05:53.844 user 0m0.024s 00:05:53.844 sys 0m0.007s 00:05:53.844 11:29:24 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:53.844 11:29:24 -- common/autotest_common.sh@10 -- # set +x 00:05:53.844 ************************************ 00:05:53.844 END TEST scheduler_create_thread 00:05:53.844 ************************************ 00:05:54.167 11:29:24 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:54.167 11:29:24 -- scheduler/scheduler.sh@46 -- # killprocess 2894883 00:05:54.167 11:29:24 -- common/autotest_common.sh@946 -- # '[' -z 2894883 ']' 00:05:54.167 11:29:24 -- common/autotest_common.sh@950 -- # kill -0 2894883 00:05:54.167 11:29:24 -- common/autotest_common.sh@951 -- # uname 00:05:54.167 11:29:24 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:54.167 11:29:24 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2894883 00:05:54.167 11:29:24 -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:05:54.167 11:29:24 -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:05:54.167 11:29:24 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2894883' 00:05:54.167 killing process with pid 2894883 00:05:54.167 11:29:24 -- common/autotest_common.sh@965 -- # kill 2894883 00:05:54.167 11:29:24 -- common/autotest_common.sh@970 -- # wait 2894883 00:05:54.426 [2024-05-15 11:29:24.965292] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
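scheduler_create_thread above drives the whole lifecycle through a test-only RPC plugin: scheduler_thread_create takes a name (-n), an optional pin cpumask (-m), and an active percentage (-a) and returns a thread id; later calls retune or delete by that id. Condensed to the verbs the run exercised, with the ids the log reports:

    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0   # returned id 11
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50             # now 50% active
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100     # returned id 12
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12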
00:05:54.426 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:54.426 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:54.426 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:54.426 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:54.426 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:54.426 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:54.426 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:54.426 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:54.685 00:05:54.685 real 0m5.181s 00:05:54.685 user 0m10.540s 00:05:54.685 sys 0m0.453s 00:05:54.685 11:29:25 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:54.685 11:29:25 -- common/autotest_common.sh@10 -- # set +x 00:05:54.685 ************************************ 00:05:54.685 END TEST event_scheduler 00:05:54.685 ************************************ 00:05:54.685 11:29:25 -- event/event.sh@51 -- # modprobe -n nbd 00:05:54.685 11:29:25 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:54.685 11:29:25 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:54.685 11:29:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:54.685 11:29:25 -- common/autotest_common.sh@10 -- # set +x 00:05:54.685 ************************************ 00:05:54.685 START TEST app_repeat 00:05:54.685 ************************************ 00:05:54.685 11:29:25 -- common/autotest_common.sh@1121 -- # app_repeat_test 00:05:54.685 11:29:25 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.685 11:29:25 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.685 11:29:25 -- event/event.sh@13 -- # local nbd_list 00:05:54.685 11:29:25 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.685 11:29:25 -- event/event.sh@14 -- # local bdev_list 00:05:54.685 11:29:25 -- event/event.sh@15 -- # local repeat_times=4 00:05:54.685 11:29:25 -- event/event.sh@17 -- # modprobe nbd 00:05:54.685 11:29:25 -- event/event.sh@19 -- # repeat_pid=2895660 00:05:54.685 11:29:25 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.685 11:29:25 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:54.685 11:29:25 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2895660' 00:05:54.685 Process app_repeat pid: 2895660 00:05:54.685 11:29:25 -- event/event.sh@23 -- # for i in {0..2} 00:05:54.685 11:29:25 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:54.685 spdk_app_start Round 0 00:05:54.685 11:29:25 -- event/event.sh@25 -- # waitforlisten 2895660 /var/tmp/spdk-nbd.sock 00:05:54.685 11:29:25 -- common/autotest_common.sh@827 -- # '[' -z 2895660 ']' 00:05:54.685 11:29:25 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.685 11:29:25 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:54.685 11:29:25 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:54.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:54.685 11:29:25 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:54.685 11:29:25 -- common/autotest_common.sh@10 -- # set +x 00:05:54.685 [2024-05-15 11:29:25.387836] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:05:54.685 [2024-05-15 11:29:25.387901] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2895660 ] 00:05:54.685 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.944 [2024-05-15 11:29:25.461555] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.944 [2024-05-15 11:29:25.552367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.944 [2024-05-15 11:29:25.552370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.512 11:29:26 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:55.512 11:29:26 -- common/autotest_common.sh@860 -- # return 0 00:05:55.512 11:29:26 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.771 Malloc0 00:05:55.771 11:29:26 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:56.031 Malloc1 00:05:56.031 11:29:26 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:56.031 11:29:26 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.031 11:29:26 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:56.031 11:29:26 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:56.031 11:29:26 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.031 11:29:26 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:56.031 11:29:26 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:56.031 11:29:26 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.031 11:29:26 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:56.031 11:29:26 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:56.031 11:29:26 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.031 11:29:26 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:56.031 11:29:26 -- bdev/nbd_common.sh@12 -- # local i 00:05:56.031 11:29:26 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:56.031 11:29:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.031 11:29:26 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:56.031 /dev/nbd0 00:05:56.031 11:29:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:56.291 11:29:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:56.291 11:29:26 -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:56.291 11:29:26 -- common/autotest_common.sh@865 -- # local i 00:05:56.291 11:29:26 -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:56.291 11:29:26 -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:56.291 11:29:26 -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions
00:05:56.291 11:29:26 -- common/autotest_common.sh@869 -- # break 00:05:56.291 11:29:26 -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:56.291 11:29:26 -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:56.291 11:29:26 -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.291 1+0 records in 00:05:56.291 1+0 records out 00:05:56.291 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020013 s, 20.5 MB/s 00:05:56.291 11:29:26 -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:56.291 11:29:26 -- common/autotest_common.sh@882 -- # size=4096 00:05:56.291 11:29:26 -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:56.291 11:29:26 -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:56.291 11:29:26 -- common/autotest_common.sh@885 -- # return 0 00:05:56.291 11:29:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.291 11:29:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.291 11:29:26 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:56.291 /dev/nbd1 00:05:56.291 11:29:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:56.291 11:29:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:56.291 11:29:26 -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:56.291 11:29:26 -- common/autotest_common.sh@865 -- # local i 00:05:56.291 11:29:26 -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:56.291 11:29:26 -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:56.291 11:29:26 -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:56.291 11:29:27 -- common/autotest_common.sh@869 -- # break 00:05:56.291 11:29:27 -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:56.291 11:29:27 -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:56.291 11:29:27 -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.291 1+0 records in 00:05:56.291 1+0 records out 00:05:56.291 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226482 s, 18.1 MB/s 00:05:56.291 11:29:27 -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:56.291 11:29:27 -- common/autotest_common.sh@882 -- # size=4096 00:05:56.291 11:29:27 -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:56.291 11:29:27 -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:56.291 11:29:27 -- common/autotest_common.sh@885 -- # return 0 00:05:56.291 11:29:27 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.291 11:29:27 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.291 11:29:27 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.291 11:29:27 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.291 11:29:27 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.550 11:29:27 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:56.550 { 00:05:56.550 "nbd_device": "/dev/nbd0", 00:05:56.550 "bdev_name": "Malloc0" 00:05:56.550 }, 00:05:56.550 { 00:05:56.550 "nbd_device": "/dev/nbd1", 00:05:56.550 "bdev_name": "Malloc1" 00:05:56.550 } 00:05:56.550 ]'
00:05:56.550 11:29:27 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:56.550 { 00:05:56.550 "nbd_device": "/dev/nbd0", 00:05:56.550 "bdev_name": "Malloc0" 00:05:56.550 }, 00:05:56.550 { 00:05:56.550 "nbd_device": "/dev/nbd1", 00:05:56.550 "bdev_name": "Malloc1" 00:05:56.550 } 00:05:56.550 ]' 00:05:56.550 11:29:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.550 11:29:27 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:56.550 /dev/nbd1' 00:05:56.550 11:29:27 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:56.550 /dev/nbd1' 00:05:56.550 11:29:27 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.550 11:29:27 -- bdev/nbd_common.sh@65 -- # count=2 00:05:56.550 11:29:27 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:56.550 11:29:27 -- bdev/nbd_common.sh@95 -- # count=2 00:05:56.550 11:29:27 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:56.550 11:29:27 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:56.550 11:29:27 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.550 11:29:27 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.550 11:29:27 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:56.550 11:29:27 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.550 11:29:27 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:56.550 11:29:27 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:56.550 256+0 records in 00:05:56.550 256+0 records out 00:05:56.550 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109545 s, 95.7 MB/s 00:05:56.550 11:29:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.550 11:29:27 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:56.550 256+0 records in 00:05:56.550 256+0 records out 00:05:56.550 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202076 s, 51.9 MB/s 00:05:56.550 11:29:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.550 11:29:27 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:56.550 256+0 records in 00:05:56.550 256+0 records out 00:05:56.550 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214769 s, 48.8 MB/s 00:05:56.550 11:29:27 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:56.550 11:29:27 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.550 11:29:27 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.550 11:29:27 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:56.550 11:29:27 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.550 11:29:27 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:56.550 11:29:27 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:56.550 11:29:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.550 11:29:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:56.809 11:29:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.809 11:29:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 
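The count check just traced comes from nbd_get_count: list the attached devices over RPC as JSON, pull out each nbd_device with jq, and count the lines matching /dev/nbd. A minimal sketch assuming rpc.py is on PATH (the trace uses the absolute workspace path); note that grep -c exits non-zero when the count is zero, which is why the trace falls back to true:

    nbd_get_count() {
        local rpc_server=$1
        local nbd_disks_json nbd_disks_name
        # nbd_get_disks returns a JSON array of {nbd_device, bdev_name} pairs
        nbd_disks_json=$(rpc.py -s "$rpc_server" nbd_get_disks)
        nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
        # count attached devices; '|| true' keeps an empty list from failing the test
        echo "$nbd_disks_name" | grep -c /dev/nbd || true
    }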
00:05:56.809 11:29:27 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.809 11:29:27 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:56.809 11:29:27 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.809 11:29:27 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.809 11:29:27 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:56.809 11:29:27 -- bdev/nbd_common.sh@51 -- # local i 00:05:56.809 11:29:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.809 11:29:27 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:56.809 11:29:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:56.809 11:29:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:56.809 11:29:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:56.809 11:29:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.809 11:29:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.809 11:29:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:56.809 11:29:27 -- bdev/nbd_common.sh@41 -- # break 00:05:56.809 11:29:27 -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.809 11:29:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.809 11:29:27 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:57.068 11:29:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:57.068 11:29:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:57.068 11:29:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:57.068 11:29:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.068 11:29:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.069 11:29:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:57.069 11:29:27 -- bdev/nbd_common.sh@41 -- # break 00:05:57.069 11:29:27 -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.069 11:29:27 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.069 11:29:27 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.069 11:29:27 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.327 11:29:27 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:57.327 11:29:27 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:57.327 11:29:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.327 11:29:27 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:57.327 11:29:27 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:57.327 11:29:27 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.327 11:29:27 -- bdev/nbd_common.sh@65 -- # true 00:05:57.327 11:29:27 -- bdev/nbd_common.sh@65 -- # count=0 00:05:57.327 11:29:27 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:57.327 11:29:27 -- bdev/nbd_common.sh@104 -- # count=0 00:05:57.327 11:29:27 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:57.327 11:29:27 -- bdev/nbd_common.sh@109 -- # return 0 00:05:57.327 11:29:27 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:57.586 11:29:28 -- event/event.sh@35 -- # sleep 3
00:05:57.586 [2024-05-15 11:29:28.346481] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.846 [2024-05-15 11:29:28.430299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.846 [2024-05-15 11:29:28.430300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.846 [2024-05-15 11:29:28.478011] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:57.846 [2024-05-15 11:29:28.478065] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:00.393 11:29:31 -- event/event.sh@23 -- # for i in {0..2} 00:06:00.393 11:29:31 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:00.393 spdk_app_start Round 1 00:06:00.393 11:29:31 -- event/event.sh@25 -- # waitforlisten 2895660 /var/tmp/spdk-nbd.sock 00:06:00.393 11:29:31 -- common/autotest_common.sh@827 -- # '[' -z 2895660 ']' 00:06:00.393 11:29:31 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.393 11:29:31 -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:00.393 11:29:31 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:00.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:00.393 11:29:31 -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:00.393 11:29:31 -- common/autotest_common.sh@10 -- # set +x 00:06:00.652 11:29:31 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:00.652 11:29:31 -- common/autotest_common.sh@860 -- # return 0 00:06:00.652 11:29:31 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.911 Malloc0 00:06:00.911 11:29:31 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.911 Malloc1 00:06:00.911 11:29:31 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.911 11:29:31 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.911 11:29:31 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.911 11:29:31 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:00.911 11:29:31 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.911 11:29:31 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:00.911 11:29:31 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.911 11:29:31 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.911 11:29:31 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.911 11:29:31 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:00.911 11:29:31 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.911 11:29:31 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:00.911 11:29:31 -- bdev/nbd_common.sh@12 -- # local i 00:06:00.911 11:29:31 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:00.911 11:29:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.911 11:29:31 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:01.170 /dev/nbd0 00:06:01.170 11:29:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:01.170 11:29:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
11:29:31 -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:01.170 11:29:31 -- common/autotest_common.sh@865 -- # local i 00:06:01.170 11:29:31 -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:01.170 11:29:31 -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:01.170 11:29:31 -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:01.170 11:29:31 -- common/autotest_common.sh@869 -- # break 00:06:01.170 11:29:31 -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:01.170 11:29:31 -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:01.170 11:29:31 -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.170 1+0 records in 00:06:01.170 1+0 records out 00:06:01.170 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206074 s, 19.9 MB/s 00:06:01.170 11:29:31 -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:01.170 11:29:31 -- common/autotest_common.sh@882 -- # size=4096 00:06:01.170 11:29:31 -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:01.170 11:29:31 -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:01.170 11:29:31 -- common/autotest_common.sh@885 -- # return 0 00:06:01.170 11:29:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.170 11:29:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.171 11:29:31 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:01.430 /dev/nbd1 00:06:01.430 11:29:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:01.430 11:29:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:01.430 11:29:32 -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:01.430 11:29:32 -- common/autotest_common.sh@865 -- # local i 00:06:01.430 11:29:32 -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:01.430 11:29:32 -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:01.430 11:29:32 -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:01.430 11:29:32 -- common/autotest_common.sh@869 -- # break 00:06:01.430 11:29:32 -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:01.430 11:29:32 -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:01.430 11:29:32 -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.430 1+0 records in 00:06:01.430 1+0 records out 00:06:01.430 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220069 s, 18.6 MB/s 00:06:01.430 11:29:32 -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:01.430 11:29:32 -- common/autotest_common.sh@882 -- # size=4096 00:06:01.430 11:29:32 -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:01.430 11:29:32 -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:01.430 11:29:32 -- common/autotest_common.sh@885 -- # return 0 00:06:01.430 11:29:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.430 11:29:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.430 11:29:32 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.430 11:29:32 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.430 
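waitfornbd, traced twice just above, makes device attachment synchronous in two phases: poll /proc/partitions until the kernel publishes the device, then prove it actually serves I/O with a single direct 4 KiB read. A minimal sketch; the retry count matches the trace, while the scratch path is an assumption (the trace writes under spdk/test/event/nbdtest):

    waitfornbd() {
        local nbd_name=$1 i size
        # phase 1: wait for the device to appear in /proc/partitions
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        ((i <= 20)) || return 1
        # phase 2: one O_DIRECT read must return a non-empty block
        for ((i = 1; i <= 20; i++)); do
            if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2>/dev/null; then
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [ "$size" != 0 ] && return 0   # the '[' 4096 '!=' 0 ']' check in the trace
            fi
            sleep 0.1
        done
        return 1
    }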
11:29:32 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.689 11:29:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:01.689 { 00:06:01.689 "nbd_device": "/dev/nbd0", 00:06:01.689 "bdev_name": "Malloc0" 00:06:01.689 }, 00:06:01.689 { 00:06:01.689 "nbd_device": "/dev/nbd1", 00:06:01.689 "bdev_name": "Malloc1" 00:06:01.689 } 00:06:01.689 ]' 00:06:01.689 11:29:32 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.689 11:29:32 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:01.689 { 00:06:01.689 "nbd_device": "/dev/nbd0", 00:06:01.689 "bdev_name": "Malloc0" 00:06:01.689 }, 00:06:01.689 { 00:06:01.689 "nbd_device": "/dev/nbd1", 00:06:01.689 "bdev_name": "Malloc1" 00:06:01.689 } 00:06:01.689 ]' 00:06:01.689 11:29:32 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:01.689 /dev/nbd1' 00:06:01.689 11:29:32 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:01.689 /dev/nbd1' 00:06:01.689 11:29:32 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.689 11:29:32 -- bdev/nbd_common.sh@65 -- # count=2 00:06:01.689 11:29:32 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:01.689 11:29:32 -- bdev/nbd_common.sh@95 -- # count=2 00:06:01.689 11:29:32 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:01.689 11:29:32 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:01.689 11:29:32 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.689 11:29:32 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.689 11:29:32 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:01.689 11:29:32 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.689 11:29:32 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:01.689 11:29:32 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:01.689 256+0 records in 00:06:01.689 256+0 records out 00:06:01.689 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104913 s, 99.9 MB/s 00:06:01.689 11:29:32 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.689 11:29:32 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:01.689 256+0 records in 00:06:01.689 256+0 records out 00:06:01.689 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0195633 s, 53.6 MB/s 00:06:01.689 11:29:32 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.689 11:29:32 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:01.689 256+0 records in 00:06:01.689 256+0 records out 00:06:01.689 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213306 s, 49.2 MB/s 00:06:01.689 11:29:32 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:01.689 11:29:32 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.689 11:29:32 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.689 11:29:32 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:01.689 11:29:32 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.689 11:29:32 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:01.689 11:29:32 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:01.689 
11:29:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.689 11:29:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:01.689 11:29:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.689 11:29:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:01.689 11:29:32 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.689 11:29:32 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:01.689 11:29:32 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.689 11:29:32 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.689 11:29:32 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:01.689 11:29:32 -- bdev/nbd_common.sh@51 -- # local i 00:06:01.689 11:29:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.689 11:29:32 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:01.948 11:29:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:01.948 11:29:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:01.948 11:29:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:01.948 11:29:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.948 11:29:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.949 11:29:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:01.949 11:29:32 -- bdev/nbd_common.sh@41 -- # break 00:06:01.949 11:29:32 -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.949 11:29:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.949 11:29:32 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:02.207 11:29:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:02.207 11:29:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:02.207 11:29:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:02.207 11:29:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.207 11:29:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.207 11:29:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:02.207 11:29:32 -- bdev/nbd_common.sh@41 -- # break 00:06:02.207 11:29:32 -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.207 11:29:32 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.207 11:29:32 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.207 11:29:32 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.207 11:29:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:02.467 11:29:32 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:02.467 11:29:32 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.467 11:29:33 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:02.467 11:29:33 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:02.467 11:29:33 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.467 11:29:33 -- bdev/nbd_common.sh@65 -- # true 00:06:02.467 11:29:33 -- bdev/nbd_common.sh@65 -- # count=0 00:06:02.467 11:29:33 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:02.467 11:29:33 -- bdev/nbd_common.sh@104 -- # count=0 00:06:02.467 
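The dd/cmp sequence traced above is nbd_dd_data_verify: the write pass pushes 1 MiB of /dev/urandom through each device with O_DIRECT, and the verify pass reads it back with cmp against the same scratch file, so a single flipped byte fails the round. A minimal sketch with an assumed scratch path (the trace uses spdk/test/event/nbdrandtest):

    nbd_dd_data_verify() {
        local nbd_list=($1) operation=$2 i
        local tmp_file=/tmp/nbdrandtest
        if [ "$operation" = write ]; then
            # 256 x 4 KiB = 1 MiB of random data, written once per device
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = verify ]; then
            for i in "${nbd_list[@]}"; do
                # -b prints the first differing bytes; -n 1M bounds the compare
                # to the region that was written; a nonzero exit fails the test
                cmp -b -n 1M "$tmp_file" "$i"
            done
            rm "$tmp_file"
        fi
    }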
11:29:33 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:02.467 11:29:33 -- bdev/nbd_common.sh@109 -- # return 0 00:06:02.467 11:29:33 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:02.467 11:29:33 -- event/event.sh@35 -- # sleep 3 00:06:02.725 [2024-05-15 11:29:33.438580] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:02.984 [2024-05-15 11:29:33.522078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.984 [2024-05-15 11:29:33.522100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.984 [2024-05-15 11:29:33.571160] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:02.984 [2024-05-15 11:29:33.571213] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:05.518 11:29:36 -- event/event.sh@23 -- # for i in {0..2} 00:06:05.518 11:29:36 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:05.518 spdk_app_start Round 2 00:06:05.518 11:29:36 -- event/event.sh@25 -- # waitforlisten 2895660 /var/tmp/spdk-nbd.sock 00:06:05.518 11:29:36 -- common/autotest_common.sh@827 -- # '[' -z 2895660 ']' 00:06:05.518 11:29:36 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:05.518 11:29:36 -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:05.518 11:29:36 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:05.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:05.518 11:29:36 -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:05.518 11:29:36 -- common/autotest_common.sh@10 -- # set +x 00:06:05.776 11:29:36 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:05.776 11:29:36 -- common/autotest_common.sh@860 -- # return 0 00:06:05.776 11:29:36 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.036 Malloc0 00:06:06.036 11:29:36 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.036 Malloc1 00:06:06.036 11:29:36 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.036 11:29:36 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.036 11:29:36 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.036 11:29:36 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:06.036 11:29:36 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.036 11:29:36 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:06.036 11:29:36 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.036 11:29:36 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.036 11:29:36 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.036 11:29:36 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:06.036 11:29:36 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.036 11:29:36 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:06.036 11:29:36 -- bdev/nbd_common.sh@12 -- # local i 00:06:06.036 11:29:36 -- bdev/nbd_common.sh@14 -- 
# (( i = 0 )) 00:06:06.036 11:29:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.036 11:29:36 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:06.295 /dev/nbd0 00:06:06.295 11:29:36 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:06.295 11:29:36 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:06.295 11:29:36 -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:06.295 11:29:36 -- common/autotest_common.sh@865 -- # local i 00:06:06.295 11:29:36 -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:06.295 11:29:36 -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:06.295 11:29:36 -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:06.295 11:29:36 -- common/autotest_common.sh@869 -- # break 00:06:06.295 11:29:36 -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:06.295 11:29:36 -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:06.295 11:29:36 -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.295 1+0 records in 00:06:06.295 1+0 records out 00:06:06.295 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253064 s, 16.2 MB/s 00:06:06.295 11:29:36 -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:06.295 11:29:36 -- common/autotest_common.sh@882 -- # size=4096 00:06:06.295 11:29:36 -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:06.295 11:29:36 -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:06.295 11:29:36 -- common/autotest_common.sh@885 -- # return 0 00:06:06.295 11:29:36 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.295 11:29:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.295 11:29:36 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:06.554 /dev/nbd1 00:06:06.554 11:29:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:06.554 11:29:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:06.554 11:29:37 -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:06.554 11:29:37 -- common/autotest_common.sh@865 -- # local i 00:06:06.554 11:29:37 -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:06.554 11:29:37 -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:06.554 11:29:37 -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:06.554 11:29:37 -- common/autotest_common.sh@869 -- # break 00:06:06.554 11:29:37 -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:06.554 11:29:37 -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:06.554 11:29:37 -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.554 1+0 records in 00:06:06.554 1+0 records out 00:06:06.554 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211821 s, 19.3 MB/s 00:06:06.554 11:29:37 -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:06.554 11:29:37 -- common/autotest_common.sh@882 -- # size=4096 00:06:06.554 11:29:37 -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:06.554 11:29:37 -- 
common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:06.554 11:29:37 -- common/autotest_common.sh@885 -- # return 0 00:06:06.554 11:29:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.554 11:29:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.554 11:29:37 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.554 11:29:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.554 11:29:37 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.813 11:29:37 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:06.813 { 00:06:06.813 "nbd_device": "/dev/nbd0", 00:06:06.813 "bdev_name": "Malloc0" 00:06:06.813 }, 00:06:06.813 { 00:06:06.813 "nbd_device": "/dev/nbd1", 00:06:06.813 "bdev_name": "Malloc1" 00:06:06.813 } 00:06:06.813 ]' 00:06:06.813 11:29:37 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:06.813 { 00:06:06.813 "nbd_device": "/dev/nbd0", 00:06:06.813 "bdev_name": "Malloc0" 00:06:06.813 }, 00:06:06.813 { 00:06:06.813 "nbd_device": "/dev/nbd1", 00:06:06.813 "bdev_name": "Malloc1" 00:06:06.813 } 00:06:06.813 ]' 00:06:06.813 11:29:37 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.813 11:29:37 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:06.813 /dev/nbd1' 00:06:06.813 11:29:37 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:06.813 /dev/nbd1' 00:06:06.813 11:29:37 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.813 11:29:37 -- bdev/nbd_common.sh@65 -- # count=2 00:06:06.813 11:29:37 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:06.813 11:29:37 -- bdev/nbd_common.sh@95 -- # count=2 00:06:06.813 11:29:37 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:06.813 11:29:37 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:06.813 11:29:37 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.813 11:29:37 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.813 11:29:37 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:06.813 11:29:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.813 11:29:37 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:06.813 11:29:37 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:06.813 256+0 records in 00:06:06.813 256+0 records out 00:06:06.813 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103056 s, 102 MB/s 00:06:06.813 11:29:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.813 11:29:37 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:06.813 256+0 records in 00:06:06.813 256+0 records out 00:06:06.813 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201125 s, 52.1 MB/s 00:06:06.813 11:29:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.813 11:29:37 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:06.813 256+0 records in 00:06:06.813 256+0 records out 00:06:06.813 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021656 s, 48.4 MB/s 00:06:06.813 11:29:37 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:06.813 11:29:37 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:06:06.813 11:29:37 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.813 11:29:37 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:06.813 11:29:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.813 11:29:37 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:06.813 11:29:37 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:06.813 11:29:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.813 11:29:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:06.813 11:29:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.814 11:29:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:06.814 11:29:37 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.814 11:29:37 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:06.814 11:29:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.814 11:29:37 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.814 11:29:37 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:06.814 11:29:37 -- bdev/nbd_common.sh@51 -- # local i 00:06:06.814 11:29:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.814 11:29:37 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:07.073 11:29:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:07.073 11:29:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:07.073 11:29:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:07.073 11:29:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.073 11:29:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.073 11:29:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:07.073 11:29:37 -- bdev/nbd_common.sh@41 -- # break 00:06:07.073 11:29:37 -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.073 11:29:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.073 11:29:37 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:07.332 11:29:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:07.332 11:29:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:07.332 11:29:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:07.332 11:29:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.332 11:29:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.332 11:29:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:07.332 11:29:37 -- bdev/nbd_common.sh@41 -- # break 00:06:07.332 11:29:37 -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.332 11:29:37 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.332 11:29:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.332 11:29:37 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:07.332 11:29:38 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:07.591 11:29:38 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:07.591 11:29:38 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
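Teardown, traced just above, mirrors startup: nbd_stop_disks asks the target to detach each device over RPC, then waitfornbd_exit polls /proc/partitions until the kernel has really dropped it, so the next round starts clean. A minimal sketch under the same naming assumptions as the earlier sketches:

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break   # gone: stop polling
            sleep 0.1
        done
        ((i <= 20))    # fails if the device never disappeared
    }

    nbd_stop_disks() {
        local rpc_server=$1 i
        local nbd_list=($2)
        for i in "${nbd_list[@]}"; do
            rpc.py -s "$rpc_server" nbd_stop_disk "$i"
            waitfornbd_exit "$(basename "$i")"
        done
    }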
00:06:07.591 11:29:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:07.591 11:29:38 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:07.591 11:29:38 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:07.591 11:29:38 -- bdev/nbd_common.sh@65 -- # true 00:06:07.591 11:29:38 -- bdev/nbd_common.sh@65 -- # count=0 00:06:07.591 11:29:38 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:07.591 11:29:38 -- bdev/nbd_common.sh@104 -- # count=0 00:06:07.591 11:29:38 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:07.591 11:29:38 -- bdev/nbd_common.sh@109 -- # return 0 00:06:07.591 11:29:38 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:07.591 11:29:38 -- event/event.sh@35 -- # sleep 3 00:06:07.850 [2024-05-15 11:29:38.567154] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:08.110 [2024-05-15 11:29:38.651361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.110 [2024-05-15 11:29:38.651362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.110 [2024-05-15 11:29:38.699671] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:08.110 [2024-05-15 11:29:38.699722] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:10.645 11:29:41 -- event/event.sh@38 -- # waitforlisten 2895660 /var/tmp/spdk-nbd.sock 00:06:10.645 11:29:41 -- common/autotest_common.sh@827 -- # '[' -z 2895660 ']' 00:06:10.645 11:29:41 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:10.645 11:29:41 -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:10.645 11:29:41 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:10.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:10.645 11:29:41 -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:10.645 11:29:41 -- common/autotest_common.sh@10 -- # set +x 00:06:10.903 11:29:41 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:10.903 11:29:41 -- common/autotest_common.sh@860 -- # return 0 00:06:10.903 11:29:41 -- event/event.sh@39 -- # killprocess 2895660 00:06:10.903 11:29:41 -- common/autotest_common.sh@946 -- # '[' -z 2895660 ']' 00:06:10.903 11:29:41 -- common/autotest_common.sh@950 -- # kill -0 2895660 00:06:10.903 11:29:41 -- common/autotest_common.sh@951 -- # uname 00:06:10.904 11:29:41 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:10.904 11:29:41 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2895660 00:06:10.904 11:29:41 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:10.904 11:29:41 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:10.904 11:29:41 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2895660' 00:06:10.904 killing process with pid 2895660 00:06:10.904 11:29:41 -- common/autotest_common.sh@965 -- # kill 2895660 00:06:10.904 11:29:41 -- common/autotest_common.sh@970 -- # wait 2895660 00:06:11.163 spdk_app_start is called in Round 0. 00:06:11.163 Shutdown signal received, stop current app iteration 00:06:11.163 Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 reinitialization... 00:06:11.163 spdk_app_start is called in Round 1. 
00:06:11.163 Shutdown signal received, stop current app iteration 00:06:11.163 Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 reinitialization... 00:06:11.163 spdk_app_start is called in Round 2. 00:06:11.163 Shutdown signal received, stop current app iteration 00:06:11.163 Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 reinitialization... 00:06:11.163 spdk_app_start is called in Round 3. 00:06:11.163 Shutdown signal received, stop current app iteration 00:06:11.163 11:29:41 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:11.163 11:29:41 -- event/event.sh@42 -- # return 0 00:06:11.163 00:06:11.163 real 0m16.434s 00:06:11.163 user 0m34.662s 00:06:11.163 sys 0m3.145s 00:06:11.163 11:29:41 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:11.163 11:29:41 -- common/autotest_common.sh@10 -- # set +x 00:06:11.163 ************************************ 00:06:11.163 END TEST app_repeat 00:06:11.163 ************************************ 00:06:11.163 11:29:41 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:11.163 11:29:41 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:11.163 11:29:41 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:11.163 11:29:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:11.163 11:29:41 -- common/autotest_common.sh@10 -- # set +x 00:06:11.163 ************************************ 00:06:11.163 START TEST cpu_locks 00:06:11.163 ************************************ 00:06:11.163 11:29:41 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:11.422 * Looking for test storage... 00:06:11.422 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:11.422 11:29:41 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:11.422 11:29:41 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:11.422 11:29:41 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:11.422 11:29:41 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:11.422 11:29:41 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:11.422 11:29:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:11.422 11:29:41 -- common/autotest_common.sh@10 -- # set +x 00:06:11.422 ************************************ 00:06:11.422 START TEST default_locks 00:06:11.422 ************************************ 00:06:11.422 11:29:42 -- common/autotest_common.sh@1121 -- # default_locks 00:06:11.422 11:29:42 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2898078 00:06:11.422 11:29:42 -- event/cpu_locks.sh@47 -- # waitforlisten 2898078 00:06:11.422 11:29:42 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:11.422 11:29:42 -- common/autotest_common.sh@827 -- # '[' -z 2898078 ']' 00:06:11.422 11:29:42 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.422 11:29:42 -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:11.422 11:29:42 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
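waitforlisten, which prints the banner above, is what keeps each test from racing its freshly launched spdk_tgt: it polls until the UNIX domain RPC socket answers, giving up after max_retries or as soon as the pid dies. The trace hides the poll body behind xtrace_disable, so the probe below is an assumption; only the banner text, the rpc_addr default, and max_retries=100 come from the log:

    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i=0
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while ((i < max_retries)); do
            kill -0 "$pid" || return 1    # target died before it ever listened
            # assumed probe: any successful RPC round-trip proves the server is up
            rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
            ((++i))
        done
        return 1
    }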
00:06:11.422 11:29:42 -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:11.422 11:29:42 -- common/autotest_common.sh@10 -- # set +x 00:06:11.422 [2024-05-15 11:29:42.074252] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:06:11.422 [2024-05-15 11:29:42.074306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2898078 ] 00:06:11.422 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.422 [2024-05-15 11:29:42.142998] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.681 [2024-05-15 11:29:42.231436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.249 11:29:42 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:12.249 11:29:42 -- common/autotest_common.sh@860 -- # return 0 00:06:12.249 11:29:42 -- event/cpu_locks.sh@49 -- # locks_exist 2898078 00:06:12.249 11:29:42 -- event/cpu_locks.sh@22 -- # lslocks -p 2898078 00:06:12.249 11:29:42 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:12.822 lslocks: write error 00:06:12.822 11:29:43 -- event/cpu_locks.sh@50 -- # killprocess 2898078 00:06:12.822 11:29:43 -- common/autotest_common.sh@946 -- # '[' -z 2898078 ']' 00:06:12.822 11:29:43 -- common/autotest_common.sh@950 -- # kill -0 2898078 00:06:12.822 11:29:43 -- common/autotest_common.sh@951 -- # uname 00:06:12.822 11:29:43 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:12.822 11:29:43 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2898078 00:06:12.822 11:29:43 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:12.822 11:29:43 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:12.822 11:29:43 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2898078' 00:06:12.822 killing process with pid 2898078 00:06:12.822 11:29:43 -- common/autotest_common.sh@965 -- # kill 2898078 00:06:12.822 11:29:43 -- common/autotest_common.sh@970 -- # wait 2898078 00:06:13.389 11:29:43 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2898078 00:06:13.389 11:29:43 -- common/autotest_common.sh@648 -- # local es=0 00:06:13.389 11:29:43 -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2898078 00:06:13.389 11:29:43 -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:13.389 11:29:43 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:13.389 11:29:43 -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:13.389 11:29:43 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:13.389 11:29:43 -- common/autotest_common.sh@651 -- # waitforlisten 2898078 00:06:13.389 11:29:43 -- common/autotest_common.sh@827 -- # '[' -z 2898078 ']' 00:06:13.389 11:29:43 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.389 11:29:43 -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:13.389 11:29:43 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:13.389 11:29:43 -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:13.389 11:29:43 -- common/autotest_common.sh@10 -- # set +x 00:06:13.389 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (2898078) - No such process 00:06:13.389 ERROR: process (pid: 2898078) is no longer running 00:06:13.389 11:29:43 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:13.389 11:29:43 -- common/autotest_common.sh@860 -- # return 1 00:06:13.389 11:29:43 -- common/autotest_common.sh@651 -- # es=1 00:06:13.389 11:29:43 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:13.389 11:29:43 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:13.389 11:29:43 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:13.389 11:29:43 -- event/cpu_locks.sh@54 -- # no_locks 00:06:13.389 11:29:43 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:13.389 11:29:43 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:13.389 11:29:43 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:13.389 00:06:13.389 real 0m1.930s 00:06:13.389 user 0m2.005s 00:06:13.389 sys 0m0.746s 00:06:13.389 11:29:43 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:13.389 11:29:43 -- common/autotest_common.sh@10 -- # set +x 00:06:13.389 ************************************ 00:06:13.389 END TEST default_locks 00:06:13.389 ************************************ 00:06:13.389 11:29:43 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:13.389 11:29:43 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:13.389 11:29:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:13.389 11:29:43 -- common/autotest_common.sh@10 -- # set +x 00:06:13.389 ************************************ 00:06:13.389 START TEST default_locks_via_rpc 00:06:13.389 ************************************ 00:06:13.389 11:29:44 -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:06:13.389 11:29:44 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2898383 00:06:13.389 11:29:44 -- event/cpu_locks.sh@63 -- # waitforlisten 2898383 00:06:13.389 11:29:44 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:13.389 11:29:44 -- common/autotest_common.sh@827 -- # '[' -z 2898383 ']' 00:06:13.389 11:29:44 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.389 11:29:44 -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:13.389 11:29:44 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.389 11:29:44 -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:13.389 11:29:44 -- common/autotest_common.sh@10 -- # set +x 00:06:13.389 [2024-05-15 11:29:44.094377] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
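The negative test traced a few entries above uses the NOT wrapper: run a command that is expected to fail and invert the verdict. The trace shows its key moves, validating the argument with type -t, capturing the exit status, treating statuses above 128 (signal deaths) specially, then the final arithmetic inversion. A minimal sketch of the visible logic:

    NOT() {
        local es=0
        # the real helper first confirms "$1" is runnable via: type -t "$1"
        "$@" || es=$?
        if ((es > 128)); then
            # >128 means the command was killed by signal (es - 128); the
            # trace's extra handling of that case is elided in this sketch
            es=$((es - 128))
        fi
        (( !es == 0 ))   # succeed exactly when the wrapped command failed
    }

Used as in the trace, NOT waitforlisten 2898078 passes only because attaching to the already-killed daemon correctly fails.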
00:06:13.389 [2024-05-15 11:29:44.094427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2898383 ] 00:06:13.389 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.648 [2024-05-15 11:29:44.167135] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.648 [2024-05-15 11:29:44.249516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.216 11:29:44 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:14.216 11:29:44 -- common/autotest_common.sh@860 -- # return 0 00:06:14.216 11:29:44 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:14.216 11:29:44 -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.216 11:29:44 -- common/autotest_common.sh@10 -- # set +x 00:06:14.216 11:29:44 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.216 11:29:44 -- event/cpu_locks.sh@67 -- # no_locks 00:06:14.216 11:29:44 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:14.216 11:29:44 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:14.216 11:29:44 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:14.216 11:29:44 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:14.216 11:29:44 -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.216 11:29:44 -- common/autotest_common.sh@10 -- # set +x 00:06:14.216 11:29:44 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.216 11:29:44 -- event/cpu_locks.sh@71 -- # locks_exist 2898383 00:06:14.216 11:29:44 -- event/cpu_locks.sh@22 -- # lslocks -p 2898383 00:06:14.216 11:29:44 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:14.786 11:29:45 -- event/cpu_locks.sh@73 -- # killprocess 2898383 00:06:14.786 11:29:45 -- common/autotest_common.sh@946 -- # '[' -z 2898383 ']' 00:06:14.786 11:29:45 -- common/autotest_common.sh@950 -- # kill -0 2898383 00:06:14.786 11:29:45 -- common/autotest_common.sh@951 -- # uname 00:06:14.786 11:29:45 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:14.786 11:29:45 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2898383 00:06:14.786 11:29:45 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:14.786 11:29:45 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:14.786 11:29:45 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2898383' 00:06:14.786 killing process with pid 2898383 00:06:14.786 11:29:45 -- common/autotest_common.sh@965 -- # kill 2898383 00:06:14.786 11:29:45 -- common/autotest_common.sh@970 -- # wait 2898383 00:06:15.045 00:06:15.045 real 0m1.619s 00:06:15.045 user 0m1.671s 00:06:15.045 sys 0m0.539s 00:06:15.045 11:29:45 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:15.045 11:29:45 -- common/autotest_common.sh@10 -- # set +x 00:06:15.045 ************************************ 00:06:15.045 END TEST default_locks_via_rpc 00:06:15.045 ************************************ 00:06:15.045 11:29:45 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:15.045 11:29:45 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:15.045 11:29:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:15.045 11:29:45 -- common/autotest_common.sh@10 -- # set +x 00:06:15.045 ************************************ 00:06:15.045 START TEST non_locking_app_on_locked_coremask 
00:06:15.045 ************************************ 00:06:15.045 11:29:45 -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:06:15.045 11:29:45 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2898679 00:06:15.045 11:29:45 -- event/cpu_locks.sh@81 -- # waitforlisten 2898679 /var/tmp/spdk.sock 00:06:15.045 11:29:45 -- common/autotest_common.sh@827 -- # '[' -z 2898679 ']' 00:06:15.045 11:29:45 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.045 11:29:45 -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:15.045 11:29:45 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.045 11:29:45 -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:15.045 11:29:45 -- common/autotest_common.sh@10 -- # set +x 00:06:15.045 11:29:45 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.045 [2024-05-15 11:29:45.794978] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:06:15.045 [2024-05-15 11:29:45.795032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2898679 ] 00:06:15.304 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.304 [2024-05-15 11:29:45.865708] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.304 [2024-05-15 11:29:45.957776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.871 11:29:46 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:15.871 11:29:46 -- common/autotest_common.sh@860 -- # return 0 00:06:15.871 11:29:46 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2898699 00:06:15.871 11:29:46 -- event/cpu_locks.sh@85 -- # waitforlisten 2898699 /var/tmp/spdk2.sock 00:06:15.871 11:29:46 -- common/autotest_common.sh@827 -- # '[' -z 2898699 ']' 00:06:15.871 11:29:46 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.871 11:29:46 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:15.871 11:29:46 -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:15.871 11:29:46 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:15.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:15.871 11:29:46 -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:15.871 11:29:46 -- common/autotest_common.sh@10 -- # set +x 00:06:15.871 [2024-05-15 11:29:46.626480] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:06:15.871 [2024-05-15 11:29:46.626541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2898699 ] 00:06:16.131 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.131 [2024-05-15 11:29:46.722814] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
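The "CPU core locks deactivated." notice above is the key to this test: an SPDK target normally claims one lock file per core in its cpumask, and a target started with --disable-cpumask-locks skips that claim, which is why two instances can share core 0 here. A hypothetical re-creation of a single core claim, assuming the flock-based scheme and the /var/tmp/spdk_cpu_lock_NNN naming visible further down in the trace:

    exec {lock_fd}>/var/tmp/spdk_cpu_lock_000      # core 0 -> ..._000
    if ! flock -xn "$lock_fd"; then                # exclusive, non-blocking
        echo "Cannot create lock on core 0" >&2    # mirrors the app.c error text
        exit 1
    fi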
00:06:16.131 [2024-05-15 11:29:46.722846] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.390 [2024-05-15 11:29:46.908144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.958 11:29:47 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:16.958 11:29:47 -- common/autotest_common.sh@860 -- # return 0 00:06:16.958 11:29:47 -- event/cpu_locks.sh@87 -- # locks_exist 2898679 00:06:16.958 11:29:47 -- event/cpu_locks.sh@22 -- # lslocks -p 2898679 00:06:16.958 11:29:47 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.218 lslocks: write error 00:06:17.218 11:29:47 -- event/cpu_locks.sh@89 -- # killprocess 2898679 00:06:17.218 11:29:47 -- common/autotest_common.sh@946 -- # '[' -z 2898679 ']' 00:06:17.218 11:29:47 -- common/autotest_common.sh@950 -- # kill -0 2898679 00:06:17.218 11:29:47 -- common/autotest_common.sh@951 -- # uname 00:06:17.218 11:29:47 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:17.218 11:29:47 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2898679 00:06:17.476 11:29:48 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:17.476 11:29:48 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:17.476 11:29:48 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2898679' 00:06:17.476 killing process with pid 2898679 00:06:17.476 11:29:48 -- common/autotest_common.sh@965 -- # kill 2898679 00:06:17.476 11:29:48 -- common/autotest_common.sh@970 -- # wait 2898679 00:06:18.043 11:29:48 -- event/cpu_locks.sh@90 -- # killprocess 2898699 00:06:18.043 11:29:48 -- common/autotest_common.sh@946 -- # '[' -z 2898699 ']' 00:06:18.043 11:29:48 -- common/autotest_common.sh@950 -- # kill -0 2898699 00:06:18.043 11:29:48 -- common/autotest_common.sh@951 -- # uname 00:06:18.043 11:29:48 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:18.043 11:29:48 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2898699 00:06:18.300 11:29:48 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:18.300 11:29:48 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:18.300 11:29:48 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2898699' 00:06:18.300 killing process with pid 2898699 00:06:18.300 11:29:48 -- common/autotest_common.sh@965 -- # kill 2898699 00:06:18.300 11:29:48 -- common/autotest_common.sh@970 -- # wait 2898699 00:06:18.558 00:06:18.558 real 0m3.417s 00:06:18.558 user 0m3.593s 00:06:18.558 sys 0m1.026s 00:06:18.558 11:29:49 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:18.558 11:29:49 -- common/autotest_common.sh@10 -- # set +x 00:06:18.558 ************************************ 00:06:18.558 END TEST non_locking_app_on_locked_coremask 00:06:18.558 ************************************ 00:06:18.558 11:29:49 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:18.558 11:29:49 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:18.558 11:29:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:18.558 11:29:49 -- common/autotest_common.sh@10 -- # set +x 00:06:18.558 ************************************ 00:06:18.558 START TEST locking_app_on_unlocked_coremask 00:06:18.558 ************************************ 00:06:18.558 11:29:49 -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:06:18.558 11:29:49 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2899096 00:06:18.558 11:29:49 -- 
event/cpu_locks.sh@99 -- # waitforlisten 2899096 /var/tmp/spdk.sock 00:06:18.558 11:29:49 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:18.558 11:29:49 -- common/autotest_common.sh@827 -- # '[' -z 2899096 ']' 00:06:18.558 11:29:49 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.558 11:29:49 -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:18.558 11:29:49 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.558 11:29:49 -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:18.558 11:29:49 -- common/autotest_common.sh@10 -- # set +x 00:06:18.558 [2024-05-15 11:29:49.303390] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:06:18.558 [2024-05-15 11:29:49.303445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2899096 ] 00:06:18.817 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.817 [2024-05-15 11:29:49.373787] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:18.817 [2024-05-15 11:29:49.373822] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.817 [2024-05-15 11:29:49.464985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.384 11:29:50 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:19.384 11:29:50 -- common/autotest_common.sh@860 -- # return 0 00:06:19.384 11:29:50 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:19.384 11:29:50 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2899279 00:06:19.384 11:29:50 -- event/cpu_locks.sh@103 -- # waitforlisten 2899279 /var/tmp/spdk2.sock 00:06:19.384 11:29:50 -- common/autotest_common.sh@827 -- # '[' -z 2899279 ']' 00:06:19.384 11:29:50 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.384 11:29:50 -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:19.384 11:29:50 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.384 11:29:50 -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:19.384 11:29:50 -- common/autotest_common.sh@10 -- # set +x 00:06:19.384 [2024-05-15 11:29:50.141833] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
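The locks_exist checks that follow verify the claim from the outside: lslocks lists the file locks held by the target pid and grep looks for the spdk_cpu_lock files. Taken straight from the trace:

    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock    # succeeds iff the pid holds a core lock file
    }

The stray "lslocks: write error" lines in this output are expected noise: grep -q exits on its first match and closes the pipe, so lslocks fails its remaining writes.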
00:06:19.384 [2024-05-15 11:29:50.141888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2899279 ] 00:06:19.643 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.643 [2024-05-15 11:29:50.237738] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.901 [2024-05-15 11:29:50.421761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.466 11:29:50 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:20.466 11:29:50 -- common/autotest_common.sh@860 -- # return 0 00:06:20.466 11:29:50 -- event/cpu_locks.sh@105 -- # locks_exist 2899279 00:06:20.466 11:29:50 -- event/cpu_locks.sh@22 -- # lslocks -p 2899279 00:06:20.466 11:29:50 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:21.399 lslocks: write error 00:06:21.400 11:29:52 -- event/cpu_locks.sh@107 -- # killprocess 2899096 00:06:21.400 11:29:52 -- common/autotest_common.sh@946 -- # '[' -z 2899096 ']' 00:06:21.400 11:29:52 -- common/autotest_common.sh@950 -- # kill -0 2899096 00:06:21.400 11:29:52 -- common/autotest_common.sh@951 -- # uname 00:06:21.400 11:29:52 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:21.400 11:29:52 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2899096 00:06:21.400 11:29:52 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:21.400 11:29:52 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:21.400 11:29:52 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2899096' 00:06:21.400 killing process with pid 2899096 00:06:21.400 11:29:52 -- common/autotest_common.sh@965 -- # kill 2899096 00:06:21.400 11:29:52 -- common/autotest_common.sh@970 -- # wait 2899096 00:06:22.383 11:29:52 -- event/cpu_locks.sh@108 -- # killprocess 2899279 00:06:22.383 11:29:52 -- common/autotest_common.sh@946 -- # '[' -z 2899279 ']' 00:06:22.383 11:29:52 -- common/autotest_common.sh@950 -- # kill -0 2899279 00:06:22.383 11:29:52 -- common/autotest_common.sh@951 -- # uname 00:06:22.383 11:29:52 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:22.383 11:29:52 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2899279 00:06:22.383 11:29:52 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:22.383 11:29:52 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:22.383 11:29:52 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2899279' 00:06:22.383 killing process with pid 2899279 00:06:22.383 11:29:52 -- common/autotest_common.sh@965 -- # kill 2899279 00:06:22.383 11:29:52 -- common/autotest_common.sh@970 -- # wait 2899279 00:06:22.672 00:06:22.672 real 0m4.065s 00:06:22.672 user 0m4.297s 00:06:22.672 sys 0m1.371s 00:06:22.672 11:29:53 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:22.672 11:29:53 -- common/autotest_common.sh@10 -- # set +x 00:06:22.672 ************************************ 00:06:22.672 END TEST locking_app_on_unlocked_coremask 00:06:22.672 ************************************ 00:06:22.672 11:29:53 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:22.672 11:29:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:22.672 11:29:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:22.672 11:29:53 -- common/autotest_common.sh@10 -- # set +x 00:06:22.672 
************************************ 00:06:22.672 START TEST locking_app_on_locked_coremask 00:06:22.672 ************************************ 00:06:22.672 11:29:53 -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:06:22.672 11:29:53 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2899684 00:06:22.672 11:29:53 -- event/cpu_locks.sh@116 -- # waitforlisten 2899684 /var/tmp/spdk.sock 00:06:22.672 11:29:53 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:22.672 11:29:53 -- common/autotest_common.sh@827 -- # '[' -z 2899684 ']' 00:06:22.672 11:29:53 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.672 11:29:53 -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:22.672 11:29:53 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.672 11:29:53 -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:22.672 11:29:53 -- common/autotest_common.sh@10 -- # set +x 00:06:22.931 [2024-05-15 11:29:53.461372] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:06:22.931 [2024-05-15 11:29:53.461435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2899684 ] 00:06:22.931 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.931 [2024-05-15 11:29:53.536515] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.931 [2024-05-15 11:29:53.630747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.866 11:29:54 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:23.866 11:29:54 -- common/autotest_common.sh@860 -- # return 0 00:06:23.866 11:29:54 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:23.866 11:29:54 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2899867 00:06:23.866 11:29:54 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2899867 /var/tmp/spdk2.sock 00:06:23.866 11:29:54 -- common/autotest_common.sh@648 -- # local es=0 00:06:23.866 11:29:54 -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2899867 /var/tmp/spdk2.sock 00:06:23.866 11:29:54 -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:23.866 11:29:54 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:23.866 11:29:54 -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:23.866 11:29:54 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:23.866 11:29:54 -- common/autotest_common.sh@651 -- # waitforlisten 2899867 /var/tmp/spdk2.sock 00:06:23.866 11:29:54 -- common/autotest_common.sh@827 -- # '[' -z 2899867 ']' 00:06:23.866 11:29:54 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:23.866 11:29:54 -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:23.866 11:29:54 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:23.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
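This test starts a second target on the already-locked core and wraps waitforlisten in NOT, so the test passes only when the second target fails to come up. A simplified sketch of the inversion; the real helper, whose es bookkeeping is visible in the trace, also treats exit codes above 128 (signal deaths) as genuine failures rather than expected ones:

    NOT() {
        if "$@"; then
            return 1          # command unexpectedly succeeded
        fi
        return 0              # command failed, which is the expected outcome
    }
    NOT waitforlisten "$pid2" /var/tmp/spdk2.sock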
00:06:23.866 11:29:54 -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:23.866 11:29:54 -- common/autotest_common.sh@10 -- # set +x 00:06:23.866 [2024-05-15 11:29:54.311680] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:06:23.866 [2024-05-15 11:29:54.311742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2899867 ] 00:06:23.866 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.866 [2024-05-15 11:29:54.407054] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2899684 has claimed it. 00:06:23.866 [2024-05-15 11:29:54.407104] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:24.432 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (2899867) - No such process 00:06:24.432 ERROR: process (pid: 2899867) is no longer running 00:06:24.432 11:29:54 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:24.432 11:29:54 -- common/autotest_common.sh@860 -- # return 1 00:06:24.432 11:29:54 -- common/autotest_common.sh@651 -- # es=1 00:06:24.432 11:29:54 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:24.432 11:29:54 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:24.432 11:29:54 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:24.432 11:29:54 -- event/cpu_locks.sh@122 -- # locks_exist 2899684 00:06:24.432 11:29:54 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:24.432 11:29:54 -- event/cpu_locks.sh@22 -- # lslocks -p 2899684 00:06:24.690 lslocks: write error 00:06:24.690 11:29:55 -- event/cpu_locks.sh@124 -- # killprocess 2899684 00:06:24.690 11:29:55 -- common/autotest_common.sh@946 -- # '[' -z 2899684 ']' 00:06:24.690 11:29:55 -- common/autotest_common.sh@950 -- # kill -0 2899684 00:06:24.690 11:29:55 -- common/autotest_common.sh@951 -- # uname 00:06:24.690 11:29:55 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:24.690 11:29:55 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2899684 00:06:24.690 11:29:55 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:24.690 11:29:55 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:24.690 11:29:55 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2899684' 00:06:24.690 killing process with pid 2899684 00:06:24.690 11:29:55 -- common/autotest_common.sh@965 -- # kill 2899684 00:06:24.690 11:29:55 -- common/autotest_common.sh@970 -- # wait 2899684 00:06:25.258 00:06:25.258 real 0m2.373s 00:06:25.258 user 0m2.574s 00:06:25.258 sys 0m0.708s 00:06:25.258 11:29:55 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:25.258 11:29:55 -- common/autotest_common.sh@10 -- # set +x 00:06:25.258 ************************************ 00:06:25.258 END TEST locking_app_on_locked_coremask 00:06:25.258 ************************************ 00:06:25.258 11:29:55 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:25.258 11:29:55 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:25.258 11:29:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:25.258 11:29:55 -- common/autotest_common.sh@10 -- # set +x 00:06:25.258 ************************************ 00:06:25.258 START TEST locking_overlapped_coremask 00:06:25.258 
************************************ 00:06:25.258 11:29:55 -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:06:25.258 11:29:55 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2900075 00:06:25.258 11:29:55 -- event/cpu_locks.sh@133 -- # waitforlisten 2900075 /var/tmp/spdk.sock 00:06:25.258 11:29:55 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:25.258 11:29:55 -- common/autotest_common.sh@827 -- # '[' -z 2900075 ']' 00:06:25.258 11:29:55 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.258 11:29:55 -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:25.258 11:29:55 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.258 11:29:55 -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:25.258 11:29:55 -- common/autotest_common.sh@10 -- # set +x 00:06:25.258 [2024-05-15 11:29:55.922901] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:06:25.258 [2024-05-15 11:29:55.922960] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2900075 ] 00:06:25.258 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.258 [2024-05-15 11:29:55.995989] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:25.517 [2024-05-15 11:29:56.087182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.517 [2024-05-15 11:29:56.087268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:25.517 [2024-05-15 11:29:56.087271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.083 11:29:56 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:26.083 11:29:56 -- common/autotest_common.sh@860 -- # return 0 00:06:26.083 11:29:56 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2900261 00:06:26.083 11:29:56 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2900261 /var/tmp/spdk2.sock 00:06:26.083 11:29:56 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:26.083 11:29:56 -- common/autotest_common.sh@648 -- # local es=0 00:06:26.083 11:29:56 -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2900261 /var/tmp/spdk2.sock 00:06:26.083 11:29:56 -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:26.083 11:29:56 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.083 11:29:56 -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:26.083 11:29:56 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.083 11:29:56 -- common/autotest_common.sh@651 -- # waitforlisten 2900261 /var/tmp/spdk2.sock 00:06:26.083 11:29:56 -- common/autotest_common.sh@827 -- # '[' -z 2900261 ']' 00:06:26.083 11:29:56 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.083 11:29:56 -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:26.083 11:29:56 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:26.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:26.083 11:29:56 -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:26.083 11:29:56 -- common/autotest_common.sh@10 -- # set +x 00:06:26.083 [2024-05-15 11:29:56.776031] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:06:26.083 [2024-05-15 11:29:56.776096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2900261 ] 00:06:26.083 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.340 [2024-05-15 11:29:56.874521] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2900075 has claimed it. 00:06:26.341 [2024-05-15 11:29:56.874566] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:26.907 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (2900261) - No such process 00:06:26.907 ERROR: process (pid: 2900261) is no longer running 00:06:26.907 11:29:57 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:26.907 11:29:57 -- common/autotest_common.sh@860 -- # return 1 00:06:26.907 11:29:57 -- common/autotest_common.sh@651 -- # es=1 00:06:26.907 11:29:57 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:26.907 11:29:57 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:26.907 11:29:57 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:26.907 11:29:57 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:26.907 11:29:57 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:26.907 11:29:57 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:26.907 11:29:57 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:26.907 11:29:57 -- event/cpu_locks.sh@141 -- # killprocess 2900075 00:06:26.907 11:29:57 -- common/autotest_common.sh@946 -- # '[' -z 2900075 ']' 00:06:26.907 11:29:57 -- common/autotest_common.sh@950 -- # kill -0 2900075 00:06:26.907 11:29:57 -- common/autotest_common.sh@951 -- # uname 00:06:26.907 11:29:57 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:26.907 11:29:57 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2900075 00:06:26.907 11:29:57 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:26.907 11:29:57 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:26.907 11:29:57 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2900075' 00:06:26.907 killing process with pid 2900075 00:06:26.907 11:29:57 -- common/autotest_common.sh@965 -- # kill 2900075 00:06:26.907 11:29:57 -- common/autotest_common.sh@970 -- # wait 2900075 00:06:27.166 00:06:27.166 real 0m1.949s 00:06:27.166 user 0m5.308s 00:06:27.166 sys 0m0.478s 00:06:27.166 11:29:57 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:27.166 11:29:57 -- common/autotest_common.sh@10 -- # set +x 00:06:27.166 ************************************ 00:06:27.166 END TEST locking_overlapped_coremask 00:06:27.166 ************************************ 00:06:27.166 11:29:57 -- event/cpu_locks.sh@172 -- # run_test 
locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:27.166 11:29:57 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:27.166 11:29:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:27.166 11:29:57 -- common/autotest_common.sh@10 -- # set +x 00:06:27.166 ************************************ 00:06:27.166 START TEST locking_overlapped_coremask_via_rpc 00:06:27.166 ************************************ 00:06:27.166 11:29:57 -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:06:27.166 11:29:57 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2900469 00:06:27.166 11:29:57 -- event/cpu_locks.sh@149 -- # waitforlisten 2900469 /var/tmp/spdk.sock 00:06:27.166 11:29:57 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:27.166 11:29:57 -- common/autotest_common.sh@827 -- # '[' -z 2900469 ']' 00:06:27.166 11:29:57 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.166 11:29:57 -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:27.166 11:29:57 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.166 11:29:57 -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:27.166 11:29:57 -- common/autotest_common.sh@10 -- # set +x 00:06:27.425 [2024-05-15 11:29:57.961987] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:06:27.425 [2024-05-15 11:29:57.962044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2900469 ] 00:06:27.425 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.425 [2024-05-15 11:29:58.032979] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:27.425 [2024-05-15 11:29:58.033014] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:27.425 [2024-05-15 11:29:58.123604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.425 [2024-05-15 11:29:58.123691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.425 [2024-05-15 11:29:58.123693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.360 11:29:58 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:28.360 11:29:58 -- common/autotest_common.sh@860 -- # return 0 00:06:28.360 11:29:58 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2900490 00:06:28.360 11:29:58 -- event/cpu_locks.sh@153 -- # waitforlisten 2900490 /var/tmp/spdk2.sock 00:06:28.360 11:29:58 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:28.360 11:29:58 -- common/autotest_common.sh@827 -- # '[' -z 2900490 ']' 00:06:28.360 11:29:58 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:28.360 11:29:58 -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:28.360 11:29:58 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:28.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
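Here both targets start on the overlapping masks 0x7 and 0x1c with --disable-cpumask-locks, so neither claims anything at boot; the locks are taken afterwards over RPC, as the records that follow show. The sequence this test drives, with binary and RPC names read off the trace:

    ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    ./scripts/rpc.py framework_enable_cpumask_locks                          # claims cores 0-2
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # fails on shared core 2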
00:06:28.360 11:29:58 -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:28.360 11:29:58 -- common/autotest_common.sh@10 -- # set +x 00:06:28.360 [2024-05-15 11:29:58.812143] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:06:28.360 [2024-05-15 11:29:58.812216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2900490 ] 00:06:28.360 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.360 [2024-05-15 11:29:58.914091] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:28.360 [2024-05-15 11:29:58.914123] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:28.360 [2024-05-15 11:29:59.084331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:28.360 [2024-05-15 11:29:59.088105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.360 [2024-05-15 11:29:59.088106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:28.926 11:29:59 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:28.926 11:29:59 -- common/autotest_common.sh@860 -- # return 0 00:06:28.926 11:29:59 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:28.926 11:29:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.926 11:29:59 -- common/autotest_common.sh@10 -- # set +x 00:06:28.926 11:29:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.926 11:29:59 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:28.926 11:29:59 -- common/autotest_common.sh@648 -- # local es=0 00:06:28.926 11:29:59 -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:28.926 11:29:59 -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:28.926 11:29:59 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:28.926 11:29:59 -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:28.926 11:29:59 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:28.926 11:29:59 -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:28.926 11:29:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.926 11:29:59 -- common/autotest_common.sh@10 -- # set +x 00:06:28.926 [2024-05-15 11:29:59.643130] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2900469 has claimed it. 
00:06:28.926 request: 00:06:28.926 { 00:06:28.926 "method": "framework_enable_cpumask_locks", 00:06:28.926 "req_id": 1 00:06:28.926 } 00:06:28.926 Got JSON-RPC error response 00:06:28.926 response: 00:06:28.926 { 00:06:28.926 "code": -32603, 00:06:28.926 "message": "Failed to claim CPU core: 2" 00:06:28.926 } 00:06:28.926 11:29:59 -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:28.926 11:29:59 -- common/autotest_common.sh@651 -- # es=1 00:06:28.926 11:29:59 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:28.926 11:29:59 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:28.926 11:29:59 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:28.926 11:29:59 -- event/cpu_locks.sh@158 -- # waitforlisten 2900469 /var/tmp/spdk.sock 00:06:28.926 11:29:59 -- common/autotest_common.sh@827 -- # '[' -z 2900469 ']' 00:06:28.926 11:29:59 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.926 11:29:59 -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:28.926 11:29:59 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.926 11:29:59 -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:28.926 11:29:59 -- common/autotest_common.sh@10 -- # set +x 00:06:29.184 11:29:59 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:29.184 11:29:59 -- common/autotest_common.sh@860 -- # return 0 00:06:29.184 11:29:59 -- event/cpu_locks.sh@159 -- # waitforlisten 2900490 /var/tmp/spdk2.sock 00:06:29.184 11:29:59 -- common/autotest_common.sh@827 -- # '[' -z 2900490 ']' 00:06:29.184 11:29:59 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.184 11:29:59 -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:29.184 11:29:59 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
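The request/response pair dumped above is rpc_cmd surfacing the JSON-RPC failure: -32603 is the generic JSON-RPC internal-error code, and the message pinpoints the contended core. Roughly, the assertion the test builds around it:

    if ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
        echo "FAIL: second target claimed an already-locked core" >&2
        exit 1
    fi
    # es=1 here is an ordinary error exit; es > 128 would mean rpc.py was killed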
00:06:29.184 11:29:59 -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:29.184 11:29:59 -- common/autotest_common.sh@10 -- # set +x 00:06:29.442 11:30:00 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:29.442 11:30:00 -- common/autotest_common.sh@860 -- # return 0 00:06:29.442 11:30:00 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:29.442 11:30:00 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:29.442 11:30:00 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:29.442 11:30:00 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:29.442 00:06:29.442 real 0m2.115s 00:06:29.442 user 0m0.826s 00:06:29.442 sys 0m0.222s 00:06:29.442 11:30:00 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:29.442 11:30:00 -- common/autotest_common.sh@10 -- # set +x 00:06:29.442 ************************************ 00:06:29.442 END TEST locking_overlapped_coremask_via_rpc 00:06:29.442 ************************************ 00:06:29.442 11:30:00 -- event/cpu_locks.sh@174 -- # cleanup 00:06:29.442 11:30:00 -- event/cpu_locks.sh@15 -- # [[ -z 2900469 ]] 00:06:29.442 11:30:00 -- event/cpu_locks.sh@15 -- # killprocess 2900469 00:06:29.442 11:30:00 -- common/autotest_common.sh@946 -- # '[' -z 2900469 ']' 00:06:29.442 11:30:00 -- common/autotest_common.sh@950 -- # kill -0 2900469 00:06:29.442 11:30:00 -- common/autotest_common.sh@951 -- # uname 00:06:29.442 11:30:00 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:29.442 11:30:00 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2900469 00:06:29.442 11:30:00 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:29.442 11:30:00 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:29.442 11:30:00 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2900469' 00:06:29.442 killing process with pid 2900469 00:06:29.442 11:30:00 -- common/autotest_common.sh@965 -- # kill 2900469 00:06:29.442 11:30:00 -- common/autotest_common.sh@970 -- # wait 2900469 00:06:30.008 11:30:00 -- event/cpu_locks.sh@16 -- # [[ -z 2900490 ]] 00:06:30.008 11:30:00 -- event/cpu_locks.sh@16 -- # killprocess 2900490 00:06:30.008 11:30:00 -- common/autotest_common.sh@946 -- # '[' -z 2900490 ']' 00:06:30.008 11:30:00 -- common/autotest_common.sh@950 -- # kill -0 2900490 00:06:30.008 11:30:00 -- common/autotest_common.sh@951 -- # uname 00:06:30.008 11:30:00 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:30.008 11:30:00 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2900490 00:06:30.008 11:30:00 -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:30.008 11:30:00 -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:30.008 11:30:00 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2900490' 00:06:30.008 killing process with pid 2900490 00:06:30.008 11:30:00 -- common/autotest_common.sh@965 -- # kill 2900490 00:06:30.008 11:30:00 -- common/autotest_common.sh@970 -- # wait 2900490 00:06:30.268 11:30:00 -- event/cpu_locks.sh@18 -- # rm -f 00:06:30.268 11:30:00 -- event/cpu_locks.sh@1 -- # cleanup 00:06:30.268 11:30:00 -- event/cpu_locks.sh@15 -- # [[ -z 2900469 ]] 00:06:30.268 11:30:00 -- event/cpu_locks.sh@15 -- # killprocess 2900469 
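check_remaining_locks, traced just above, asserts that after the first target's RPC succeeds exactly the lock files for cores 0-2 exist; the glob-versus-brace-expansion comparison reads:

    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]] || exit 1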
00:06:30.268 11:30:00 -- common/autotest_common.sh@946 -- # '[' -z 2900469 ']' 00:06:30.268 11:30:00 -- common/autotest_common.sh@950 -- # kill -0 2900469 00:06:30.268 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (2900469) - No such process 00:06:30.268 11:30:00 -- common/autotest_common.sh@973 -- # echo 'Process with pid 2900469 is not found' 00:06:30.268 Process with pid 2900469 is not found 00:06:30.268 11:30:00 -- event/cpu_locks.sh@16 -- # [[ -z 2900490 ]] 00:06:30.268 11:30:00 -- event/cpu_locks.sh@16 -- # killprocess 2900490 00:06:30.268 11:30:00 -- common/autotest_common.sh@946 -- # '[' -z 2900490 ']' 00:06:30.268 11:30:00 -- common/autotest_common.sh@950 -- # kill -0 2900490 00:06:30.268 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (2900490) - No such process 00:06:30.268 11:30:00 -- common/autotest_common.sh@973 -- # echo 'Process with pid 2900490 is not found' 00:06:30.268 Process with pid 2900490 is not found 00:06:30.268 11:30:00 -- event/cpu_locks.sh@18 -- # rm -f 00:06:30.268 00:06:30.268 real 0m19.027s 00:06:30.268 user 0m30.973s 00:06:30.268 sys 0m6.204s 00:06:30.268 11:30:00 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:30.268 11:30:00 -- common/autotest_common.sh@10 -- # set +x 00:06:30.268 ************************************ 00:06:30.268 END TEST cpu_locks 00:06:30.268 ************************************ 00:06:30.268 00:06:30.268 real 0m45.092s 00:06:30.268 user 1m22.951s 00:06:30.268 sys 0m10.480s 00:06:30.268 11:30:00 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:30.268 11:30:00 -- common/autotest_common.sh@10 -- # set +x 00:06:30.268 ************************************ 00:06:30.268 END TEST event 00:06:30.268 ************************************ 00:06:30.268 11:30:00 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:30.268 11:30:00 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:30.268 11:30:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:30.268 11:30:00 -- common/autotest_common.sh@10 -- # set +x 00:06:30.268 ************************************ 00:06:30.268 START TEST thread 00:06:30.268 ************************************ 00:06:30.268 11:30:01 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:30.526 * Looking for test storage... 00:06:30.526 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:06:30.526 11:30:01 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:30.526 11:30:01 -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:30.526 11:30:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:30.526 11:30:01 -- common/autotest_common.sh@10 -- # set +x 00:06:30.526 ************************************ 00:06:30.526 START TEST thread_poller_perf 00:06:30.526 ************************************ 00:06:30.526 11:30:01 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:30.526 [2024-05-15 11:30:01.184809] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
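With the lock suites done, the thread suite times poller dispatch. The two poller_perf invocations, flags read off the EAL parameter lines (-b pollers registered, -l poller period in microseconds, -t run time in seconds):

    ./test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1   # 1000 timed pollers, 1 us period
    ./test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1   # period 0: pollers run every iteration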
00:06:30.526 [2024-05-15 11:30:01.184897] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2901021 ] 00:06:30.526 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.526 [2024-05-15 11:30:01.258047] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.784 [2024-05-15 11:30:01.342561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.784 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:31.719 ====================================== 00:06:31.719 busy:2305886568 (cyc) 00:06:31.719 total_run_count: 403000 00:06:31.719 tsc_hz: 2300000000 (cyc) 00:06:31.719 ====================================== 00:06:31.719 poller_cost: 5721 (cyc), 2487 (nsec) 00:06:31.719 00:06:31.719 real 0m1.283s 00:06:31.719 user 0m1.185s 00:06:31.719 sys 0m0.092s 00:06:31.719 11:30:02 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:31.719 11:30:02 -- common/autotest_common.sh@10 -- # set +x 00:06:31.719 ************************************ 00:06:31.719 END TEST thread_poller_perf 00:06:31.719 ************************************ 00:06:31.978 11:30:02 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:31.978 11:30:02 -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:31.978 11:30:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:31.978 11:30:02 -- common/autotest_common.sh@10 -- # set +x 00:06:31.978 ************************************ 00:06:31.978 START TEST thread_poller_perf 00:06:31.978 ************************************ 00:06:31.978 11:30:02 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:31.978 [2024-05-15 11:30:02.556557] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:06:31.978 [2024-05-15 11:30:02.556638] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2901277 ] 00:06:31.978 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.978 [2024-05-15 11:30:02.629650] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.978 [2024-05-15 11:30:02.715272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.978 Running 1000 pollers for 1 seconds with 0 microseconds period. 
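poller_cost in these summaries is simply the busy cycle count divided by the run count, converted to nanoseconds with the reported TSC frequency. Reproducing the first run's figures with shell integer arithmetic:

    echo $(( 2305886568 / 403000 ))               # 5721 cycles per poller invocation
    echo $(( 5721 * 1000000000 / 2300000000 ))    # 2487 nsec at tsc_hz = 2.3 GHz

The same arithmetic on the zero-period run that follows, with its much higher run count, yields the 420-cycle / 182-nsec result.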
00:06:33.354 ====================================== 00:06:33.354 busy:2301655950 (cyc) 00:06:33.354 total_run_count: 5473000 00:06:33.354 tsc_hz: 2300000000 (cyc) 00:06:33.354 ====================================== 00:06:33.354 poller_cost: 420 (cyc), 182 (nsec) 00:06:33.354 00:06:33.354 real 0m1.281s 00:06:33.354 user 0m1.187s 00:06:33.354 sys 0m0.087s 00:06:33.354 11:30:03 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:33.354 11:30:03 -- common/autotest_common.sh@10 -- # set +x 00:06:33.354 ************************************ 00:06:33.354 END TEST thread_poller_perf 00:06:33.354 ************************************ 00:06:33.354 11:30:03 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:33.354 00:06:33.354 real 0m2.834s 00:06:33.354 user 0m2.478s 00:06:33.354 sys 0m0.358s 00:06:33.354 11:30:03 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:33.354 11:30:03 -- common/autotest_common.sh@10 -- # set +x 00:06:33.354 ************************************ 00:06:33.354 END TEST thread 00:06:33.354 ************************************ 00:06:33.354 11:30:03 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:06:33.354 11:30:03 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:33.354 11:30:03 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:33.354 11:30:03 -- common/autotest_common.sh@10 -- # set +x 00:06:33.354 ************************************ 00:06:33.354 START TEST accel 00:06:33.354 ************************************ 00:06:33.354 11:30:03 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:06:33.354 * Looking for test storage... 00:06:33.354 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:06:33.354 11:30:04 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:33.354 11:30:04 -- accel/accel.sh@82 -- # get_expected_opcs 00:06:33.354 11:30:04 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:33.354 11:30:04 -- accel/accel.sh@62 -- # spdk_tgt_pid=2901593 00:06:33.354 11:30:04 -- accel/accel.sh@63 -- # waitforlisten 2901593 00:06:33.354 11:30:04 -- common/autotest_common.sh@827 -- # '[' -z 2901593 ']' 00:06:33.354 11:30:04 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.354 11:30:04 -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:33.354 11:30:04 -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:33.354 11:30:04 -- accel/accel.sh@61 -- # build_accel_config 00:06:33.354 11:30:04 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.354 11:30:04 -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:33.354 11:30:04 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.354 11:30:04 -- common/autotest_common.sh@10 -- # set +x 00:06:33.354 11:30:04 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.354 11:30:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.354 11:30:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.354 11:30:04 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.354 11:30:04 -- accel/accel.sh@40 -- # local IFS=, 00:06:33.354 11:30:04 -- accel/accel.sh@41 -- # jq -r . 
00:06:33.354 [2024-05-15 11:30:04.099338] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:06:33.354 [2024-05-15 11:30:04.099405] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2901593 ] 00:06:33.612 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.612 [2024-05-15 11:30:04.170277] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.612 [2024-05-15 11:30:04.254322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.208 11:30:04 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:34.208 11:30:04 -- common/autotest_common.sh@860 -- # return 0 00:06:34.208 11:30:04 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:34.208 11:30:04 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:34.208 11:30:04 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:34.208 11:30:04 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:34.208 11:30:04 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:34.208 11:30:04 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:34.208 11:30:04 -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.208 11:30:04 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:34.208 11:30:04 -- common/autotest_common.sh@10 -- # set +x 00:06:34.208 11:30:04 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.208 11:30:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.208 11:30:04 -- accel/accel.sh@72 -- # IFS== 00:06:34.208 11:30:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:34.208 11:30:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.208 11:30:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.208 11:30:04 -- accel/accel.sh@72 -- # IFS== 00:06:34.208 11:30:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:34.208 11:30:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.208 11:30:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.208 11:30:04 -- accel/accel.sh@72 -- # IFS== 00:06:34.208 11:30:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:34.208 11:30:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.208 11:30:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.208 11:30:04 -- accel/accel.sh@72 -- # IFS== 00:06:34.208 11:30:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:34.208 11:30:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.208 11:30:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.208 11:30:04 -- accel/accel.sh@72 -- # IFS== 00:06:34.208 11:30:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:34.208 11:30:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.208 11:30:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.208 11:30:04 -- accel/accel.sh@72 -- # IFS== 00:06:34.208 11:30:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:34.208 11:30:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.208 11:30:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.208 11:30:04 -- accel/accel.sh@72 -- # IFS== 00:06:34.208 11:30:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:34.208 11:30:04 -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:06:34.208 11:30:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.208 11:30:04 -- accel/accel.sh@72 -- # IFS== 00:06:34.208 11:30:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:34.208 11:30:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.208 11:30:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.208 11:30:04 -- accel/accel.sh@72 -- # IFS== 00:06:34.208 11:30:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:34.208 11:30:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.208 11:30:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.208 11:30:04 -- accel/accel.sh@72 -- # IFS== 00:06:34.208 11:30:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:34.208 11:30:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.208 11:30:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.208 11:30:04 -- accel/accel.sh@72 -- # IFS== 00:06:34.208 11:30:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:34.208 11:30:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.208 11:30:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.208 11:30:04 -- accel/accel.sh@72 -- # IFS== 00:06:34.208 11:30:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:34.208 11:30:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.208 11:30:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.208 11:30:04 -- accel/accel.sh@72 -- # IFS== 00:06:34.208 11:30:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:34.208 11:30:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.208 11:30:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.208 11:30:04 -- accel/accel.sh@72 -- # IFS== 00:06:34.208 11:30:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:34.208 11:30:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.208 11:30:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.208 11:30:04 -- accel/accel.sh@72 -- # IFS== 00:06:34.208 11:30:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:34.208 11:30:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.208 11:30:04 -- accel/accel.sh@75 -- # killprocess 2901593 00:06:34.208 11:30:04 -- common/autotest_common.sh@946 -- # '[' -z 2901593 ']' 00:06:34.208 11:30:04 -- common/autotest_common.sh@950 -- # kill -0 2901593 00:06:34.208 11:30:04 -- common/autotest_common.sh@951 -- # uname 00:06:34.208 11:30:04 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:34.208 11:30:04 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2901593 00:06:34.467 11:30:05 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:34.467 11:30:05 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:34.467 11:30:05 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2901593' 00:06:34.467 killing process with pid 2901593 00:06:34.467 11:30:05 -- common/autotest_common.sh@965 -- # kill 2901593 00:06:34.467 11:30:05 -- common/autotest_common.sh@970 -- # wait 2901593 00:06:34.726 11:30:05 -- accel/accel.sh@76 -- # trap - ERR 00:06:34.726 11:30:05 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:34.726 11:30:05 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:34.726 11:30:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:34.726 11:30:05 -- common/autotest_common.sh@10 -- # set +x 
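get_expected_opcs, traced above, snapshots the opcode-to-module table before the functional tests run; every opcode reports the software engine since no hardware accel module is configured here. The equivalent query by hand, with the RPC name and jq filter taken from the trace:

    ./scripts/rpc.py accel_get_opc_assignments \
        | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    # one opcode=module line per entry, all "software" in this run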
00:06:34.726 11:30:05 -- common/autotest_common.sh@1121 -- # accel_perf -h 00:06:34.726 11:30:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:34.726 11:30:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.726 11:30:05 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.726 11:30:05 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.726 11:30:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.726 11:30:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.726 11:30:05 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.726 11:30:05 -- accel/accel.sh@40 -- # local IFS=, 00:06:34.726 11:30:05 -- accel/accel.sh@41 -- # jq -r . 00:06:34.726 11:30:05 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:34.726 11:30:05 -- common/autotest_common.sh@10 -- # set +x 00:06:34.726 11:30:05 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:34.726 11:30:05 -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:34.726 11:30:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:34.726 11:30:05 -- common/autotest_common.sh@10 -- # set +x 00:06:34.985 ************************************ 00:06:34.985 START TEST accel_missing_filename 00:06:34.985 ************************************ 00:06:34.985 11:30:05 -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:06:34.985 11:30:05 -- common/autotest_common.sh@648 -- # local es=0 00:06:34.985 11:30:05 -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:34.985 11:30:05 -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:34.985 11:30:05 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.985 11:30:05 -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:34.985 11:30:05 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.985 11:30:05 -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:34.985 11:30:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:34.985 11:30:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.985 11:30:05 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.985 11:30:05 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.985 11:30:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.985 11:30:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.985 11:30:05 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.985 11:30:05 -- accel/accel.sh@40 -- # local IFS=, 00:06:34.985 11:30:05 -- accel/accel.sh@41 -- # jq -r . 00:06:34.985 [2024-05-15 11:30:05.565154] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
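accel_missing_filename and accel_compress_verify are negative tests against the accel_perf example binary: compress needs an input file, and compression offers no verify pass. The two invocations exercised next, with paths and flags as they appear in the trace:

    ./build/examples/accel_perf -t 1 -w compress              # no -l <file>: "A filename is required."
    ./build/examples/accel_perf -t 1 -w compress \
        -l test/accel/bib -y                                  # -y verify: unsupported for compress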
00:06:34.985 [2024-05-15 11:30:05.565219] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2902089 ] 00:06:34.985 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.985 [2024-05-15 11:30:05.640122] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.985 [2024-05-15 11:30:05.725244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.244 [2024-05-15 11:30:05.766236] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:35.244 [2024-05-15 11:30:05.824886] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:35.244 A filename is required. 00:06:35.244 11:30:05 -- common/autotest_common.sh@651 -- # es=234 00:06:35.244 11:30:05 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:35.244 11:30:05 -- common/autotest_common.sh@660 -- # es=106 00:06:35.244 11:30:05 -- common/autotest_common.sh@661 -- # case "$es" in 00:06:35.244 11:30:05 -- common/autotest_common.sh@668 -- # es=1 00:06:35.244 11:30:05 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:35.244 00:06:35.244 real 0m0.389s 00:06:35.244 user 0m0.286s 00:06:35.244 sys 0m0.141s 00:06:35.244 11:30:05 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:35.244 11:30:05 -- common/autotest_common.sh@10 -- # set +x 00:06:35.244 ************************************ 00:06:35.244 END TEST accel_missing_filename 00:06:35.244 ************************************ 00:06:35.244 11:30:05 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:35.244 11:30:05 -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:35.244 11:30:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:35.244 11:30:05 -- common/autotest_common.sh@10 -- # set +x 00:06:35.244 ************************************ 00:06:35.244 START TEST accel_compress_verify 00:06:35.244 ************************************ 00:06:35.244 11:30:06 -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:35.244 11:30:06 -- common/autotest_common.sh@648 -- # local es=0 00:06:35.244 11:30:06 -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:35.244 11:30:06 -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:35.244 11:30:06 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:35.244 11:30:06 -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:35.244 11:30:06 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:35.244 11:30:06 -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:35.503 11:30:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:35.503 11:30:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.503 11:30:06 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.503 11:30:06 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.503 11:30:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.503 11:30:06 -- accel/accel.sh@34 
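# Sketch: accel_missing_filename above exercises the failure path just traced: compress
# with no "-l <input>" aborts with "A filename is required." and a non-zero exit code,
# which the NOT wrapper turns into a test pass. Standalone repro, binary path as used here:
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w compress || echo "failed as expected"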
-- # [[ 0 -gt 0 ]] 00:06:35.503 11:30:06 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.503 11:30:06 -- accel/accel.sh@40 -- # local IFS=, 00:06:35.503 11:30:06 -- accel/accel.sh@41 -- # jq -r . 00:06:35.503 [2024-05-15 11:30:06.034427] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:06:35.503 [2024-05-15 11:30:06.034508] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2902300 ] 00:06:35.503 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.503 [2024-05-15 11:30:06.109101] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.503 [2024-05-15 11:30:06.193190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.503 [2024-05-15 11:30:06.238390] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:35.761 [2024-05-15 11:30:06.307750] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:35.761 00:06:35.761 Compression does not support the verify option, aborting. 00:06:35.761 11:30:06 -- common/autotest_common.sh@651 -- # es=161 00:06:35.761 11:30:06 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:35.761 11:30:06 -- common/autotest_common.sh@660 -- # es=33 00:06:35.761 11:30:06 -- common/autotest_common.sh@661 -- # case "$es" in 00:06:35.761 11:30:06 -- common/autotest_common.sh@668 -- # es=1 00:06:35.762 11:30:06 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:35.762 00:06:35.762 real 0m0.408s 00:06:35.762 user 0m0.301s 00:06:35.762 sys 0m0.143s 00:06:35.762 11:30:06 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:35.762 11:30:06 -- common/autotest_common.sh@10 -- # set +x 00:06:35.762 ************************************ 00:06:35.762 END TEST accel_compress_verify 00:06:35.762 ************************************ 00:06:35.762 11:30:06 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:35.762 11:30:06 -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:35.762 11:30:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:35.762 11:30:06 -- common/autotest_common.sh@10 -- # set +x 00:06:35.762 ************************************ 00:06:35.762 START TEST accel_wrong_workload 00:06:35.762 ************************************ 00:06:35.762 11:30:06 -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:06:35.762 11:30:06 -- common/autotest_common.sh@648 -- # local es=0 00:06:35.762 11:30:06 -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:35.762 11:30:06 -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:35.762 11:30:06 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:35.762 11:30:06 -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:35.762 11:30:06 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:35.762 11:30:06 -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:35.762 11:30:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:35.762 11:30:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.762 11:30:06 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.762 11:30:06 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.762 11:30:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.762 11:30:06 -- 
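# Sketch: accel_compress_verify pairs "-y" (verify) with the compress workload, which
# accel_perf rejects ("Compression does not support the verify option, aborting."):
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w compress \
    -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y || echo "rejected as expected"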
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.762 11:30:06 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.762 11:30:06 -- accel/accel.sh@40 -- # local IFS=, 00:06:35.762 11:30:06 -- accel/accel.sh@41 -- # jq -r . 00:06:36.020 Unsupported workload type: foobar 00:06:36.021 [2024-05-15 11:30:06.532290] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:36.021 accel_perf options: 00:06:36.021 [-h help message] 00:06:36.021 [-q queue depth per core] 00:06:36.021 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:36.021 [-T number of threads per core 00:06:36.021 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:36.021 [-t time in seconds] 00:06:36.021 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:36.021 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:36.021 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:36.021 [-l for compress/decompress workloads, name of uncompressed input file 00:06:36.021 [-S for crc32c workload, use this seed value (default 0) 00:06:36.021 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:36.021 [-f for fill workload, use this BYTE value (default 255) 00:06:36.021 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:36.021 [-y verify result if this switch is on] 00:06:36.021 [-a tasks to allocate per core (default: same value as -q)] 00:06:36.021 Can be used to spread operations across a wider range of memory. 00:06:36.021 11:30:06 -- common/autotest_common.sh@651 -- # es=1 00:06:36.021 11:30:06 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:36.021 11:30:06 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:36.021 11:30:06 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:36.021 00:06:36.021 real 0m0.036s 00:06:36.021 user 0m0.018s 00:06:36.021 sys 0m0.018s 00:06:36.021 11:30:06 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:36.021 11:30:06 -- common/autotest_common.sh@10 -- # set +x 00:06:36.021 ************************************ 00:06:36.021 END TEST accel_wrong_workload 00:06:36.021 ************************************ 00:06:36.021 Error: writing output failed: Broken pipe 00:06:36.021 11:30:06 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:36.021 11:30:06 -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:36.021 11:30:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:36.021 11:30:06 -- common/autotest_common.sh@10 -- # set +x 00:06:36.021 ************************************ 00:06:36.021 START TEST accel_negative_buffers 00:06:36.021 ************************************ 00:06:36.021 11:30:06 -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:36.021 11:30:06 -- common/autotest_common.sh@648 -- # local es=0 00:06:36.021 11:30:06 -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:36.021 11:30:06 -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:36.021 11:30:06 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:36.021 11:30:06 -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:36.021 11:30:06 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:36.021 
11:30:06 -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:36.021 11:30:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:36.021 11:30:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.021 11:30:06 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.021 11:30:06 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.021 11:30:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.021 11:30:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.021 11:30:06 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.021 11:30:06 -- accel/accel.sh@40 -- # local IFS=, 00:06:36.021 11:30:06 -- accel/accel.sh@41 -- # jq -r . 00:06:36.021 -x option must be non-negative. 00:06:36.021 [2024-05-15 11:30:06.651249] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:36.021 accel_perf options: 00:06:36.021 [-h help message] 00:06:36.021 [-q queue depth per core] 00:06:36.021 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:36.021 [-T number of threads per core 00:06:36.021 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:36.021 [-t time in seconds] 00:06:36.021 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:36.021 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:36.021 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:36.021 [-l for compress/decompress workloads, name of uncompressed input file 00:06:36.021 [-S for crc32c workload, use this seed value (default 0) 00:06:36.021 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:36.021 [-f for fill workload, use this BYTE value (default 255) 00:06:36.021 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:36.021 [-y verify result if this switch is on] 00:06:36.021 [-a tasks to allocate per core (default: same value as -q)] 00:06:36.021 Can be used to spread operations across a wider range of memory. 
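# Sketch: the two option-parser tests above fail before any I/O is queued. Per the usage
# text just printed, -w selects the workload and -x (xor source buffers) has a minimum
# of 2, so both invocations must exit non-zero:
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w foobar       # "Unsupported workload type: foobar"
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x -1  # "-x option must be non-negative."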
00:06:36.021 11:30:06 -- common/autotest_common.sh@651 -- # es=1 00:06:36.021 11:30:06 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:36.021 11:30:06 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:36.021 11:30:06 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:36.021 00:06:36.021 real 0m0.038s 00:06:36.021 user 0m0.023s 00:06:36.021 sys 0m0.015s 00:06:36.021 11:30:06 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:36.021 11:30:06 -- common/autotest_common.sh@10 -- # set +x 00:06:36.021 ************************************ 00:06:36.021 END TEST accel_negative_buffers 00:06:36.021 ************************************ 00:06:36.021 Error: writing output failed: Broken pipe 00:06:36.021 11:30:06 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:36.021 11:30:06 -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:36.021 11:30:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:36.021 11:30:06 -- common/autotest_common.sh@10 -- # set +x 00:06:36.021 ************************************ 00:06:36.021 START TEST accel_crc32c 00:06:36.021 ************************************ 00:06:36.021 11:30:06 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:36.021 11:30:06 -- accel/accel.sh@16 -- # local accel_opc 00:06:36.021 11:30:06 -- accel/accel.sh@17 -- # local accel_module 00:06:36.021 11:30:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.021 11:30:06 -- accel/accel.sh@19 -- # read -r var val 00:06:36.021 11:30:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:36.021 11:30:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:36.021 11:30:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.021 11:30:06 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.021 11:30:06 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.021 11:30:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.021 11:30:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.021 11:30:06 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.021 11:30:06 -- accel/accel.sh@40 -- # local IFS=, 00:06:36.021 11:30:06 -- accel/accel.sh@41 -- # jq -r . 00:06:36.021 [2024-05-15 11:30:06.774346] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
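# Sketch: accel_crc32c, starting above, is the first positive test in this block; it runs
# the software crc32c path for one second with seed 32 and result verification enabled.
# Stripped of the harness's config fd, the run reduces to:
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y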
00:06:36.021 [2024-05-15 11:30:06.774406] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2902370 ] 00:06:36.279 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.279 [2024-05-15 11:30:06.847949] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.279 [2024-05-15 11:30:06.940115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.279 11:30:06 -- accel/accel.sh@20 -- # val= 00:06:36.279 11:30:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.279 11:30:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.279 11:30:06 -- accel/accel.sh@19 -- # read -r var val 00:06:36.279 11:30:06 -- accel/accel.sh@20 -- # val= 00:06:36.279 11:30:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.279 11:30:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.279 11:30:06 -- accel/accel.sh@19 -- # read -r var val 00:06:36.279 11:30:06 -- accel/accel.sh@20 -- # val=0x1 00:06:36.279 11:30:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.279 11:30:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.279 11:30:06 -- accel/accel.sh@19 -- # read -r var val 00:06:36.279 11:30:06 -- accel/accel.sh@20 -- # val= 00:06:36.279 11:30:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.279 11:30:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.279 11:30:06 -- accel/accel.sh@19 -- # read -r var val 00:06:36.279 11:30:06 -- accel/accel.sh@20 -- # val= 00:06:36.279 11:30:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.279 11:30:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.279 11:30:06 -- accel/accel.sh@19 -- # read -r var val 00:06:36.279 11:30:06 -- accel/accel.sh@20 -- # val=crc32c 00:06:36.279 11:30:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.279 11:30:06 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:36.279 11:30:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.279 11:30:06 -- accel/accel.sh@19 -- # read -r var val 00:06:36.279 11:30:06 -- accel/accel.sh@20 -- # val=32 00:06:36.279 11:30:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.279 11:30:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.279 11:30:06 -- accel/accel.sh@19 -- # read -r var val 00:06:36.279 11:30:06 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.279 11:30:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.279 11:30:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.279 11:30:06 -- accel/accel.sh@19 -- # read -r var val 00:06:36.279 11:30:06 -- accel/accel.sh@20 -- # val= 00:06:36.279 11:30:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.279 11:30:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.279 11:30:06 -- accel/accel.sh@19 -- # read -r var val 00:06:36.279 11:30:06 -- accel/accel.sh@20 -- # val=software 00:06:36.279 11:30:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.279 11:30:06 -- accel/accel.sh@22 -- # accel_module=software 00:06:36.279 11:30:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.279 11:30:06 -- accel/accel.sh@19 -- # read -r var val 00:06:36.279 11:30:06 -- accel/accel.sh@20 -- # val=32 00:06:36.279 11:30:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.279 11:30:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.279 11:30:06 -- accel/accel.sh@19 -- # read -r var val 00:06:36.279 11:30:06 -- accel/accel.sh@20 -- # val=32 00:06:36.279 11:30:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.279 11:30:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.279 11:30:06 -- accel/accel.sh@19 -- # read -r var val 00:06:36.279 11:30:07 -- 
accel/accel.sh@20 -- # val=1 00:06:36.279 11:30:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.279 11:30:07 -- accel/accel.sh@19 -- # IFS=: 00:06:36.279 11:30:07 -- accel/accel.sh@19 -- # read -r var val 00:06:36.279 11:30:07 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.279 11:30:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.279 11:30:07 -- accel/accel.sh@19 -- # IFS=: 00:06:36.279 11:30:07 -- accel/accel.sh@19 -- # read -r var val 00:06:36.279 11:30:07 -- accel/accel.sh@20 -- # val=Yes 00:06:36.279 11:30:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.279 11:30:07 -- accel/accel.sh@19 -- # IFS=: 00:06:36.279 11:30:07 -- accel/accel.sh@19 -- # read -r var val 00:06:36.279 11:30:07 -- accel/accel.sh@20 -- # val= 00:06:36.279 11:30:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.279 11:30:07 -- accel/accel.sh@19 -- # IFS=: 00:06:36.279 11:30:07 -- accel/accel.sh@19 -- # read -r var val 00:06:36.279 11:30:07 -- accel/accel.sh@20 -- # val= 00:06:36.279 11:30:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.279 11:30:07 -- accel/accel.sh@19 -- # IFS=: 00:06:36.279 11:30:07 -- accel/accel.sh@19 -- # read -r var val 00:06:37.655 11:30:08 -- accel/accel.sh@20 -- # val= 00:06:37.655 11:30:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.655 11:30:08 -- accel/accel.sh@19 -- # IFS=: 00:06:37.655 11:30:08 -- accel/accel.sh@19 -- # read -r var val 00:06:37.655 11:30:08 -- accel/accel.sh@20 -- # val= 00:06:37.655 11:30:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.655 11:30:08 -- accel/accel.sh@19 -- # IFS=: 00:06:37.655 11:30:08 -- accel/accel.sh@19 -- # read -r var val 00:06:37.655 11:30:08 -- accel/accel.sh@20 -- # val= 00:06:37.655 11:30:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.655 11:30:08 -- accel/accel.sh@19 -- # IFS=: 00:06:37.655 11:30:08 -- accel/accel.sh@19 -- # read -r var val 00:06:37.655 11:30:08 -- accel/accel.sh@20 -- # val= 00:06:37.655 11:30:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.655 11:30:08 -- accel/accel.sh@19 -- # IFS=: 00:06:37.655 11:30:08 -- accel/accel.sh@19 -- # read -r var val 00:06:37.655 11:30:08 -- accel/accel.sh@20 -- # val= 00:06:37.655 11:30:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.655 11:30:08 -- accel/accel.sh@19 -- # IFS=: 00:06:37.655 11:30:08 -- accel/accel.sh@19 -- # read -r var val 00:06:37.655 11:30:08 -- accel/accel.sh@20 -- # val= 00:06:37.655 11:30:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.655 11:30:08 -- accel/accel.sh@19 -- # IFS=: 00:06:37.655 11:30:08 -- accel/accel.sh@19 -- # read -r var val 00:06:37.655 11:30:08 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.656 11:30:08 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:37.656 11:30:08 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.656 00:06:37.656 real 0m1.424s 00:06:37.656 user 0m1.285s 00:06:37.656 sys 0m0.154s 00:06:37.656 11:30:08 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:37.656 11:30:08 -- common/autotest_common.sh@10 -- # set +x 00:06:37.656 ************************************ 00:06:37.656 END TEST accel_crc32c 00:06:37.656 ************************************ 00:06:37.656 11:30:08 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:37.656 11:30:08 -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:37.656 11:30:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.656 11:30:08 -- common/autotest_common.sh@10 -- # set +x 00:06:37.656 ************************************ 00:06:37.656 START TEST 
accel_crc32c_C2 00:06:37.656 ************************************ 00:06:37.656 11:30:08 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:37.656 11:30:08 -- accel/accel.sh@16 -- # local accel_opc 00:06:37.656 11:30:08 -- accel/accel.sh@17 -- # local accel_module 00:06:37.656 11:30:08 -- accel/accel.sh@19 -- # IFS=: 00:06:37.656 11:30:08 -- accel/accel.sh@19 -- # read -r var val 00:06:37.656 11:30:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:37.656 11:30:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:37.656 11:30:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.656 11:30:08 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.656 11:30:08 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.656 11:30:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.656 11:30:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.656 11:30:08 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.656 11:30:08 -- accel/accel.sh@40 -- # local IFS=, 00:06:37.656 11:30:08 -- accel/accel.sh@41 -- # jq -r . 00:06:37.656 [2024-05-15 11:30:08.287973] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:06:37.656 [2024-05-15 11:30:08.288053] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2902571 ] 00:06:37.656 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.656 [2024-05-15 11:30:08.360724] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.914 [2024-05-15 11:30:08.453528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.914 11:30:08 -- accel/accel.sh@20 -- # val= 00:06:37.914 11:30:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.914 11:30:08 -- accel/accel.sh@19 -- # IFS=: 00:06:37.915 11:30:08 -- accel/accel.sh@19 -- # read -r var val 00:06:37.915 11:30:08 -- accel/accel.sh@20 -- # val= 00:06:37.915 11:30:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.915 11:30:08 -- accel/accel.sh@19 -- # IFS=: 00:06:37.915 11:30:08 -- accel/accel.sh@19 -- # read -r var val 00:06:37.915 11:30:08 -- accel/accel.sh@20 -- # val=0x1 00:06:37.915 11:30:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.915 11:30:08 -- accel/accel.sh@19 -- # IFS=: 00:06:37.915 11:30:08 -- accel/accel.sh@19 -- # read -r var val 00:06:37.915 11:30:08 -- accel/accel.sh@20 -- # val= 00:06:37.915 11:30:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.915 11:30:08 -- accel/accel.sh@19 -- # IFS=: 00:06:37.915 11:30:08 -- accel/accel.sh@19 -- # read -r var val 00:06:37.915 11:30:08 -- accel/accel.sh@20 -- # val= 00:06:37.915 11:30:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.915 11:30:08 -- accel/accel.sh@19 -- # IFS=: 00:06:37.915 11:30:08 -- accel/accel.sh@19 -- # read -r var val 00:06:37.915 11:30:08 -- accel/accel.sh@20 -- # val=crc32c 00:06:37.915 11:30:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.915 11:30:08 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:37.915 11:30:08 -- accel/accel.sh@19 -- # IFS=: 00:06:37.915 11:30:08 -- accel/accel.sh@19 -- # read -r var val 00:06:37.915 11:30:08 -- accel/accel.sh@20 -- # val=0 00:06:37.915 11:30:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.915 11:30:08 -- accel/accel.sh@19 -- # IFS=: 00:06:37.915 11:30:08 -- accel/accel.sh@19 -- # read -r var val 00:06:37.915 11:30:08 -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:06:37.915 11:30:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.915 11:30:08 -- accel/accel.sh@19 -- # IFS=: 00:06:37.915 11:30:08 -- accel/accel.sh@19 -- # read -r var val 00:06:37.915 11:30:08 -- accel/accel.sh@20 -- # val= 00:06:37.915 11:30:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.915 11:30:08 -- accel/accel.sh@19 -- # IFS=: 00:06:37.915 11:30:08 -- accel/accel.sh@19 -- # read -r var val 00:06:37.915 11:30:08 -- accel/accel.sh@20 -- # val=software 00:06:37.915 11:30:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.915 11:30:08 -- accel/accel.sh@22 -- # accel_module=software 00:06:37.915 11:30:08 -- accel/accel.sh@19 -- # IFS=: 00:06:37.915 11:30:08 -- accel/accel.sh@19 -- # read -r var val 00:06:37.915 11:30:08 -- accel/accel.sh@20 -- # val=32 00:06:37.915 11:30:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.915 11:30:08 -- accel/accel.sh@19 -- # IFS=: 00:06:37.915 11:30:08 -- accel/accel.sh@19 -- # read -r var val 00:06:37.915 11:30:08 -- accel/accel.sh@20 -- # val=32 00:06:37.915 11:30:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.915 11:30:08 -- accel/accel.sh@19 -- # IFS=: 00:06:37.915 11:30:08 -- accel/accel.sh@19 -- # read -r var val 00:06:37.915 11:30:08 -- accel/accel.sh@20 -- # val=1 00:06:37.915 11:30:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.915 11:30:08 -- accel/accel.sh@19 -- # IFS=: 00:06:37.915 11:30:08 -- accel/accel.sh@19 -- # read -r var val 00:06:37.915 11:30:08 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:37.915 11:30:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.915 11:30:08 -- accel/accel.sh@19 -- # IFS=: 00:06:37.915 11:30:08 -- accel/accel.sh@19 -- # read -r var val 00:06:37.915 11:30:08 -- accel/accel.sh@20 -- # val=Yes 00:06:37.915 11:30:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.915 11:30:08 -- accel/accel.sh@19 -- # IFS=: 00:06:37.915 11:30:08 -- accel/accel.sh@19 -- # read -r var val 00:06:37.915 11:30:08 -- accel/accel.sh@20 -- # val= 00:06:37.915 11:30:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.915 11:30:08 -- accel/accel.sh@19 -- # IFS=: 00:06:37.915 11:30:08 -- accel/accel.sh@19 -- # read -r var val 00:06:37.915 11:30:08 -- accel/accel.sh@20 -- # val= 00:06:37.915 11:30:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.915 11:30:08 -- accel/accel.sh@19 -- # IFS=: 00:06:37.915 11:30:08 -- accel/accel.sh@19 -- # read -r var val 00:06:39.289 11:30:09 -- accel/accel.sh@20 -- # val= 00:06:39.289 11:30:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.289 11:30:09 -- accel/accel.sh@19 -- # IFS=: 00:06:39.289 11:30:09 -- accel/accel.sh@19 -- # read -r var val 00:06:39.289 11:30:09 -- accel/accel.sh@20 -- # val= 00:06:39.289 11:30:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.289 11:30:09 -- accel/accel.sh@19 -- # IFS=: 00:06:39.289 11:30:09 -- accel/accel.sh@19 -- # read -r var val 00:06:39.289 11:30:09 -- accel/accel.sh@20 -- # val= 00:06:39.289 11:30:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.289 11:30:09 -- accel/accel.sh@19 -- # IFS=: 00:06:39.289 11:30:09 -- accel/accel.sh@19 -- # read -r var val 00:06:39.289 11:30:09 -- accel/accel.sh@20 -- # val= 00:06:39.289 11:30:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.289 11:30:09 -- accel/accel.sh@19 -- # IFS=: 00:06:39.289 11:30:09 -- accel/accel.sh@19 -- # read -r var val 00:06:39.289 11:30:09 -- accel/accel.sh@20 -- # val= 00:06:39.289 11:30:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.289 11:30:09 -- accel/accel.sh@19 -- # IFS=: 00:06:39.289 11:30:09 -- 
accel/accel.sh@19 -- # read -r var val 00:06:39.289 11:30:09 -- accel/accel.sh@20 -- # val= 00:06:39.289 11:30:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.289 11:30:09 -- accel/accel.sh@19 -- # IFS=: 00:06:39.289 11:30:09 -- accel/accel.sh@19 -- # read -r var val 00:06:39.289 11:30:09 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.289 11:30:09 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:39.289 11:30:09 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.289 00:06:39.289 real 0m1.419s 00:06:39.289 user 0m1.293s 00:06:39.289 sys 0m0.139s 00:06:39.289 11:30:09 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:39.289 11:30:09 -- common/autotest_common.sh@10 -- # set +x 00:06:39.289 ************************************ 00:06:39.289 END TEST accel_crc32c_C2 00:06:39.289 ************************************ 00:06:39.289 11:30:09 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:39.289 11:30:09 -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:39.289 11:30:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:39.289 11:30:09 -- common/autotest_common.sh@10 -- # set +x 00:06:39.289 ************************************ 00:06:39.289 START TEST accel_copy 00:06:39.289 ************************************ 00:06:39.289 11:30:09 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:06:39.289 11:30:09 -- accel/accel.sh@16 -- # local accel_opc 00:06:39.289 11:30:09 -- accel/accel.sh@17 -- # local accel_module 00:06:39.289 11:30:09 -- accel/accel.sh@19 -- # IFS=: 00:06:39.289 11:30:09 -- accel/accel.sh@19 -- # read -r var val 00:06:39.289 11:30:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:39.289 11:30:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:39.289 11:30:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.289 11:30:09 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.289 11:30:09 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.289 11:30:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.289 11:30:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.289 11:30:09 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.289 11:30:09 -- accel/accel.sh@40 -- # local IFS=, 00:06:39.289 11:30:09 -- accel/accel.sh@41 -- # jq -r . 00:06:39.289 [2024-05-15 11:30:09.793916] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
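# Sketch: accel_crc32c_C2, which just passed, repeats the crc32c run with "-C 2"; per the
# usage text earlier in this log, -C sets the io vector size to test, so the 4096-byte
# transfer is exercised through a 2-element io vector:
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w crc32c -y -C 2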
00:06:39.289 [2024-05-15 11:30:09.793976] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2902805 ] 00:06:39.289 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.289 [2024-05-15 11:30:09.863471] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.289 [2024-05-15 11:30:09.947996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.289 11:30:09 -- accel/accel.sh@20 -- # val= 00:06:39.289 11:30:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.289 11:30:09 -- accel/accel.sh@19 -- # IFS=: 00:06:39.289 11:30:09 -- accel/accel.sh@19 -- # read -r var val 00:06:39.289 11:30:09 -- accel/accel.sh@20 -- # val= 00:06:39.289 11:30:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.289 11:30:09 -- accel/accel.sh@19 -- # IFS=: 00:06:39.289 11:30:09 -- accel/accel.sh@19 -- # read -r var val 00:06:39.289 11:30:09 -- accel/accel.sh@20 -- # val=0x1 00:06:39.289 11:30:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.289 11:30:09 -- accel/accel.sh@19 -- # IFS=: 00:06:39.289 11:30:09 -- accel/accel.sh@19 -- # read -r var val 00:06:39.289 11:30:09 -- accel/accel.sh@20 -- # val= 00:06:39.289 11:30:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.289 11:30:10 -- accel/accel.sh@19 -- # IFS=: 00:06:39.289 11:30:10 -- accel/accel.sh@19 -- # read -r var val 00:06:39.289 11:30:10 -- accel/accel.sh@20 -- # val= 00:06:39.289 11:30:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.289 11:30:10 -- accel/accel.sh@19 -- # IFS=: 00:06:39.290 11:30:10 -- accel/accel.sh@19 -- # read -r var val 00:06:39.290 11:30:10 -- accel/accel.sh@20 -- # val=copy 00:06:39.290 11:30:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.290 11:30:10 -- accel/accel.sh@23 -- # accel_opc=copy 00:06:39.290 11:30:10 -- accel/accel.sh@19 -- # IFS=: 00:06:39.290 11:30:10 -- accel/accel.sh@19 -- # read -r var val 00:06:39.290 11:30:10 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.290 11:30:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.290 11:30:10 -- accel/accel.sh@19 -- # IFS=: 00:06:39.290 11:30:10 -- accel/accel.sh@19 -- # read -r var val 00:06:39.290 11:30:10 -- accel/accel.sh@20 -- # val= 00:06:39.290 11:30:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.290 11:30:10 -- accel/accel.sh@19 -- # IFS=: 00:06:39.290 11:30:10 -- accel/accel.sh@19 -- # read -r var val 00:06:39.290 11:30:10 -- accel/accel.sh@20 -- # val=software 00:06:39.290 11:30:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.290 11:30:10 -- accel/accel.sh@22 -- # accel_module=software 00:06:39.290 11:30:10 -- accel/accel.sh@19 -- # IFS=: 00:06:39.290 11:30:10 -- accel/accel.sh@19 -- # read -r var val 00:06:39.290 11:30:10 -- accel/accel.sh@20 -- # val=32 00:06:39.290 11:30:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.290 11:30:10 -- accel/accel.sh@19 -- # IFS=: 00:06:39.290 11:30:10 -- accel/accel.sh@19 -- # read -r var val 00:06:39.290 11:30:10 -- accel/accel.sh@20 -- # val=32 00:06:39.290 11:30:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.290 11:30:10 -- accel/accel.sh@19 -- # IFS=: 00:06:39.290 11:30:10 -- accel/accel.sh@19 -- # read -r var val 00:06:39.290 11:30:10 -- accel/accel.sh@20 -- # val=1 00:06:39.290 11:30:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.290 11:30:10 -- accel/accel.sh@19 -- # IFS=: 00:06:39.290 11:30:10 -- accel/accel.sh@19 -- # read -r var val 00:06:39.290 11:30:10 -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:39.290 11:30:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.290 11:30:10 -- accel/accel.sh@19 -- # IFS=: 00:06:39.290 11:30:10 -- accel/accel.sh@19 -- # read -r var val 00:06:39.290 11:30:10 -- accel/accel.sh@20 -- # val=Yes 00:06:39.290 11:30:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.290 11:30:10 -- accel/accel.sh@19 -- # IFS=: 00:06:39.290 11:30:10 -- accel/accel.sh@19 -- # read -r var val 00:06:39.290 11:30:10 -- accel/accel.sh@20 -- # val= 00:06:39.290 11:30:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.290 11:30:10 -- accel/accel.sh@19 -- # IFS=: 00:06:39.290 11:30:10 -- accel/accel.sh@19 -- # read -r var val 00:06:39.290 11:30:10 -- accel/accel.sh@20 -- # val= 00:06:39.290 11:30:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.290 11:30:10 -- accel/accel.sh@19 -- # IFS=: 00:06:39.290 11:30:10 -- accel/accel.sh@19 -- # read -r var val 00:06:40.668 11:30:11 -- accel/accel.sh@20 -- # val= 00:06:40.668 11:30:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.668 11:30:11 -- accel/accel.sh@19 -- # IFS=: 00:06:40.668 11:30:11 -- accel/accel.sh@19 -- # read -r var val 00:06:40.668 11:30:11 -- accel/accel.sh@20 -- # val= 00:06:40.668 11:30:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.668 11:30:11 -- accel/accel.sh@19 -- # IFS=: 00:06:40.668 11:30:11 -- accel/accel.sh@19 -- # read -r var val 00:06:40.668 11:30:11 -- accel/accel.sh@20 -- # val= 00:06:40.668 11:30:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.668 11:30:11 -- accel/accel.sh@19 -- # IFS=: 00:06:40.668 11:30:11 -- accel/accel.sh@19 -- # read -r var val 00:06:40.668 11:30:11 -- accel/accel.sh@20 -- # val= 00:06:40.668 11:30:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.668 11:30:11 -- accel/accel.sh@19 -- # IFS=: 00:06:40.668 11:30:11 -- accel/accel.sh@19 -- # read -r var val 00:06:40.668 11:30:11 -- accel/accel.sh@20 -- # val= 00:06:40.668 11:30:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.668 11:30:11 -- accel/accel.sh@19 -- # IFS=: 00:06:40.668 11:30:11 -- accel/accel.sh@19 -- # read -r var val 00:06:40.668 11:30:11 -- accel/accel.sh@20 -- # val= 00:06:40.668 11:30:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.668 11:30:11 -- accel/accel.sh@19 -- # IFS=: 00:06:40.668 11:30:11 -- accel/accel.sh@19 -- # read -r var val 00:06:40.668 11:30:11 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.668 11:30:11 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:40.668 11:30:11 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.668 00:06:40.668 real 0m1.398s 00:06:40.668 user 0m1.268s 00:06:40.668 sys 0m0.144s 00:06:40.668 11:30:11 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:40.668 11:30:11 -- common/autotest_common.sh@10 -- # set +x 00:06:40.668 ************************************ 00:06:40.668 END TEST accel_copy 00:06:40.668 ************************************ 00:06:40.668 11:30:11 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:40.668 11:30:11 -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:40.668 11:30:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:40.668 11:30:11 -- common/autotest_common.sh@10 -- # set +x 00:06:40.668 ************************************ 00:06:40.668 START TEST accel_fill 00:06:40.668 ************************************ 00:06:40.668 11:30:11 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:40.668 11:30:11 -- accel/accel.sh@16 -- # local accel_opc 
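# Sketch: accel_copy above passed in ~1.40s of wall time on the software module; without
# the harness's config fd the run reduces to:
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w copy -y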
00:06:40.668 11:30:11 -- accel/accel.sh@17 -- # local accel_module 00:06:40.668 11:30:11 -- accel/accel.sh@19 -- # IFS=: 00:06:40.668 11:30:11 -- accel/accel.sh@19 -- # read -r var val 00:06:40.668 11:30:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:40.668 11:30:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:40.668 11:30:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.668 11:30:11 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.668 11:30:11 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.668 11:30:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.668 11:30:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.668 11:30:11 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.668 11:30:11 -- accel/accel.sh@40 -- # local IFS=, 00:06:40.668 11:30:11 -- accel/accel.sh@41 -- # jq -r . 00:06:40.668 [2024-05-15 11:30:11.285699] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:06:40.668 [2024-05-15 11:30:11.285761] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2903061 ] 00:06:40.668 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.668 [2024-05-15 11:30:11.356445] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.927 [2024-05-15 11:30:11.448203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.927 11:30:11 -- accel/accel.sh@20 -- # val= 00:06:40.927 11:30:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.927 11:30:11 -- accel/accel.sh@19 -- # IFS=: 00:06:40.927 11:30:11 -- accel/accel.sh@19 -- # read -r var val 00:06:40.927 11:30:11 -- accel/accel.sh@20 -- # val= 00:06:40.927 11:30:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.927 11:30:11 -- accel/accel.sh@19 -- # IFS=: 00:06:40.927 11:30:11 -- accel/accel.sh@19 -- # read -r var val 00:06:40.927 11:30:11 -- accel/accel.sh@20 -- # val=0x1 00:06:40.927 11:30:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.927 11:30:11 -- accel/accel.sh@19 -- # IFS=: 00:06:40.927 11:30:11 -- accel/accel.sh@19 -- # read -r var val 00:06:40.927 11:30:11 -- accel/accel.sh@20 -- # val= 00:06:40.927 11:30:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.927 11:30:11 -- accel/accel.sh@19 -- # IFS=: 00:06:40.927 11:30:11 -- accel/accel.sh@19 -- # read -r var val 00:06:40.927 11:30:11 -- accel/accel.sh@20 -- # val= 00:06:40.927 11:30:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.927 11:30:11 -- accel/accel.sh@19 -- # IFS=: 00:06:40.927 11:30:11 -- accel/accel.sh@19 -- # read -r var val 00:06:40.927 11:30:11 -- accel/accel.sh@20 -- # val=fill 00:06:40.927 11:30:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.927 11:30:11 -- accel/accel.sh@23 -- # accel_opc=fill 00:06:40.927 11:30:11 -- accel/accel.sh@19 -- # IFS=: 00:06:40.927 11:30:11 -- accel/accel.sh@19 -- # read -r var val 00:06:40.927 11:30:11 -- accel/accel.sh@20 -- # val=0x80 00:06:40.927 11:30:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.927 11:30:11 -- accel/accel.sh@19 -- # IFS=: 00:06:40.927 11:30:11 -- accel/accel.sh@19 -- # read -r var val 00:06:40.927 11:30:11 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:40.927 11:30:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.927 11:30:11 -- accel/accel.sh@19 -- # IFS=: 00:06:40.927 11:30:11 -- accel/accel.sh@19 -- # 
read -r var val 00:06:40.927 11:30:11 -- accel/accel.sh@20 -- # val= 00:06:40.927 11:30:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.927 11:30:11 -- accel/accel.sh@19 -- # IFS=: 00:06:40.927 11:30:11 -- accel/accel.sh@19 -- # read -r var val 00:06:40.927 11:30:11 -- accel/accel.sh@20 -- # val=software 00:06:40.927 11:30:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.927 11:30:11 -- accel/accel.sh@22 -- # accel_module=software 00:06:40.927 11:30:11 -- accel/accel.sh@19 -- # IFS=: 00:06:40.927 11:30:11 -- accel/accel.sh@19 -- # read -r var val 00:06:40.927 11:30:11 -- accel/accel.sh@20 -- # val=64 00:06:40.927 11:30:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.927 11:30:11 -- accel/accel.sh@19 -- # IFS=: 00:06:40.927 11:30:11 -- accel/accel.sh@19 -- # read -r var val 00:06:40.927 11:30:11 -- accel/accel.sh@20 -- # val=64 00:06:40.927 11:30:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.927 11:30:11 -- accel/accel.sh@19 -- # IFS=: 00:06:40.927 11:30:11 -- accel/accel.sh@19 -- # read -r var val 00:06:40.927 11:30:11 -- accel/accel.sh@20 -- # val=1 00:06:40.927 11:30:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.927 11:30:11 -- accel/accel.sh@19 -- # IFS=: 00:06:40.927 11:30:11 -- accel/accel.sh@19 -- # read -r var val 00:06:40.927 11:30:11 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:40.927 11:30:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.927 11:30:11 -- accel/accel.sh@19 -- # IFS=: 00:06:40.927 11:30:11 -- accel/accel.sh@19 -- # read -r var val 00:06:40.927 11:30:11 -- accel/accel.sh@20 -- # val=Yes 00:06:40.927 11:30:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.927 11:30:11 -- accel/accel.sh@19 -- # IFS=: 00:06:40.927 11:30:11 -- accel/accel.sh@19 -- # read -r var val 00:06:40.927 11:30:11 -- accel/accel.sh@20 -- # val= 00:06:40.927 11:30:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.927 11:30:11 -- accel/accel.sh@19 -- # IFS=: 00:06:40.927 11:30:11 -- accel/accel.sh@19 -- # read -r var val 00:06:40.927 11:30:11 -- accel/accel.sh@20 -- # val= 00:06:40.927 11:30:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.927 11:30:11 -- accel/accel.sh@19 -- # IFS=: 00:06:40.927 11:30:11 -- accel/accel.sh@19 -- # read -r var val 00:06:42.306 11:30:12 -- accel/accel.sh@20 -- # val= 00:06:42.306 11:30:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.306 11:30:12 -- accel/accel.sh@19 -- # IFS=: 00:06:42.306 11:30:12 -- accel/accel.sh@19 -- # read -r var val 00:06:42.306 11:30:12 -- accel/accel.sh@20 -- # val= 00:06:42.306 11:30:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.306 11:30:12 -- accel/accel.sh@19 -- # IFS=: 00:06:42.306 11:30:12 -- accel/accel.sh@19 -- # read -r var val 00:06:42.306 11:30:12 -- accel/accel.sh@20 -- # val= 00:06:42.306 11:30:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.306 11:30:12 -- accel/accel.sh@19 -- # IFS=: 00:06:42.306 11:30:12 -- accel/accel.sh@19 -- # read -r var val 00:06:42.306 11:30:12 -- accel/accel.sh@20 -- # val= 00:06:42.306 11:30:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.306 11:30:12 -- accel/accel.sh@19 -- # IFS=: 00:06:42.306 11:30:12 -- accel/accel.sh@19 -- # read -r var val 00:06:42.306 11:30:12 -- accel/accel.sh@20 -- # val= 00:06:42.306 11:30:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.306 11:30:12 -- accel/accel.sh@19 -- # IFS=: 00:06:42.306 11:30:12 -- accel/accel.sh@19 -- # read -r var val 00:06:42.306 11:30:12 -- accel/accel.sh@20 -- # val= 00:06:42.306 11:30:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.306 11:30:12 -- accel/accel.sh@19 -- # 
IFS=: 00:06:42.306 11:30:12 -- accel/accel.sh@19 -- # read -r var val 00:06:42.306 11:30:12 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:42.306 11:30:12 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:42.306 11:30:12 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.306 00:06:42.306 real 0m1.422s 00:06:42.306 user 0m1.293s 00:06:42.306 sys 0m0.142s 00:06:42.306 11:30:12 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:42.306 11:30:12 -- common/autotest_common.sh@10 -- # set +x 00:06:42.306 ************************************ 00:06:42.306 END TEST accel_fill 00:06:42.306 ************************************ 00:06:42.306 11:30:12 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:42.306 11:30:12 -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:42.306 11:30:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:42.306 11:30:12 -- common/autotest_common.sh@10 -- # set +x 00:06:42.306 ************************************ 00:06:42.306 START TEST accel_copy_crc32c 00:06:42.306 ************************************ 00:06:42.306 11:30:12 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:06:42.306 11:30:12 -- accel/accel.sh@16 -- # local accel_opc 00:06:42.306 11:30:12 -- accel/accel.sh@17 -- # local accel_module 00:06:42.306 11:30:12 -- accel/accel.sh@19 -- # IFS=: 00:06:42.306 11:30:12 -- accel/accel.sh@19 -- # read -r var val 00:06:42.306 11:30:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:42.306 11:30:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:42.306 11:30:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.306 11:30:12 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.306 11:30:12 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.306 11:30:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.306 11:30:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.306 11:30:12 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.306 11:30:12 -- accel/accel.sh@40 -- # local IFS=, 00:06:42.306 11:30:12 -- accel/accel.sh@41 -- # jq -r . 00:06:42.306 [2024-05-15 11:30:12.797239] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
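# Sketch: accel_fill above covers the remaining tuning knobs from the usage text: fill
# byte -f 128 (0x80 in the trace), queue depth -q 64, and -a 64 tasks allocated per core:
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y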
00:06:42.306 [2024-05-15 11:30:12.797300] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2903321 ] 00:06:42.306 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.306 [2024-05-15 11:30:12.868750] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.306 [2024-05-15 11:30:12.954233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.306 11:30:13 -- accel/accel.sh@20 -- # val= 00:06:42.306 11:30:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.306 11:30:13 -- accel/accel.sh@19 -- # IFS=: 00:06:42.306 11:30:13 -- accel/accel.sh@19 -- # read -r var val 00:06:42.306 11:30:13 -- accel/accel.sh@20 -- # val= 00:06:42.306 11:30:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.306 11:30:13 -- accel/accel.sh@19 -- # IFS=: 00:06:42.306 11:30:13 -- accel/accel.sh@19 -- # read -r var val 00:06:42.306 11:30:13 -- accel/accel.sh@20 -- # val=0x1 00:06:42.306 11:30:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.306 11:30:13 -- accel/accel.sh@19 -- # IFS=: 00:06:42.306 11:30:13 -- accel/accel.sh@19 -- # read -r var val 00:06:42.306 11:30:13 -- accel/accel.sh@20 -- # val= 00:06:42.306 11:30:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.306 11:30:13 -- accel/accel.sh@19 -- # IFS=: 00:06:42.306 11:30:13 -- accel/accel.sh@19 -- # read -r var val 00:06:42.306 11:30:13 -- accel/accel.sh@20 -- # val= 00:06:42.306 11:30:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.306 11:30:13 -- accel/accel.sh@19 -- # IFS=: 00:06:42.306 11:30:13 -- accel/accel.sh@19 -- # read -r var val 00:06:42.306 11:30:13 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:42.306 11:30:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.306 11:30:13 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:42.306 11:30:13 -- accel/accel.sh@19 -- # IFS=: 00:06:42.306 11:30:13 -- accel/accel.sh@19 -- # read -r var val 00:06:42.306 11:30:13 -- accel/accel.sh@20 -- # val=0 00:06:42.307 11:30:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.307 11:30:13 -- accel/accel.sh@19 -- # IFS=: 00:06:42.307 11:30:13 -- accel/accel.sh@19 -- # read -r var val 00:06:42.307 11:30:13 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.307 11:30:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.307 11:30:13 -- accel/accel.sh@19 -- # IFS=: 00:06:42.307 11:30:13 -- accel/accel.sh@19 -- # read -r var val 00:06:42.307 11:30:13 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.307 11:30:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.307 11:30:13 -- accel/accel.sh@19 -- # IFS=: 00:06:42.307 11:30:13 -- accel/accel.sh@19 -- # read -r var val 00:06:42.307 11:30:13 -- accel/accel.sh@20 -- # val= 00:06:42.307 11:30:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.307 11:30:13 -- accel/accel.sh@19 -- # IFS=: 00:06:42.307 11:30:13 -- accel/accel.sh@19 -- # read -r var val 00:06:42.307 11:30:13 -- accel/accel.sh@20 -- # val=software 00:06:42.307 11:30:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.307 11:30:13 -- accel/accel.sh@22 -- # accel_module=software 00:06:42.307 11:30:13 -- accel/accel.sh@19 -- # IFS=: 00:06:42.307 11:30:13 -- accel/accel.sh@19 -- # read -r var val 00:06:42.307 11:30:13 -- accel/accel.sh@20 -- # val=32 00:06:42.307 11:30:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.307 11:30:13 -- accel/accel.sh@19 -- # IFS=: 00:06:42.307 11:30:13 -- accel/accel.sh@19 -- # read -r var val 
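# Sketch: accel_copy_crc32c combines copy and crc32c in a single operation; the trace
# above reads back two 4096-byte buffers (source and destination). The C2 variant that
# follows repeats the run with a 2-element io vector:
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2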
00:06:42.307 11:30:13 -- accel/accel.sh@20 -- # val=32
00:06:42.307 11:30:13 -- accel/accel.sh@21 -- # case "$var" in
00:06:42.307 11:30:13 -- accel/accel.sh@19 -- # IFS=:
00:06:42.307 11:30:13 -- accel/accel.sh@19 -- # read -r var val
00:06:42.307 11:30:13 -- accel/accel.sh@20 -- # val=1
00:06:42.307 11:30:13 -- accel/accel.sh@21 -- # case "$var" in
00:06:42.307 11:30:13 -- accel/accel.sh@19 -- # IFS=:
00:06:42.307 11:30:13 -- accel/accel.sh@19 -- # read -r var val
00:06:42.307 11:30:13 -- accel/accel.sh@20 -- # val='1 seconds'
00:06:42.307 11:30:13 -- accel/accel.sh@21 -- # case "$var" in
00:06:42.307 11:30:13 -- accel/accel.sh@19 -- # IFS=:
00:06:42.307 11:30:13 -- accel/accel.sh@19 -- # read -r var val
00:06:42.307 11:30:13 -- accel/accel.sh@20 -- # val=Yes
00:06:42.307 11:30:13 -- accel/accel.sh@21 -- # case "$var" in
00:06:42.307 11:30:13 -- accel/accel.sh@19 -- # IFS=:
00:06:42.307 11:30:13 -- accel/accel.sh@19 -- # read -r var val
00:06:42.307 11:30:13 -- accel/accel.sh@20 -- # val=
00:06:42.307 11:30:13 -- accel/accel.sh@21 -- # case "$var" in
00:06:42.307 11:30:13 -- accel/accel.sh@19 -- # IFS=:
00:06:42.307 11:30:13 -- accel/accel.sh@19 -- # read -r var val
00:06:42.307 11:30:13 -- accel/accel.sh@20 -- # val=
00:06:42.307 11:30:13 -- accel/accel.sh@21 -- # case "$var" in
00:06:42.307 11:30:13 -- accel/accel.sh@19 -- # IFS=:
00:06:42.307 11:30:13 -- accel/accel.sh@19 -- # read -r var val
00:06:43.681 11:30:14 -- accel/accel.sh@20 -- # val=
00:06:43.681 11:30:14 -- accel/accel.sh@21 -- # case "$var" in
00:06:43.681 11:30:14 -- accel/accel.sh@19 -- # IFS=:
00:06:43.681 11:30:14 -- accel/accel.sh@19 -- # read -r var val
00:06:43.681 11:30:14 -- accel/accel.sh@20 -- # val=
00:06:43.681 11:30:14 -- accel/accel.sh@21 -- # case "$var" in
00:06:43.681 11:30:14 -- accel/accel.sh@19 -- # IFS=:
00:06:43.681 11:30:14 -- accel/accel.sh@19 -- # read -r var val
00:06:43.681 11:30:14 -- accel/accel.sh@20 -- # val=
00:06:43.681 11:30:14 -- accel/accel.sh@21 -- # case "$var" in
00:06:43.681 11:30:14 -- accel/accel.sh@19 -- # IFS=:
00:06:43.681 11:30:14 -- accel/accel.sh@19 -- # read -r var val
00:06:43.681 11:30:14 -- accel/accel.sh@20 -- # val=
00:06:43.681 11:30:14 -- accel/accel.sh@21 -- # case "$var" in
00:06:43.681 11:30:14 -- accel/accel.sh@19 -- # IFS=:
00:06:43.681 11:30:14 -- accel/accel.sh@19 -- # read -r var val
00:06:43.681 11:30:14 -- accel/accel.sh@20 -- # val=
00:06:43.681 11:30:14 -- accel/accel.sh@21 -- # case "$var" in
00:06:43.681 11:30:14 -- accel/accel.sh@19 -- # IFS=:
00:06:43.681 11:30:14 -- accel/accel.sh@19 -- # read -r var val
00:06:43.681 11:30:14 -- accel/accel.sh@20 -- # val=
00:06:43.681 11:30:14 -- accel/accel.sh@21 -- # case "$var" in
00:06:43.681 11:30:14 -- accel/accel.sh@19 -- # IFS=:
00:06:43.681 11:30:14 -- accel/accel.sh@19 -- # read -r var val
00:06:43.682 11:30:14 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:43.682 11:30:14 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:06:43.682 11:30:14 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:43.682
00:06:43.682 real 0m1.416s
00:06:43.682 user 0m1.284s
00:06:43.682 sys 0m0.146s
00:06:43.682 11:30:14 -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:43.682 11:30:14 -- common/autotest_common.sh@10 -- # set +x
00:06:43.682 ************************************
00:06:43.682 END TEST accel_copy_crc32c
00:06:43.682 ************************************
00:06:43.682 11:30:14 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:06:43.682 11:30:14 -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']'
00:06:43.682 11:30:14 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:43.682 11:30:14 -- common/autotest_common.sh@10 -- # set +x
00:06:43.682 ************************************
00:06:43.682 START TEST accel_copy_crc32c_C2
00:06:43.682 ************************************
00:06:43.682 11:30:14 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2
00:06:43.682 11:30:14 -- accel/accel.sh@16 -- # local accel_opc
00:06:43.682 11:30:14 -- accel/accel.sh@17 -- # local accel_module
00:06:43.682 11:30:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:06:43.682 11:30:14 -- accel/accel.sh@19 -- # IFS=:
00:06:43.682 11:30:14 -- accel/accel.sh@19 -- # read -r var val
00:06:43.682 11:30:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:06:43.682 11:30:14 -- accel/accel.sh@12 -- # build_accel_config
00:06:43.682 11:30:14 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:43.682 11:30:14 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:43.682 11:30:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:43.682 11:30:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:43.682 11:30:14 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:43.682 11:30:14 -- accel/accel.sh@40 -- # local IFS=,
00:06:43.682 11:30:14 -- accel/accel.sh@41 -- # jq -r .
00:06:43.682 [2024-05-15 11:30:14.273638] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization...
00:06:43.682 [2024-05-15 11:30:14.273684] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2903548 ]
00:06:43.682 EAL: No free 2048 kB hugepages reported on node 1
00:06:43.682 [2024-05-15 11:30:14.341895] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:43.682 [2024-05-15 11:30:14.426955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:43.942 11:30:14 -- accel/accel.sh@20 -- # val=
00:06:43.942 11:30:14 -- accel/accel.sh@21 -- # case "$var" in
00:06:43.942 11:30:14 -- accel/accel.sh@19 -- # IFS=:
00:06:43.942 11:30:14 -- accel/accel.sh@19 -- # read -r var val
00:06:43.942 11:30:14 -- accel/accel.sh@20 -- # val=
00:06:43.942 11:30:14 -- accel/accel.sh@21 -- # case "$var" in
00:06:43.942 11:30:14 -- accel/accel.sh@19 -- # IFS=:
00:06:43.942 11:30:14 -- accel/accel.sh@19 -- # read -r var val
00:06:43.942 11:30:14 -- accel/accel.sh@20 -- # val=0x1
00:06:43.942 11:30:14 -- accel/accel.sh@21 -- # case "$var" in
00:06:43.942 11:30:14 -- accel/accel.sh@19 -- # IFS=:
00:06:43.942 11:30:14 -- accel/accel.sh@19 -- # read -r var val
00:06:43.942 11:30:14 -- accel/accel.sh@20 -- # val=
00:06:43.942 11:30:14 -- accel/accel.sh@21 -- # case "$var" in
00:06:43.942 11:30:14 -- accel/accel.sh@19 -- # IFS=:
00:06:43.942 11:30:14 -- accel/accel.sh@19 -- # read -r var val
00:06:43.942 11:30:14 -- accel/accel.sh@20 -- # val=
00:06:43.942 11:30:14 -- accel/accel.sh@21 -- # case "$var" in
00:06:43.942 11:30:14 -- accel/accel.sh@19 -- # IFS=:
00:06:43.942 11:30:14 -- accel/accel.sh@19 -- # read -r var val
00:06:43.942 11:30:14 -- accel/accel.sh@20 -- # val=copy_crc32c
00:06:43.942 11:30:14 -- accel/accel.sh@21 -- # case "$var" in
00:06:43.942 11:30:14 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:06:43.942 11:30:14 -- accel/accel.sh@19 -- # IFS=:
00:06:43.942 11:30:14 -- accel/accel.sh@19 -- # read -r var val
00:06:43.942 11:30:14 -- accel/accel.sh@20 -- # val=0
00:06:43.942 11:30:14 -- accel/accel.sh@21 -- # case "$var" in
00:06:43.942 11:30:14 -- accel/accel.sh@19 -- # IFS=:
00:06:43.942 11:30:14 -- accel/accel.sh@19 -- # read -r var val
00:06:43.942 11:30:14 -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:43.942 11:30:14 -- accel/accel.sh@21 -- # case "$var" in
00:06:43.942 11:30:14 -- accel/accel.sh@19 -- # IFS=:
00:06:43.942 11:30:14 -- accel/accel.sh@19 -- # read -r var val
00:06:43.942 11:30:14 -- accel/accel.sh@20 -- # val='8192 bytes'
00:06:43.942 11:30:14 -- accel/accel.sh@21 -- # case "$var" in
00:06:43.942 11:30:14 -- accel/accel.sh@19 -- # IFS=:
00:06:43.942 11:30:14 -- accel/accel.sh@19 -- # read -r var val
00:06:43.942 11:30:14 -- accel/accel.sh@20 -- # val=
00:06:43.942 11:30:14 -- accel/accel.sh@21 -- # case "$var" in
00:06:43.942 11:30:14 -- accel/accel.sh@19 -- # IFS=:
00:06:43.942 11:30:14 -- accel/accel.sh@19 -- # read -r var val
00:06:43.942 11:30:14 -- accel/accel.sh@20 -- # val=software
00:06:43.942 11:30:14 -- accel/accel.sh@21 -- # case "$var" in
00:06:43.942 11:30:14 -- accel/accel.sh@22 -- # accel_module=software
00:06:43.942 11:30:14 -- accel/accel.sh@19 -- # IFS=:
00:06:43.942 11:30:14 -- accel/accel.sh@19 -- # read -r var val
00:06:43.942 11:30:14 -- accel/accel.sh@20 -- # val=32
00:06:43.942 11:30:14 -- accel/accel.sh@21 -- # case "$var" in
00:06:43.942 11:30:14 -- accel/accel.sh@19 -- # IFS=:
00:06:43.942 11:30:14 -- accel/accel.sh@19 -- # read -r var val
00:06:43.942 11:30:14 -- accel/accel.sh@20 -- # val=32
00:06:43.942 11:30:14 -- accel/accel.sh@21 -- # case "$var" in
00:06:43.942 11:30:14 -- accel/accel.sh@19 -- # IFS=:
00:06:43.942 11:30:14 -- accel/accel.sh@19 -- # read -r var val
00:06:43.942 11:30:14 -- accel/accel.sh@20 -- # val=1
00:06:43.942 11:30:14 -- accel/accel.sh@21 -- # case "$var" in
00:06:43.942 11:30:14 -- accel/accel.sh@19 -- # IFS=:
00:06:43.942 11:30:14 -- accel/accel.sh@19 -- # read -r var val
00:06:43.942 11:30:14 -- accel/accel.sh@20 -- # val='1 seconds'
00:06:43.942 11:30:14 -- accel/accel.sh@21 -- # case "$var" in
00:06:43.942 11:30:14 -- accel/accel.sh@19 -- # IFS=:
00:06:43.942 11:30:14 -- accel/accel.sh@19 -- # read -r var val
00:06:43.942 11:30:14 -- accel/accel.sh@20 -- # val=Yes
00:06:43.942 11:30:14 -- accel/accel.sh@21 -- # case "$var" in
00:06:43.942 11:30:14 -- accel/accel.sh@19 -- # IFS=:
00:06:43.942 11:30:14 -- accel/accel.sh@19 -- # read -r var val
00:06:43.942 11:30:14 -- accel/accel.sh@20 -- # val=
00:06:43.942 11:30:14 -- accel/accel.sh@21 -- # case "$var" in
00:06:43.942 11:30:14 -- accel/accel.sh@19 -- # IFS=:
00:06:43.942 11:30:14 -- accel/accel.sh@19 -- # read -r var val
00:06:43.942 11:30:14 -- accel/accel.sh@20 -- # val=
00:06:43.942 11:30:14 -- accel/accel.sh@21 -- # case "$var" in
00:06:43.942 11:30:14 -- accel/accel.sh@19 -- # IFS=:
00:06:43.942 11:30:14 -- accel/accel.sh@19 -- # read -r var val
00:06:45.320 11:30:15 -- accel/accel.sh@20 -- # val=
00:06:45.320 11:30:15 -- accel/accel.sh@21 -- # case "$var" in
00:06:45.320 11:30:15 -- accel/accel.sh@19 -- # IFS=:
00:06:45.320 11:30:15 -- accel/accel.sh@19 -- # read -r var val
00:06:45.320 11:30:15 -- accel/accel.sh@20 -- # val=
00:06:45.320 11:30:15 -- accel/accel.sh@21 -- # case "$var" in
00:06:45.320 11:30:15 -- accel/accel.sh@19 -- # IFS=:
00:06:45.320 11:30:15 -- accel/accel.sh@19 -- # read -r var val
00:06:45.320 11:30:15 -- accel/accel.sh@20 -- # val=
00:06:45.320 11:30:15 -- accel/accel.sh@21 -- # case "$var" in
00:06:45.320 11:30:15 -- accel/accel.sh@19 -- # IFS=:
00:06:45.320 11:30:15 -- accel/accel.sh@19 -- # read -r var val
00:06:45.320 11:30:15 -- accel/accel.sh@20 -- # val=
00:06:45.320 11:30:15 -- accel/accel.sh@21 -- # case "$var" in
00:06:45.320 11:30:15 -- accel/accel.sh@19 -- # IFS=:
00:06:45.320 11:30:15 -- accel/accel.sh@19 -- # read -r var val
00:06:45.320 11:30:15 -- accel/accel.sh@20 -- # val=
00:06:45.320 11:30:15 -- accel/accel.sh@21 -- # case "$var" in
00:06:45.320 11:30:15 -- accel/accel.sh@19 -- # IFS=:
00:06:45.320 11:30:15 -- accel/accel.sh@19 -- # read -r var val
00:06:45.320 11:30:15 -- accel/accel.sh@20 -- # val=
00:06:45.320 11:30:15 -- accel/accel.sh@21 -- # case "$var" in
00:06:45.320 11:30:15 -- accel/accel.sh@19 -- # IFS=:
00:06:45.320 11:30:15 -- accel/accel.sh@19 -- # read -r var val
00:06:45.320 11:30:15 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:45.320 11:30:15 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:06:45.320 11:30:15 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:45.320
00:06:45.320 real 0m1.396s
00:06:45.320 user 0m1.276s
00:06:45.320 sys 0m0.136s
00:06:45.320 11:30:15 -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:45.320 11:30:15 -- common/autotest_common.sh@10 -- # set +x
00:06:45.320 ************************************
00:06:45.320 END TEST accel_copy_crc32c_C2
00:06:45.320 ************************************
00:06:45.320 11:30:15 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:06:45.320 11:30:15 -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']'
00:06:45.320 11:30:15 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:45.320 11:30:15 -- common/autotest_common.sh@10 -- # set +x
00:06:45.320 ************************************
00:06:45.320 START TEST accel_dualcast
00:06:45.320 ************************************
00:06:45.320 11:30:15 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y
00:06:45.320 11:30:15 -- accel/accel.sh@16 -- # local accel_opc
00:06:45.320 11:30:15 -- accel/accel.sh@17 -- # local accel_module
00:06:45.320 11:30:15 -- accel/accel.sh@19 -- # IFS=:
00:06:45.320 11:30:15 -- accel/accel.sh@19 -- # read -r var val
00:06:45.320 11:30:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:06:45.320 11:30:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:06:45.320 11:30:15 -- accel/accel.sh@12 -- # build_accel_config
00:06:45.320 11:30:15 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:45.320 11:30:15 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:45.320 11:30:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:45.320 11:30:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:45.320 11:30:15 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:45.320 11:30:15 -- accel/accel.sh@40 -- # local IFS=,
00:06:45.320 11:30:15 -- accel/accel.sh@41 -- # jq -r .
00:06:45.320 [2024-05-15 11:30:15.773975] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization...
00:06:45.320 [2024-05-15 11:30:15.774040] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2903753 ]
00:06:45.320 EAL: No free 2048 kB hugepages reported on node 1
00:06:45.320 [2024-05-15 11:30:15.844655] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:45.320 [2024-05-15 11:30:15.927910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:45.320 11:30:15 -- accel/accel.sh@20 -- # val=
00:06:45.320 11:30:15 -- accel/accel.sh@21 -- # case "$var" in
00:06:45.320 11:30:15 -- accel/accel.sh@19 -- # IFS=:
00:06:45.320 11:30:15 -- accel/accel.sh@19 -- # read -r var val
00:06:45.320 11:30:15 -- accel/accel.sh@20 -- # val=
00:06:45.320 11:30:15 -- accel/accel.sh@21 -- # case "$var" in
00:06:45.320 11:30:15 -- accel/accel.sh@19 -- # IFS=:
00:06:45.320 11:30:15 -- accel/accel.sh@19 -- # read -r var val
00:06:45.320 11:30:15 -- accel/accel.sh@20 -- # val=0x1
00:06:45.320 11:30:15 -- accel/accel.sh@21 -- # case "$var" in
00:06:45.320 11:30:15 -- accel/accel.sh@19 -- # IFS=:
00:06:45.320 11:30:15 -- accel/accel.sh@19 -- # read -r var val
00:06:45.320 11:30:15 -- accel/accel.sh@20 -- # val=
00:06:45.320 11:30:15 -- accel/accel.sh@21 -- # case "$var" in
00:06:45.320 11:30:15 -- accel/accel.sh@19 -- # IFS=:
00:06:45.320 11:30:15 -- accel/accel.sh@19 -- # read -r var val
00:06:45.320 11:30:15 -- accel/accel.sh@20 -- # val=
00:06:45.320 11:30:15 -- accel/accel.sh@21 -- # case "$var" in
00:06:45.320 11:30:15 -- accel/accel.sh@19 -- # IFS=:
00:06:45.320 11:30:15 -- accel/accel.sh@19 -- # read -r var val
00:06:45.320 11:30:15 -- accel/accel.sh@20 -- # val=dualcast
00:06:45.320 11:30:15 -- accel/accel.sh@21 -- # case "$var" in
00:06:45.320 11:30:15 -- accel/accel.sh@23 -- # accel_opc=dualcast
00:06:45.320 11:30:15 -- accel/accel.sh@19 -- # IFS=:
00:06:45.320 11:30:15 -- accel/accel.sh@19 -- # read -r var val
00:06:45.321 11:30:15 -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:45.321 11:30:15 -- accel/accel.sh@21 -- # case "$var" in
00:06:45.321 11:30:15 -- accel/accel.sh@19 -- # IFS=:
00:06:45.321 11:30:15 -- accel/accel.sh@19 -- # read -r var val
00:06:45.321 11:30:15 -- accel/accel.sh@20 -- # val=
00:06:45.321 11:30:15 -- accel/accel.sh@21 -- # case "$var" in
00:06:45.321 11:30:15 -- accel/accel.sh@19 -- # IFS=:
00:06:45.321 11:30:15 -- accel/accel.sh@19 -- # read -r var val
00:06:45.321 11:30:15 -- accel/accel.sh@20 -- # val=software
00:06:45.321 11:30:15 -- accel/accel.sh@21 -- # case "$var" in
00:06:45.321 11:30:15 -- accel/accel.sh@22 -- # accel_module=software
00:06:45.321 11:30:15 -- accel/accel.sh@19 -- # IFS=:
00:06:45.321 11:30:15 -- accel/accel.sh@19 -- # read -r var val
00:06:45.321 11:30:15 -- accel/accel.sh@20 -- # val=32
00:06:45.321 11:30:15 -- accel/accel.sh@21 -- # case "$var" in
00:06:45.321 11:30:15 -- accel/accel.sh@19 -- # IFS=:
00:06:45.321 11:30:15 -- accel/accel.sh@19 -- # read -r var val
00:06:45.321 11:30:15 -- accel/accel.sh@20 -- # val=32
00:06:45.321 11:30:15 -- accel/accel.sh@21 -- # case "$var" in
00:06:45.321 11:30:15 -- accel/accel.sh@19 -- # IFS=:
00:06:45.321 11:30:15 -- accel/accel.sh@19 -- # read -r var val
00:06:45.321 11:30:15 -- accel/accel.sh@20 -- # val=1
00:06:45.321 11:30:15 -- accel/accel.sh@21 -- # case "$var" in
00:06:45.321 11:30:15 -- accel/accel.sh@19 -- # IFS=:
00:06:45.321 11:30:15 -- accel/accel.sh@19 -- # read -r var val
00:06:45.321 11:30:15 -- accel/accel.sh@20 -- # val='1 seconds'
00:06:45.321 11:30:15 -- accel/accel.sh@21 -- # case "$var" in
00:06:45.321 11:30:15 -- accel/accel.sh@19 -- # IFS=:
00:06:45.321 11:30:15 -- accel/accel.sh@19 -- # read -r var val
00:06:45.321 11:30:15 -- accel/accel.sh@20 -- # val=Yes
00:06:45.321 11:30:15 -- accel/accel.sh@21 -- # case "$var" in
00:06:45.321 11:30:15 -- accel/accel.sh@19 -- # IFS=:
00:06:45.321 11:30:15 -- accel/accel.sh@19 -- # read -r var val
00:06:45.321 11:30:15 -- accel/accel.sh@20 -- # val=
00:06:45.321 11:30:15 -- accel/accel.sh@21 -- # case "$var" in
00:06:45.321 11:30:15 -- accel/accel.sh@19 -- # IFS=:
00:06:45.321 11:30:15 -- accel/accel.sh@19 -- # read -r var val
00:06:45.321 11:30:15 -- accel/accel.sh@20 -- # val=
00:06:45.321 11:30:15 -- accel/accel.sh@21 -- # case "$var" in
00:06:45.321 11:30:15 -- accel/accel.sh@19 -- # IFS=:
00:06:45.321 11:30:15 -- accel/accel.sh@19 -- # read -r var val
00:06:46.700 11:30:17 -- accel/accel.sh@20 -- # val=
00:06:46.700 11:30:17 -- accel/accel.sh@21 -- # case "$var" in
00:06:46.700 11:30:17 -- accel/accel.sh@19 -- # IFS=:
00:06:46.700 11:30:17 -- accel/accel.sh@19 -- # read -r var val
00:06:46.700 11:30:17 -- accel/accel.sh@20 -- # val=
00:06:46.700 11:30:17 -- accel/accel.sh@21 -- # case "$var" in
00:06:46.700 11:30:17 -- accel/accel.sh@19 -- # IFS=:
00:06:46.700 11:30:17 -- accel/accel.sh@19 -- # read -r var val
00:06:46.700 11:30:17 -- accel/accel.sh@20 -- # val=
00:06:46.700 11:30:17 -- accel/accel.sh@21 -- # case "$var" in
00:06:46.700 11:30:17 -- accel/accel.sh@19 -- # IFS=:
00:06:46.700 11:30:17 -- accel/accel.sh@19 -- # read -r var val
00:06:46.700 11:30:17 -- accel/accel.sh@20 -- # val=
00:06:46.700 11:30:17 -- accel/accel.sh@21 -- # case "$var" in
00:06:46.700 11:30:17 -- accel/accel.sh@19 -- # IFS=:
00:06:46.700 11:30:17 -- accel/accel.sh@19 -- # read -r var val
00:06:46.700 11:30:17 -- accel/accel.sh@20 -- # val=
00:06:46.700 11:30:17 -- accel/accel.sh@21 -- # case "$var" in
00:06:46.700 11:30:17 -- accel/accel.sh@19 -- # IFS=:
00:06:46.700 11:30:17 -- accel/accel.sh@19 -- # read -r var val
00:06:46.700 11:30:17 -- accel/accel.sh@20 -- # val=
00:06:46.700 11:30:17 -- accel/accel.sh@21 -- # case "$var" in
00:06:46.700 11:30:17 -- accel/accel.sh@19 -- # IFS=:
00:06:46.700 11:30:17 -- accel/accel.sh@19 -- # read -r var val
00:06:46.700 11:30:17 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:46.700 11:30:17 -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:06:46.700 11:30:17 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:46.700
00:06:46.700 real 0m1.399s
00:06:46.700 user 0m1.277s
00:06:46.700 sys 0m0.135s
00:06:46.700 11:30:17 -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:46.700 11:30:17 -- common/autotest_common.sh@10 -- # set +x
00:06:46.700 ************************************
00:06:46.700 END TEST accel_dualcast
00:06:46.700 ************************************
00:06:46.700 11:30:17 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:06:46.700 11:30:17 -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']'
00:06:46.700 11:30:17 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:46.700 11:30:17 -- common/autotest_common.sh@10 -- # set +x
00:06:46.700 ************************************
00:06:46.700 START TEST accel_compare
00:06:46.700 ************************************
00:06:46.700 11:30:17 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y
00:06:46.700 11:30:17 -- accel/accel.sh@16 -- # local accel_opc
00:06:46.700 11:30:17 -- accel/accel.sh@17 -- # local accel_module
00:06:46.700 11:30:17 -- accel/accel.sh@19 -- # IFS=:
00:06:46.700 11:30:17 -- accel/accel.sh@19 -- # read -r var val
00:06:46.700 11:30:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:06:46.700 11:30:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:06:46.700 11:30:17 -- accel/accel.sh@12 -- # build_accel_config
00:06:46.700 11:30:17 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:46.700 11:30:17 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:46.700 11:30:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:46.700 11:30:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:46.700 11:30:17 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:46.700 11:30:17 -- accel/accel.sh@40 -- # local IFS=,
00:06:46.700 11:30:17 -- accel/accel.sh@41 -- # jq -r .
00:06:46.700 [2024-05-15 11:30:17.258356] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization...
00:06:46.700 [2024-05-15 11:30:17.258420] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2903958 ]
00:06:46.700 EAL: No free 2048 kB hugepages reported on node 1
00:06:46.700 [2024-05-15 11:30:17.328790] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:46.700 [2024-05-15 11:30:17.412889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:46.700 11:30:17 -- accel/accel.sh@20 -- # val=
00:06:46.700 11:30:17 -- accel/accel.sh@21 -- # case "$var" in
00:06:46.700 11:30:17 -- accel/accel.sh@19 -- # IFS=:
00:06:46.959 11:30:17 -- accel/accel.sh@19 -- # read -r var val
00:06:46.959 11:30:17 -- accel/accel.sh@20 -- # val=
00:06:46.959 11:30:17 -- accel/accel.sh@21 -- # case "$var" in
00:06:46.959 11:30:17 -- accel/accel.sh@19 -- # IFS=:
00:06:46.959 11:30:17 -- accel/accel.sh@19 -- # read -r var val
00:06:46.959 11:30:17 -- accel/accel.sh@20 -- # val=0x1
00:06:46.959 11:30:17 -- accel/accel.sh@21 -- # case "$var" in
00:06:46.959 11:30:17 -- accel/accel.sh@19 -- # IFS=:
00:06:46.959 11:30:17 -- accel/accel.sh@19 -- # read -r var val
00:06:46.959 11:30:17 -- accel/accel.sh@20 -- # val=
00:06:46.959 11:30:17 -- accel/accel.sh@21 -- # case "$var" in
00:06:46.959 11:30:17 -- accel/accel.sh@19 -- # IFS=:
00:06:46.959 11:30:17 -- accel/accel.sh@19 -- # read -r var val
00:06:46.959 11:30:17 -- accel/accel.sh@20 -- # val=
00:06:46.959 11:30:17 -- accel/accel.sh@21 -- # case "$var" in
00:06:46.959 11:30:17 -- accel/accel.sh@19 -- # IFS=:
00:06:46.959 11:30:17 -- accel/accel.sh@19 -- # read -r var val
00:06:46.959 11:30:17 -- accel/accel.sh@20 -- # val=compare
00:06:46.959 11:30:17 -- accel/accel.sh@21 -- # case "$var" in
00:06:46.959 11:30:17 -- accel/accel.sh@23 -- # accel_opc=compare
00:06:46.959 11:30:17 -- accel/accel.sh@19 -- # IFS=:
00:06:46.959 11:30:17 -- accel/accel.sh@19 -- # read -r var val
00:06:46.959 11:30:17 -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:46.959 11:30:17 -- accel/accel.sh@21 -- # case "$var" in
00:06:46.959 11:30:17 -- accel/accel.sh@19 -- # IFS=:
00:06:46.959 11:30:17 -- accel/accel.sh@19 -- # read -r var val
00:06:46.959 11:30:17 -- accel/accel.sh@20 -- # val=
00:06:46.959 11:30:17 -- accel/accel.sh@21 -- # case "$var" in
00:06:46.959 11:30:17 -- accel/accel.sh@19 -- # IFS=:
00:06:46.959 11:30:17 -- accel/accel.sh@19 -- # read -r var val
00:06:46.959 11:30:17 -- accel/accel.sh@20 -- # val=software
00:06:46.959 11:30:17 -- accel/accel.sh@21 -- # case "$var" in
00:06:46.959 11:30:17 -- accel/accel.sh@22 -- # accel_module=software
00:06:46.959 11:30:17 -- accel/accel.sh@19 -- # IFS=:
00:06:46.959 11:30:17 -- accel/accel.sh@19 -- # read -r var val
00:06:46.959 11:30:17 -- accel/accel.sh@20 -- # val=32
00:06:46.959 11:30:17 -- accel/accel.sh@21 -- # case "$var" in
00:06:46.959 11:30:17 -- accel/accel.sh@19 -- # IFS=:
00:06:46.959 11:30:17 -- accel/accel.sh@19 -- # read -r var val
00:06:46.959 11:30:17 -- accel/accel.sh@20 -- # val=32
00:06:46.959 11:30:17 -- accel/accel.sh@21 -- # case "$var" in
00:06:46.959 11:30:17 -- accel/accel.sh@19 -- # IFS=:
00:06:46.959 11:30:17 -- accel/accel.sh@19 -- # read -r var val
00:06:46.959 11:30:17 -- accel/accel.sh@20 -- # val=1
00:06:46.959 11:30:17 -- accel/accel.sh@21 -- # case "$var" in
00:06:46.959 11:30:17 -- accel/accel.sh@19 -- # IFS=:
00:06:46.959 11:30:17 -- accel/accel.sh@19 -- # read -r var val
00:06:46.959 11:30:17 -- accel/accel.sh@20 -- # val='1 seconds'
00:06:46.959 11:30:17 -- accel/accel.sh@21 -- # case "$var" in
00:06:46.959 11:30:17 -- accel/accel.sh@19 -- # IFS=:
00:06:46.959 11:30:17 -- accel/accel.sh@19 -- # read -r var val
00:06:46.959 11:30:17 -- accel/accel.sh@20 -- # val=Yes
00:06:46.959 11:30:17 -- accel/accel.sh@21 -- # case "$var" in
00:06:46.959 11:30:17 -- accel/accel.sh@19 -- # IFS=:
00:06:46.959 11:30:17 -- accel/accel.sh@19 -- # read -r var val
00:06:46.959 11:30:17 -- accel/accel.sh@20 -- # val=
00:06:46.959 11:30:17 -- accel/accel.sh@21 -- # case "$var" in
00:06:46.959 11:30:17 -- accel/accel.sh@19 -- # IFS=:
00:06:46.959 11:30:17 -- accel/accel.sh@19 -- # read -r var val
00:06:46.959 11:30:17 -- accel/accel.sh@20 -- # val=
00:06:46.959 11:30:17 -- accel/accel.sh@21 -- # case "$var" in
00:06:46.959 11:30:17 -- accel/accel.sh@19 -- # IFS=:
00:06:46.959 11:30:17 -- accel/accel.sh@19 -- # read -r var val
00:06:47.896 11:30:18 -- accel/accel.sh@20 -- # val=
00:06:47.896 11:30:18 -- accel/accel.sh@21 -- # case "$var" in
00:06:47.896 11:30:18 -- accel/accel.sh@19 -- # IFS=:
00:06:47.896 11:30:18 -- accel/accel.sh@19 -- # read -r var val
00:06:47.896 11:30:18 -- accel/accel.sh@20 -- # val=
00:06:47.896 11:30:18 -- accel/accel.sh@21 -- # case "$var" in
00:06:47.896 11:30:18 -- accel/accel.sh@19 -- # IFS=:
00:06:47.896 11:30:18 -- accel/accel.sh@19 -- # read -r var val
00:06:47.896 11:30:18 -- accel/accel.sh@20 -- # val=
00:06:47.896 11:30:18 -- accel/accel.sh@21 -- # case "$var" in
00:06:47.896 11:30:18 -- accel/accel.sh@19 -- # IFS=:
00:06:47.896 11:30:18 -- accel/accel.sh@19 -- # read -r var val
00:06:47.896 11:30:18 -- accel/accel.sh@20 -- # val=
00:06:47.896 11:30:18 -- accel/accel.sh@21 -- # case "$var" in
00:06:47.896 11:30:18 -- accel/accel.sh@19 -- # IFS=:
00:06:47.896 11:30:18 -- accel/accel.sh@19 -- # read -r var val
00:06:47.896 11:30:18 -- accel/accel.sh@20 -- # val=
00:06:47.896 11:30:18 -- accel/accel.sh@21 -- # case "$var" in
00:06:47.896 11:30:18 -- accel/accel.sh@19 -- # IFS=:
00:06:47.896 11:30:18 -- accel/accel.sh@19 -- # read -r var val
00:06:47.896 11:30:18 -- accel/accel.sh@20 -- # val=
00:06:47.896 11:30:18 -- accel/accel.sh@21 -- # case "$var" in
00:06:47.896 11:30:18 -- accel/accel.sh@19 -- # IFS=:
00:06:47.896 11:30:18 -- accel/accel.sh@19 -- # read -r var val
00:06:47.896 11:30:18 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:47.896 11:30:18 -- accel/accel.sh@27 -- # [[ -n compare ]]
00:06:47.896 11:30:18 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:47.896
00:06:47.896 real 0m1.396s
00:06:47.896 user 0m1.267s
00:06:47.896 sys 0m0.143s
00:06:47.896 11:30:18 -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:47.896 11:30:18 -- common/autotest_common.sh@10 -- # set +x
00:06:47.896 ************************************
00:06:47.896 END TEST accel_compare
00:06:47.896 ************************************
00:06:48.155 11:30:18 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:06:48.155 11:30:18 -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']'
00:06:48.155 11:30:18 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:48.155 11:30:18 -- common/autotest_common.sh@10 -- # set +x
00:06:48.155 ************************************
00:06:48.155 START TEST accel_xor
00:06:48.155 ************************************
00:06:48.155 11:30:18 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y
00:06:48.155 11:30:18 -- accel/accel.sh@16 -- # local accel_opc
00:06:48.155 11:30:18 -- accel/accel.sh@17 -- # local accel_module
00:06:48.155 11:30:18 -- accel/accel.sh@19 -- # IFS=:
00:06:48.155 11:30:18 -- accel/accel.sh@19 -- # read -r var val
00:06:48.155 11:30:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:06:48.155 11:30:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:06:48.155 11:30:18 -- accel/accel.sh@12 -- # build_accel_config
00:06:48.155 11:30:18 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:48.155 11:30:18 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:48.155 11:30:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:48.155 11:30:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:48.155 11:30:18 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:48.155 11:30:18 -- accel/accel.sh@40 -- # local IFS=,
00:06:48.155 11:30:18 -- accel/accel.sh@41 -- # jq -r .
00:06:48.155 [2024-05-15 11:30:18.747894] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization...
00:06:48.155 [2024-05-15 11:30:18.747954] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2904161 ]
00:06:48.155 EAL: No free 2048 kB hugepages reported on node 1
00:06:48.155 [2024-05-15 11:30:18.818982] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:48.155 [2024-05-15 11:30:18.904276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:48.484 11:30:18 -- accel/accel.sh@20 -- # val=
00:06:48.484 11:30:18 -- accel/accel.sh@21 -- # case "$var" in
00:06:48.484 11:30:18 -- accel/accel.sh@19 -- # IFS=:
00:06:48.484 11:30:18 -- accel/accel.sh@19 -- # read -r var val
00:06:48.484 11:30:18 -- accel/accel.sh@20 -- # val=
00:06:48.484 11:30:18 -- accel/accel.sh@21 -- # case "$var" in
00:06:48.484 11:30:18 -- accel/accel.sh@19 -- # IFS=:
00:06:48.484 11:30:18 -- accel/accel.sh@19 -- # read -r var val
00:06:48.484 11:30:18 -- accel/accel.sh@20 -- # val=0x1
00:06:48.484 11:30:18 -- accel/accel.sh@21 -- # case "$var" in
00:06:48.484 11:30:18 -- accel/accel.sh@19 -- # IFS=:
00:06:48.484 11:30:18 -- accel/accel.sh@19 -- # read -r var val
00:06:48.484 11:30:18 -- accel/accel.sh@20 -- # val=
00:06:48.484 11:30:18 -- accel/accel.sh@21 -- # case "$var" in
00:06:48.484 11:30:18 -- accel/accel.sh@19 -- # IFS=:
00:06:48.484 11:30:18 -- accel/accel.sh@19 -- # read -r var val
00:06:48.484 11:30:18 -- accel/accel.sh@20 -- # val=
00:06:48.484 11:30:18 -- accel/accel.sh@21 -- # case "$var" in
00:06:48.484 11:30:18 -- accel/accel.sh@19 -- # IFS=:
00:06:48.484 11:30:18 -- accel/accel.sh@19 -- # read -r var val
00:06:48.484 11:30:18 -- accel/accel.sh@20 -- # val=xor
00:06:48.484 11:30:18 -- accel/accel.sh@21 -- # case "$var" in
00:06:48.484 11:30:18 -- accel/accel.sh@23 -- # accel_opc=xor
00:06:48.484 11:30:18 -- accel/accel.sh@19 -- # IFS=:
00:06:48.484 11:30:18 -- accel/accel.sh@19 -- # read -r var val
00:06:48.484 11:30:18 -- accel/accel.sh@20 -- # val=2
00:06:48.484 11:30:18 -- accel/accel.sh@21 -- # case "$var" in
00:06:48.484 11:30:18 -- accel/accel.sh@19 -- # IFS=:
00:06:48.484 11:30:18 -- accel/accel.sh@19 -- # read -r var val
00:06:48.484 11:30:18 -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:48.484 11:30:18 -- accel/accel.sh@21 -- # case "$var" in
00:06:48.484 11:30:18 -- accel/accel.sh@19 -- # IFS=:
00:06:48.484 11:30:18 -- accel/accel.sh@19 -- # read -r var val
00:06:48.484 11:30:18 -- accel/accel.sh@20 -- # val=
00:06:48.484 11:30:18 -- accel/accel.sh@21 -- # case "$var" in
00:06:48.484 11:30:18 -- accel/accel.sh@19 -- # IFS=:
00:06:48.484 11:30:18 -- accel/accel.sh@19 -- # read -r var val
00:06:48.484 11:30:18 -- accel/accel.sh@20 -- # val=software
00:06:48.484 11:30:18 -- accel/accel.sh@21 -- # case "$var" in
00:06:48.484 11:30:18 -- accel/accel.sh@22 -- # accel_module=software
00:06:48.484 11:30:18 -- accel/accel.sh@19 -- # IFS=:
00:06:48.484 11:30:18 -- accel/accel.sh@19 -- # read -r var val
00:06:48.484 11:30:18 -- accel/accel.sh@20 -- # val=32
00:06:48.484 11:30:18 -- accel/accel.sh@21 -- # case "$var" in
00:06:48.484 11:30:18 -- accel/accel.sh@19 -- # IFS=:
00:06:48.484 11:30:18 -- accel/accel.sh@19 -- # read -r var val
00:06:48.484 11:30:18 -- accel/accel.sh@20 -- # val=32
00:06:48.484 11:30:18 -- accel/accel.sh@21 -- # case "$var" in
00:06:48.484 11:30:18 -- accel/accel.sh@19 -- # IFS=:
00:06:48.484 11:30:18 -- accel/accel.sh@19 -- # read -r var val
00:06:48.484 11:30:18 -- accel/accel.sh@20 -- # val=1
00:06:48.484 11:30:18 -- accel/accel.sh@21 -- # case "$var" in
00:06:48.484 11:30:18 -- accel/accel.sh@19 -- # IFS=:
00:06:48.484 11:30:18 -- accel/accel.sh@19 -- # read -r var val
00:06:48.484 11:30:18 -- accel/accel.sh@20 -- # val='1 seconds'
00:06:48.484 11:30:18 -- accel/accel.sh@21 -- # case "$var" in
00:06:48.484 11:30:18 -- accel/accel.sh@19 -- # IFS=:
00:06:48.484 11:30:18 -- accel/accel.sh@19 -- # read -r var val
00:06:48.484 11:30:18 -- accel/accel.sh@20 -- # val=Yes
00:06:48.484 11:30:18 -- accel/accel.sh@21 -- # case "$var" in
00:06:48.484 11:30:18 -- accel/accel.sh@19 -- # IFS=:
00:06:48.484 11:30:18 -- accel/accel.sh@19 -- # read -r var val
00:06:48.484 11:30:18 -- accel/accel.sh@20 -- # val=
00:06:48.484 11:30:18 -- accel/accel.sh@21 -- # case "$var" in
00:06:48.484 11:30:18 -- accel/accel.sh@19 -- # IFS=:
00:06:48.484 11:30:18 -- accel/accel.sh@19 -- # read -r var val
00:06:48.484 11:30:18 -- accel/accel.sh@20 -- # val=
00:06:48.484 11:30:18 -- accel/accel.sh@21 -- # case "$var" in
00:06:48.484 11:30:18 -- accel/accel.sh@19 -- # IFS=:
00:06:48.484 11:30:18 -- accel/accel.sh@19 -- # read -r var val
00:06:49.421 11:30:20 -- accel/accel.sh@20 -- # val=
00:06:49.421 11:30:20 -- accel/accel.sh@21 -- # case "$var" in
00:06:49.421 11:30:20 -- accel/accel.sh@19 -- # IFS=:
00:06:49.421 11:30:20 -- accel/accel.sh@19 -- # read -r var val
00:06:49.421 11:30:20 -- accel/accel.sh@20 -- # val=
00:06:49.421 11:30:20 -- accel/accel.sh@21 -- # case "$var" in
00:06:49.421 11:30:20 -- accel/accel.sh@19 -- # IFS=:
00:06:49.421 11:30:20 -- accel/accel.sh@19 -- # read -r var val
00:06:49.421 11:30:20 -- accel/accel.sh@20 -- # val=
00:06:49.421 11:30:20 -- accel/accel.sh@21 -- # case "$var" in
00:06:49.421 11:30:20 -- accel/accel.sh@19 -- # IFS=:
00:06:49.421 11:30:20 -- accel/accel.sh@19 -- # read -r var val
00:06:49.421 11:30:20 -- accel/accel.sh@20 -- # val=
00:06:49.421 11:30:20 -- accel/accel.sh@21 -- # case "$var" in
00:06:49.421 11:30:20 -- accel/accel.sh@19 -- # IFS=:
00:06:49.421 11:30:20 -- accel/accel.sh@19 -- # read -r var val
00:06:49.421 11:30:20 -- accel/accel.sh@20 -- # val=
00:06:49.421 11:30:20 -- accel/accel.sh@21 -- # case "$var" in
00:06:49.421 11:30:20 -- accel/accel.sh@19 -- # IFS=:
00:06:49.421 11:30:20 -- accel/accel.sh@19 -- # read -r var val
00:06:49.421 11:30:20 -- accel/accel.sh@20 -- # val=
00:06:49.421 11:30:20 -- accel/accel.sh@21 -- # case "$var" in
00:06:49.421 11:30:20 -- accel/accel.sh@19 -- # IFS=:
00:06:49.421 11:30:20 -- accel/accel.sh@19 -- # read -r var val
00:06:49.421 11:30:20 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:49.421 11:30:20 -- accel/accel.sh@27 -- # [[ -n xor ]]
00:06:49.421 11:30:20 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:49.421
00:06:49.421 real 0m1.419s
00:06:49.421 user 0m1.281s
00:06:49.421 sys 0m0.151s
00:06:49.421 11:30:20 -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:49.421 11:30:20 -- common/autotest_common.sh@10 -- # set +x
00:06:49.421 ************************************
00:06:49.421 END TEST accel_xor
00:06:49.421 ************************************
00:06:49.421 11:30:20 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:06:49.421 11:30:20 -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']'
00:06:49.421 11:30:20 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:49.421 11:30:20 -- common/autotest_common.sh@10 -- # set +x
00:06:49.680 ************************************
00:06:49.680 START TEST accel_xor
00:06:49.680 ************************************
00:06:49.680 11:30:20 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3
00:06:49.680 11:30:20 -- accel/accel.sh@16 -- # local accel_opc
00:06:49.680 11:30:20 -- accel/accel.sh@17 -- # local accel_module
00:06:49.680 11:30:20 -- accel/accel.sh@19 -- # IFS=:
00:06:49.680 11:30:20 -- accel/accel.sh@19 -- # read -r var val
00:06:49.680 11:30:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
00:06:49.680 11:30:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:06:49.680 11:30:20 -- accel/accel.sh@12 -- # build_accel_config
00:06:49.680 11:30:20 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:49.680 11:30:20 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:49.680 11:30:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:49.680 11:30:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:49.680 11:30:20 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:49.680 11:30:20 -- accel/accel.sh@40 -- # local IFS=,
00:06:49.680 11:30:20 -- accel/accel.sh@41 -- # jq -r .
00:06:49.680 [2024-05-15 11:30:20.255410] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization...
00:06:49.680 [2024-05-15 11:30:20.255472] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2904366 ]
00:06:49.680 EAL: No free 2048 kB hugepages reported on node 1
00:06:49.680 [2024-05-15 11:30:20.331062] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:49.680 [2024-05-15 11:30:20.424274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:49.939 11:30:20 -- accel/accel.sh@20 -- # val=
00:06:49.939 11:30:20 -- accel/accel.sh@21 -- # case "$var" in
00:06:49.939 11:30:20 -- accel/accel.sh@19 -- # IFS=:
00:06:49.939 11:30:20 -- accel/accel.sh@19 -- # read -r var val
00:06:49.939 11:30:20 -- accel/accel.sh@20 -- # val=
00:06:49.939 11:30:20 -- accel/accel.sh@21 -- # case "$var" in
00:06:49.939 11:30:20 -- accel/accel.sh@19 -- # IFS=:
00:06:49.939 11:30:20 -- accel/accel.sh@19 -- # read -r var val
00:06:49.939 11:30:20 -- accel/accel.sh@20 -- # val=0x1
00:06:49.939 11:30:20 -- accel/accel.sh@21 -- # case "$var" in
00:06:49.939 11:30:20 -- accel/accel.sh@19 -- # IFS=:
00:06:49.939 11:30:20 -- accel/accel.sh@19 -- # read -r var val
00:06:49.939 11:30:20 -- accel/accel.sh@20 -- # val=
00:06:49.939 11:30:20 -- accel/accel.sh@21 -- # case "$var" in
00:06:49.939 11:30:20 -- accel/accel.sh@19 -- # IFS=:
00:06:49.939 11:30:20 -- accel/accel.sh@19 -- # read -r var val
00:06:49.939 11:30:20 -- accel/accel.sh@20 -- # val=
00:06:49.939 11:30:20 -- accel/accel.sh@21 -- # case "$var" in
00:06:49.939 11:30:20 -- accel/accel.sh@19 -- # IFS=:
00:06:49.939 11:30:20 -- accel/accel.sh@19 -- # read -r var val
00:06:49.939 11:30:20 -- accel/accel.sh@20 -- # val=xor
00:06:49.939 11:30:20 -- accel/accel.sh@21 -- # case "$var" in
00:06:49.939 11:30:20 -- accel/accel.sh@23 -- # accel_opc=xor
00:06:49.939 11:30:20 -- accel/accel.sh@19 -- # IFS=:
00:06:49.939 11:30:20 -- accel/accel.sh@19 -- # read -r var val
00:06:49.939 11:30:20 -- accel/accel.sh@20 -- # val=3
00:06:49.939 11:30:20 -- accel/accel.sh@21 -- # case "$var" in
00:06:49.939 11:30:20 -- accel/accel.sh@19 -- # IFS=:
00:06:49.939 11:30:20 -- accel/accel.sh@19 -- # read -r var val
00:06:49.939 11:30:20 -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:49.939 11:30:20 -- accel/accel.sh@21 -- # case "$var" in
00:06:49.939 11:30:20 -- accel/accel.sh@19 -- # IFS=:
00:06:49.939 11:30:20 -- accel/accel.sh@19 -- # read -r var val
00:06:49.939 11:30:20 -- accel/accel.sh@20 -- # val=
00:06:49.939 11:30:20 -- accel/accel.sh@21 -- # case "$var" in
00:06:49.939 11:30:20 -- accel/accel.sh@19 -- # IFS=:
00:06:49.939 11:30:20 -- accel/accel.sh@19 -- # read -r var val
00:06:49.939 11:30:20 -- accel/accel.sh@20 -- # val=software
00:06:49.939 11:30:20 -- accel/accel.sh@21 -- # case "$var" in
00:06:49.939 11:30:20 -- accel/accel.sh@22 -- # accel_module=software
00:06:49.939 11:30:20 -- accel/accel.sh@19 -- # IFS=:
00:06:49.939 11:30:20 -- accel/accel.sh@19 -- # read -r var val
00:06:49.939 11:30:20 -- accel/accel.sh@20 -- # val=32
00:06:49.939 11:30:20 -- accel/accel.sh@21 -- # case "$var" in
00:06:49.939 11:30:20 -- accel/accel.sh@19 -- # IFS=:
00:06:49.939 11:30:20 -- accel/accel.sh@19 -- # read -r var val
00:06:49.939 11:30:20 -- accel/accel.sh@20 -- # val=32
00:06:49.939 11:30:20 -- accel/accel.sh@21 -- # case "$var" in
00:06:49.939 11:30:20 -- accel/accel.sh@19 -- # IFS=:
00:06:49.939 11:30:20 -- accel/accel.sh@19 -- # read -r var val
00:06:49.940 11:30:20 -- accel/accel.sh@20 -- # val=1
00:06:49.940 11:30:20 -- accel/accel.sh@21 -- # case "$var" in
00:06:49.940 11:30:20 -- accel/accel.sh@19 -- # IFS=:
00:06:49.940 11:30:20 -- accel/accel.sh@19 -- # read -r var val
00:06:49.940 11:30:20 -- accel/accel.sh@20 -- # val='1 seconds'
00:06:49.940 11:30:20 -- accel/accel.sh@21 -- # case "$var" in
00:06:49.940 11:30:20 -- accel/accel.sh@19 -- # IFS=:
00:06:49.940 11:30:20 -- accel/accel.sh@19 -- # read -r var val
00:06:49.940 11:30:20 -- accel/accel.sh@20 -- # val=Yes
00:06:49.940 11:30:20 -- accel/accel.sh@21 -- # case "$var" in
00:06:49.940 11:30:20 -- accel/accel.sh@19 -- # IFS=:
00:06:49.940 11:30:20 -- accel/accel.sh@19 -- # read -r var val
00:06:49.940 11:30:20 -- accel/accel.sh@20 -- # val=
00:06:49.940 11:30:20 -- accel/accel.sh@21 -- # case "$var" in
00:06:49.940 11:30:20 -- accel/accel.sh@19 -- # IFS=:
00:06:49.940 11:30:20 -- accel/accel.sh@19 -- # read -r var val
00:06:49.940 11:30:20 -- accel/accel.sh@20 -- # val=
00:06:49.940 11:30:20 -- accel/accel.sh@21 -- # case "$var" in
00:06:49.940 11:30:20 -- accel/accel.sh@19 -- # IFS=:
00:06:49.940 11:30:20 -- accel/accel.sh@19 -- # read -r var val
00:06:51.318 11:30:21 -- accel/accel.sh@20 -- # val=
00:06:51.318 11:30:21 -- accel/accel.sh@21 -- # case "$var" in
00:06:51.318 11:30:21 -- accel/accel.sh@19 -- # IFS=:
00:06:51.318 11:30:21 -- accel/accel.sh@19 -- # read -r var val
00:06:51.318 11:30:21 -- accel/accel.sh@20 -- # val=
00:06:51.318 11:30:21 -- accel/accel.sh@21 -- # case "$var" in
00:06:51.318 11:30:21 -- accel/accel.sh@19 -- # IFS=:
00:06:51.318 11:30:21 -- accel/accel.sh@19 -- # read -r var val
00:06:51.318 11:30:21 -- accel/accel.sh@20 -- # val=
00:06:51.318 11:30:21 -- accel/accel.sh@21 -- # case "$var" in
00:06:51.318 11:30:21 -- accel/accel.sh@19 -- # IFS=:
00:06:51.318 11:30:21 -- accel/accel.sh@19 -- # read -r var val
00:06:51.319 11:30:21 -- accel/accel.sh@20 -- # val=
00:06:51.319 11:30:21 -- accel/accel.sh@21 -- # case "$var" in
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # IFS=:
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # read -r var val
00:06:51.319 11:30:21 -- accel/accel.sh@20 -- # val=
00:06:51.319 11:30:21 -- accel/accel.sh@21 -- # case "$var" in
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # IFS=:
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # read -r var val
00:06:51.319 11:30:21 -- accel/accel.sh@20 -- # val=
00:06:51.319 11:30:21 -- accel/accel.sh@21 -- # case "$var" in
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # IFS=:
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # read -r var val
00:06:51.319 11:30:21 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:51.319 11:30:21 -- accel/accel.sh@27 -- # [[ -n xor ]]
00:06:51.319 11:30:21 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:51.319
00:06:51.319 real 0m1.427s
00:06:51.319 user 0m1.285s
00:06:51.319 sys 0m0.155s
00:06:51.319 11:30:21 -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:51.319 11:30:21 -- common/autotest_common.sh@10 -- # set +x
00:06:51.319 ************************************
00:06:51.319 END TEST accel_xor
00:06:51.319 ************************************
00:06:51.319 11:30:21 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:06:51.319 11:30:21 -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']'
00:06:51.319 11:30:21 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:51.319 11:30:21 -- common/autotest_common.sh@10 -- # set +x
00:06:51.319 ************************************
00:06:51.319 START TEST accel_dif_verify
00:06:51.319 ************************************
00:06:51.319 11:30:21 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify
00:06:51.319 11:30:21 -- accel/accel.sh@16 -- # local accel_opc
00:06:51.319 11:30:21 -- accel/accel.sh@17 -- # local accel_module
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # IFS=:
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # read -r var val
00:06:51.319 11:30:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:06:51.319 11:30:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:06:51.319 11:30:21 -- accel/accel.sh@12 -- # build_accel_config
00:06:51.319 11:30:21 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:51.319 11:30:21 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:51.319 11:30:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:51.319 11:30:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:51.319 11:30:21 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:51.319 11:30:21 -- accel/accel.sh@40 -- # local IFS=,
00:06:51.319 11:30:21 -- accel/accel.sh@41 -- # jq -r .
00:06:51.319 [2024-05-15 11:30:21.774817] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization...
00:06:51.319 [2024-05-15 11:30:21.774884] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2904569 ]
00:06:51.319 EAL: No free 2048 kB hugepages reported on node 1
00:06:51.319 [2024-05-15 11:30:21.846690] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:51.319 [2024-05-15 11:30:21.932416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:51.319 11:30:21 -- accel/accel.sh@20 -- # val=
00:06:51.319 11:30:21 -- accel/accel.sh@21 -- # case "$var" in
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # IFS=:
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # read -r var val
00:06:51.319 11:30:21 -- accel/accel.sh@20 -- # val=
00:06:51.319 11:30:21 -- accel/accel.sh@21 -- # case "$var" in
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # IFS=:
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # read -r var val
00:06:51.319 11:30:21 -- accel/accel.sh@20 -- # val=0x1
00:06:51.319 11:30:21 -- accel/accel.sh@21 -- # case "$var" in
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # IFS=:
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # read -r var val
00:06:51.319 11:30:21 -- accel/accel.sh@20 -- # val=
00:06:51.319 11:30:21 -- accel/accel.sh@21 -- # case "$var" in
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # IFS=:
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # read -r var val
00:06:51.319 11:30:21 -- accel/accel.sh@20 -- # val=
00:06:51.319 11:30:21 -- accel/accel.sh@21 -- # case "$var" in
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # IFS=:
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # read -r var val
00:06:51.319 11:30:21 -- accel/accel.sh@20 -- # val=dif_verify
00:06:51.319 11:30:21 -- accel/accel.sh@21 -- # case "$var" in
00:06:51.319 11:30:21 -- accel/accel.sh@23 -- # accel_opc=dif_verify
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # IFS=:
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # read -r var val
00:06:51.319 11:30:21 -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:51.319 11:30:21 -- accel/accel.sh@21 -- # case "$var" in
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # IFS=:
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # read -r var val
00:06:51.319 11:30:21 -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:51.319 11:30:21 -- accel/accel.sh@21 -- # case "$var" in
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # IFS=:
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # read -r var val
00:06:51.319 11:30:21 -- accel/accel.sh@20 -- # val='512 bytes'
00:06:51.319 11:30:21 -- accel/accel.sh@21 -- # case "$var" in
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # IFS=:
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # read -r var val
00:06:51.319 11:30:21 -- accel/accel.sh@20 -- # val='8 bytes'
00:06:51.319 11:30:21 -- accel/accel.sh@21 -- # case "$var" in
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # IFS=:
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # read -r var val
00:06:51.319 11:30:21 -- accel/accel.sh@20 -- # val=
00:06:51.319 11:30:21 -- accel/accel.sh@21 -- # case "$var" in
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # IFS=:
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # read -r var val
00:06:51.319 11:30:21 -- accel/accel.sh@20 -- # val=software
00:06:51.319 11:30:21 -- accel/accel.sh@21 -- # case "$var" in
00:06:51.319 11:30:21 -- accel/accel.sh@22 -- # accel_module=software
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # IFS=:
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # read -r var val
00:06:51.319 11:30:21 -- accel/accel.sh@20 -- # val=32
00:06:51.319 11:30:21 -- accel/accel.sh@21 -- # case "$var" in
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # IFS=:
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # read -r var val
00:06:51.319 11:30:21 -- accel/accel.sh@20 -- # val=32
00:06:51.319 11:30:21 -- accel/accel.sh@21 -- # case "$var" in
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # IFS=:
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # read -r var val
00:06:51.319 11:30:21 -- accel/accel.sh@20 -- # val=1
00:06:51.319 11:30:21 -- accel/accel.sh@21 -- # case "$var" in
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # IFS=:
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # read -r var val
00:06:51.319 11:30:21 -- accel/accel.sh@20 -- # val='1 seconds'
00:06:51.319 11:30:21 -- accel/accel.sh@21 -- # case "$var" in
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # IFS=:
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # read -r var val
00:06:51.319 11:30:21 -- accel/accel.sh@20 -- # val=No
00:06:51.319 11:30:21 -- accel/accel.sh@21 -- # case "$var" in
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # IFS=:
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # read -r var val
00:06:51.319 11:30:21 -- accel/accel.sh@20 -- # val=
00:06:51.319 11:30:21 -- accel/accel.sh@21 -- # case "$var" in
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # IFS=:
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # read -r var val
00:06:51.319 11:30:21 -- accel/accel.sh@20 -- # val=
00:06:51.319 11:30:21 -- accel/accel.sh@21 -- # case "$var" in
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # IFS=:
00:06:51.319 11:30:21 -- accel/accel.sh@19 -- # read -r var val
00:06:52.693 11:30:23 -- accel/accel.sh@20 -- # val=
00:06:52.693 11:30:23 -- accel/accel.sh@21 -- # case "$var" in
00:06:52.693 11:30:23 -- accel/accel.sh@19 -- # IFS=:
00:06:52.693 11:30:23 -- accel/accel.sh@19 -- # read -r var val
00:06:52.693 11:30:23 -- accel/accel.sh@20 -- # val=
00:06:52.693 11:30:23 -- accel/accel.sh@21 -- # case "$var" in
00:06:52.693 11:30:23 -- accel/accel.sh@19 -- # IFS=:
00:06:52.693 11:30:23 -- accel/accel.sh@19 -- # read -r var val
00:06:52.693 11:30:23 -- accel/accel.sh@20 -- # val=
00:06:52.693 11:30:23 -- accel/accel.sh@21 -- # case "$var" in
00:06:52.693 11:30:23 -- accel/accel.sh@19 -- # IFS=:
00:06:52.693 11:30:23 -- accel/accel.sh@19 -- # read -r var val
00:06:52.693 11:30:23 -- accel/accel.sh@20 -- # val=
00:06:52.693 11:30:23 -- accel/accel.sh@21 -- # case "$var" in
00:06:52.693 11:30:23 -- accel/accel.sh@19 -- # IFS=:
00:06:52.693 11:30:23 -- accel/accel.sh@19 -- # read -r var val
00:06:52.693 11:30:23 -- accel/accel.sh@20 -- # val=
00:06:52.693 11:30:23 -- accel/accel.sh@21 -- # case "$var" in
00:06:52.693 11:30:23 -- accel/accel.sh@19 -- # IFS=:
00:06:52.693 11:30:23 -- accel/accel.sh@19 -- # read -r var val
00:06:52.693 11:30:23 -- accel/accel.sh@20 -- # val=
00:06:52.693 11:30:23 -- accel/accel.sh@21 -- # case "$var" in
00:06:52.693 11:30:23 -- accel/accel.sh@19 -- # IFS=:
00:06:52.693 11:30:23 -- accel/accel.sh@19 -- # read -r var val
00:06:52.693 11:30:23 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:52.693 11:30:23 -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:06:52.693 11:30:23 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:52.693
00:06:52.693 real 0m1.414s
00:06:52.694 user 0m1.285s
00:06:52.694 sys 0m0.144s
00:06:52.694 11:30:23 -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:52.694 11:30:23 -- common/autotest_common.sh@10 -- # set +x
00:06:52.694 ************************************
00:06:52.694 END TEST accel_dif_verify
00:06:52.694 ************************************
00:06:52.694 11:30:23 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:06:52.694 11:30:23 -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']'
00:06:52.694 11:30:23 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:52.694 11:30:23 -- common/autotest_common.sh@10 -- # set +x
00:06:52.694 ************************************
00:06:52.694 START TEST accel_dif_generate
00:06:52.694 ************************************
00:06:52.694 11:30:23 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate
00:06:52.694 11:30:23 -- accel/accel.sh@16 -- # local accel_opc
00:06:52.694 11:30:23 -- accel/accel.sh@17 -- # local accel_module
00:06:52.694 11:30:23 -- accel/accel.sh@19 -- # IFS=:
00:06:52.694 11:30:23 -- accel/accel.sh@19 -- # read -r var val
00:06:52.694 11:30:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
00:06:52.694 11:30:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:06:52.694 11:30:23 -- accel/accel.sh@12 -- # build_accel_config
00:06:52.694 11:30:23 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:52.694 11:30:23 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:52.694 11:30:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:52.694 11:30:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:52.694 11:30:23 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:52.694 11:30:23 -- accel/accel.sh@40 -- # local IFS=,
00:06:52.694 11:30:23 -- accel/accel.sh@41 -- # jq -r .
00:06:52.694 [2024-05-15 11:30:23.277943] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization...
00:06:52.694 [2024-05-15 11:30:23.278002] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2904770 ]
00:06:52.694 EAL: No free 2048 kB hugepages reported on node 1
00:06:52.694 [2024-05-15 11:30:23.348879] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:52.694 [2024-05-15 11:30:23.432687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:52.953 11:30:23 -- accel/accel.sh@20 -- # val=
00:06:52.953 11:30:23 -- accel/accel.sh@21 -- # case "$var" in
00:06:52.953 11:30:23 -- accel/accel.sh@19 -- # IFS=:
00:06:52.953 11:30:23 -- accel/accel.sh@19 -- # read -r var val
00:06:52.953 11:30:23 -- accel/accel.sh@20 -- # val=
00:06:52.953 11:30:23 -- accel/accel.sh@21 -- # case "$var" in
00:06:52.953 11:30:23 -- accel/accel.sh@19 -- # IFS=:
00:06:52.953 11:30:23 -- accel/accel.sh@19 -- # read -r var val
00:06:52.953 11:30:23 -- accel/accel.sh@20 -- # val=0x1
00:06:52.953 11:30:23 -- accel/accel.sh@21 -- # case "$var" in
00:06:52.953 11:30:23 -- accel/accel.sh@19 -- # IFS=:
00:06:52.953 11:30:23 -- accel/accel.sh@19 -- # read -r var val
00:06:52.953 11:30:23 -- accel/accel.sh@20 -- # val=
00:06:52.953 11:30:23 -- accel/accel.sh@21 -- # case "$var" in
00:06:52.953 11:30:23 -- accel/accel.sh@19 -- # IFS=:
00:06:52.953 11:30:23 -- accel/accel.sh@19 -- # read -r var val
00:06:52.953 11:30:23 -- accel/accel.sh@20 -- # val=
00:06:52.953 11:30:23 -- accel/accel.sh@21 -- # case "$var" in
00:06:52.953 11:30:23 -- accel/accel.sh@19 -- # IFS=:
00:06:52.953 11:30:23 -- accel/accel.sh@19 -- # read -r var val
00:06:52.953 11:30:23 -- accel/accel.sh@20 -- # val=dif_generate
00:06:52.953 11:30:23 -- accel/accel.sh@21 -- # case "$var" in
00:06:52.953 11:30:23 -- accel/accel.sh@23 -- # accel_opc=dif_generate
00:06:52.953 11:30:23 -- accel/accel.sh@19 -- # IFS=:
00:06:52.953 11:30:23 -- accel/accel.sh@19 -- # read -r var val
00:06:52.953 11:30:23 -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:52.953 11:30:23 -- accel/accel.sh@21 -- # case "$var" in
00:06:52.953 11:30:23 -- accel/accel.sh@19 -- # IFS=:
00:06:52.953 11:30:23 -- accel/accel.sh@19 -- # read -r var val
00:06:52.953 11:30:23 -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:52.953 11:30:23 -- accel/accel.sh@21 -- # case "$var" in
00:06:52.953 11:30:23 -- accel/accel.sh@19 -- # IFS=:
00:06:52.953 11:30:23 -- accel/accel.sh@19 -- # read -r var val
00:06:52.953 11:30:23 -- accel/accel.sh@20 -- # val='512 bytes'
00:06:52.953 11:30:23 -- accel/accel.sh@21 -- # case "$var" in
00:06:52.953 11:30:23 -- accel/accel.sh@19 -- # IFS=:
00:06:52.953 11:30:23 -- accel/accel.sh@19 -- # read -r var val
00:06:52.953 11:30:23 -- accel/accel.sh@20 -- # val='8 bytes'
00:06:52.953 11:30:23 -- accel/accel.sh@21 -- # case "$var" in
00:06:52.953 11:30:23 -- accel/accel.sh@19 -- # IFS=:
00:06:52.953 11:30:23 -- accel/accel.sh@19 -- # read -r var val
00:06:52.953 11:30:23 -- accel/accel.sh@20 -- # val=
00:06:52.953 11:30:23 -- accel/accel.sh@21 -- # case "$var" in
00:06:52.953 11:30:23 -- accel/accel.sh@19 -- # IFS=:
00:06:52.953 11:30:23 -- accel/accel.sh@19 -- # read -r var val
00:06:52.953 11:30:23 -- accel/accel.sh@20 -- # val=software
00:06:52.953 11:30:23 -- accel/accel.sh@21 -- # case "$var" in
00:06:52.953 11:30:23 -- accel/accel.sh@22 -- # accel_module=software
00:06:52.953 11:30:23 -- accel/accel.sh@19 -- # IFS=:
00:06:52.953 11:30:23 -- accel/accel.sh@19 -- # read -r var val
00:06:52.953 11:30:23 -- accel/accel.sh@20 -- # val=32
00:06:52.953 11:30:23 -- accel/accel.sh@21 -- # case "$var" in
00:06:52.953 11:30:23 -- accel/accel.sh@19 -- # IFS=:
00:06:52.953 11:30:23 -- accel/accel.sh@19 -- # read -r var val
00:06:52.953 11:30:23 -- accel/accel.sh@20 -- # val=32
00:06:52.953 11:30:23 -- accel/accel.sh@21 -- # case "$var" in
00:06:52.953 11:30:23 -- accel/accel.sh@19 -- # IFS=:
00:06:52.953 11:30:23 -- accel/accel.sh@19 -- # read -r var val
00:06:52.953 11:30:23 -- accel/accel.sh@20 -- # val=1
00:06:52.953 11:30:23 -- accel/accel.sh@21 -- # case "$var" in
00:06:52.953 11:30:23 -- accel/accel.sh@19 -- # IFS=:
00:06:52.953 11:30:23 -- accel/accel.sh@19 -- # read -r var val
00:06:52.953 11:30:23 -- accel/accel.sh@20 -- # val='1 seconds'
00:06:52.953 11:30:23 -- accel/accel.sh@21 -- # case "$var" in
00:06:52.953 11:30:23 -- accel/accel.sh@19 -- # IFS=:
00:06:52.953 11:30:23 -- accel/accel.sh@19 -- # read -r var val
00:06:52.953 11:30:23 -- accel/accel.sh@20 -- # val=No
00:06:52.953 11:30:23 -- accel/accel.sh@21 -- # case "$var" in
00:06:52.953 11:30:23 -- accel/accel.sh@19 -- # IFS=:
00:06:52.953 11:30:23 -- accel/accel.sh@19 -- # read -r var val
00:06:52.953 11:30:23 -- accel/accel.sh@20 -- # val=
00:06:52.953 11:30:23 -- accel/accel.sh@21 -- # case "$var" in
00:06:52.953 11:30:23 -- accel/accel.sh@19 -- # IFS=:
00:06:52.953 11:30:23 -- accel/accel.sh@19 -- # read -r var val
00:06:52.953 11:30:23 -- accel/accel.sh@20 -- # val=
00:06:52.953 11:30:23 -- accel/accel.sh@21 -- # case "$var" in
00:06:52.953 11:30:23 -- accel/accel.sh@19 -- # IFS=:
00:06:52.953 11:30:23 -- accel/accel.sh@19 -- # read -r var val
00:06:53.889 11:30:24 -- accel/accel.sh@20 -- # val=
00:06:53.889 11:30:24 -- accel/accel.sh@21 -- # case "$var" in
00:06:53.889 11:30:24 -- accel/accel.sh@19 -- # IFS=:
00:06:53.889 11:30:24 -- accel/accel.sh@19 -- # read -r var val
00:06:53.889 11:30:24 -- accel/accel.sh@20 -- # val=
00:06:53.889 11:30:24 -- accel/accel.sh@21 -- # case "$var" in
00:06:53.889 11:30:24 -- accel/accel.sh@19 -- # IFS=:
00:06:53.889 11:30:24 -- accel/accel.sh@19 -- # read -r var val
00:06:53.889 11:30:24 -- accel/accel.sh@20 -- # val=
00:06:53.889 11:30:24 -- accel/accel.sh@21 -- # case "$var" in
00:06:53.889 11:30:24 -- accel/accel.sh@19 -- # IFS=:
00:06:53.889 11:30:24 -- accel/accel.sh@19 -- # read -r var val
00:06:53.889 11:30:24 -- accel/accel.sh@20 -- # val=
00:06:53.889 11:30:24 -- accel/accel.sh@21 -- # case "$var" in
00:06:53.889 11:30:24 -- accel/accel.sh@19 -- # IFS=:
00:06:53.889 11:30:24 -- accel/accel.sh@19 -- # read -r var val
00:06:53.889 11:30:24 -- accel/accel.sh@20 -- # val=
00:06:53.889 11:30:24 -- accel/accel.sh@21 -- # case "$var" in
00:06:53.889 11:30:24 -- accel/accel.sh@19 -- # IFS=:
00:06:53.889 11:30:24 -- accel/accel.sh@19 -- # read -r var val
00:06:53.889 11:30:24 -- accel/accel.sh@20 -- # val=
00:06:53.889 11:30:24 -- accel/accel.sh@21 -- # case "$var" in
00:06:53.889 11:30:24 -- accel/accel.sh@19 -- # IFS=:
00:06:53.889 11:30:24 -- accel/accel.sh@19 -- # read -r var val
00:06:53.889 11:30:24 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:53.889 11:30:24 -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
00:06:53.889 11:30:24 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:53.889
00:06:53.889 real 0m1.402s
00:06:53.889 user 0m1.288s
00:06:53.889 sys 0m0.128s
00:06:54.149 11:30:24 -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:54.149 11:30:24 -- common/autotest_common.sh@10 -- # set +x
00:06:54.149 ************************************
00:06:54.149 END TEST accel_dif_generate
00:06:54.149 ************************************
00:06:54.149 11:30:24 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:06:54.149 11:30:24 -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']'
00:06:54.149 11:30:24 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:54.149 11:30:24 -- common/autotest_common.sh@10 -- # set +x
00:06:54.149 ************************************
00:06:54.149 START TEST accel_dif_generate_copy
00:06:54.149 ************************************
00:06:54.149 11:30:24 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy
00:06:54.149 11:30:24 -- accel/accel.sh@16 -- # local accel_opc
00:06:54.149 11:30:24 -- accel/accel.sh@17 -- # local accel_module
00:06:54.149 11:30:24 -- accel/accel.sh@19 -- # IFS=:
00:06:54.149 11:30:24 -- accel/accel.sh@19 -- # read -r var val
00:06:54.149 11:30:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
00:06:54.149 11:30:24 -- accel/accel.sh@12 -- # build_accel_config
00:06:54.149 11:30:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:06:54.149 11:30:24 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:54.149 11:30:24 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:54.149 11:30:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:54.149 11:30:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:54.149 11:30:24 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:54.149 11:30:24 -- accel/accel.sh@40 -- # local IFS=,
00:06:54.149 11:30:24 -- accel/accel.sh@41 -- # jq -r .
00:06:54.149 [2024-05-15 11:30:24.770195] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization...
00:06:54.149 [2024-05-15 11:30:24.770255] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2904972 ] 00:06:54.149 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.149 [2024-05-15 11:30:24.840830] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.409 [2024-05-15 11:30:24.928501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.409 11:30:24 -- accel/accel.sh@20 -- # val= 00:06:54.409 11:30:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.409 11:30:24 -- accel/accel.sh@19 -- # IFS=: 00:06:54.409 11:30:24 -- accel/accel.sh@19 -- # read -r var val 00:06:54.409 11:30:24 -- accel/accel.sh@20 -- # val= 00:06:54.409 11:30:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.409 11:30:24 -- accel/accel.sh@19 -- # IFS=: 00:06:54.409 11:30:24 -- accel/accel.sh@19 -- # read -r var val 00:06:54.409 11:30:24 -- accel/accel.sh@20 -- # val=0x1 00:06:54.409 11:30:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.409 11:30:24 -- accel/accel.sh@19 -- # IFS=: 00:06:54.409 11:30:24 -- accel/accel.sh@19 -- # read -r var val 00:06:54.409 11:30:24 -- accel/accel.sh@20 -- # val= 00:06:54.409 11:30:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.409 11:30:24 -- accel/accel.sh@19 -- # IFS=: 00:06:54.409 11:30:24 -- accel/accel.sh@19 -- # read -r var val 00:06:54.409 11:30:24 -- accel/accel.sh@20 -- # val= 00:06:54.409 11:30:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.409 11:30:24 -- accel/accel.sh@19 -- # IFS=: 00:06:54.409 11:30:24 -- accel/accel.sh@19 -- # read -r var val 00:06:54.409 11:30:24 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:54.409 11:30:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.409 11:30:24 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:54.409 11:30:24 -- accel/accel.sh@19 -- # IFS=: 00:06:54.409 11:30:24 -- accel/accel.sh@19 -- # read -r var val 00:06:54.409 11:30:24 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.409 11:30:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.409 11:30:24 -- accel/accel.sh@19 -- # IFS=: 00:06:54.409 11:30:24 -- accel/accel.sh@19 -- # read -r var val 00:06:54.409 11:30:24 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.409 11:30:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.409 11:30:24 -- accel/accel.sh@19 -- # IFS=: 00:06:54.409 11:30:24 -- accel/accel.sh@19 -- # read -r var val 00:06:54.409 11:30:24 -- accel/accel.sh@20 -- # val= 00:06:54.409 11:30:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.409 11:30:24 -- accel/accel.sh@19 -- # IFS=: 00:06:54.409 11:30:24 -- accel/accel.sh@19 -- # read -r var val 00:06:54.409 11:30:24 -- accel/accel.sh@20 -- # val=software 00:06:54.409 11:30:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.409 11:30:24 -- accel/accel.sh@22 -- # accel_module=software 00:06:54.409 11:30:24 -- accel/accel.sh@19 -- # IFS=: 00:06:54.409 11:30:24 -- accel/accel.sh@19 -- # read -r var val 00:06:54.409 11:30:24 -- accel/accel.sh@20 -- # val=32 00:06:54.409 11:30:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.409 11:30:24 -- accel/accel.sh@19 -- # IFS=: 00:06:54.409 11:30:24 -- accel/accel.sh@19 -- # read -r var val 00:06:54.409 11:30:24 -- accel/accel.sh@20 -- # val=32 00:06:54.409 11:30:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.409 11:30:24 -- accel/accel.sh@19 -- # IFS=: 00:06:54.409 11:30:24 -- accel/accel.sh@19 -- # read -r 
var val 00:06:54.409 11:30:24 -- accel/accel.sh@20 -- # val=1 00:06:54.409 11:30:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.409 11:30:24 -- accel/accel.sh@19 -- # IFS=: 00:06:54.409 11:30:24 -- accel/accel.sh@19 -- # read -r var val 00:06:54.409 11:30:24 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.409 11:30:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.409 11:30:24 -- accel/accel.sh@19 -- # IFS=: 00:06:54.409 11:30:24 -- accel/accel.sh@19 -- # read -r var val 00:06:54.409 11:30:24 -- accel/accel.sh@20 -- # val=No 00:06:54.409 11:30:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.409 11:30:24 -- accel/accel.sh@19 -- # IFS=: 00:06:54.409 11:30:24 -- accel/accel.sh@19 -- # read -r var val 00:06:54.409 11:30:24 -- accel/accel.sh@20 -- # val= 00:06:54.409 11:30:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.409 11:30:24 -- accel/accel.sh@19 -- # IFS=: 00:06:54.409 11:30:24 -- accel/accel.sh@19 -- # read -r var val 00:06:54.409 11:30:24 -- accel/accel.sh@20 -- # val= 00:06:54.409 11:30:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.409 11:30:24 -- accel/accel.sh@19 -- # IFS=: 00:06:54.409 11:30:24 -- accel/accel.sh@19 -- # read -r var val 00:06:55.788 11:30:26 -- accel/accel.sh@20 -- # val= 00:06:55.788 11:30:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.788 11:30:26 -- accel/accel.sh@19 -- # IFS=: 00:06:55.788 11:30:26 -- accel/accel.sh@19 -- # read -r var val 00:06:55.788 11:30:26 -- accel/accel.sh@20 -- # val= 00:06:55.788 11:30:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.788 11:30:26 -- accel/accel.sh@19 -- # IFS=: 00:06:55.788 11:30:26 -- accel/accel.sh@19 -- # read -r var val 00:06:55.788 11:30:26 -- accel/accel.sh@20 -- # val= 00:06:55.788 11:30:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.788 11:30:26 -- accel/accel.sh@19 -- # IFS=: 00:06:55.788 11:30:26 -- accel/accel.sh@19 -- # read -r var val 00:06:55.788 11:30:26 -- accel/accel.sh@20 -- # val= 00:06:55.788 11:30:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.788 11:30:26 -- accel/accel.sh@19 -- # IFS=: 00:06:55.788 11:30:26 -- accel/accel.sh@19 -- # read -r var val 00:06:55.788 11:30:26 -- accel/accel.sh@20 -- # val= 00:06:55.788 11:30:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.788 11:30:26 -- accel/accel.sh@19 -- # IFS=: 00:06:55.788 11:30:26 -- accel/accel.sh@19 -- # read -r var val 00:06:55.788 11:30:26 -- accel/accel.sh@20 -- # val= 00:06:55.788 11:30:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.788 11:30:26 -- accel/accel.sh@19 -- # IFS=: 00:06:55.788 11:30:26 -- accel/accel.sh@19 -- # read -r var val 00:06:55.788 11:30:26 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:55.788 11:30:26 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:55.788 11:30:26 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.788 00:06:55.788 real 0m1.405s 00:06:55.788 user 0m1.281s 00:06:55.788 sys 0m0.137s 00:06:55.788 11:30:26 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:55.788 11:30:26 -- common/autotest_common.sh@10 -- # set +x 00:06:55.788 ************************************ 00:06:55.788 END TEST accel_dif_generate_copy 00:06:55.788 ************************************ 00:06:55.788 11:30:26 -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:55.788 11:30:26 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:55.788 11:30:26 -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:55.788 11:30:26 -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:06:55.788 11:30:26 -- common/autotest_common.sh@10 -- # set +x 00:06:55.788 ************************************ 00:06:55.788 START TEST accel_comp 00:06:55.788 ************************************ 00:06:55.788 11:30:26 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:55.788 11:30:26 -- accel/accel.sh@16 -- # local accel_opc 00:06:55.788 11:30:26 -- accel/accel.sh@17 -- # local accel_module 00:06:55.788 11:30:26 -- accel/accel.sh@19 -- # IFS=: 00:06:55.788 11:30:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:55.788 11:30:26 -- accel/accel.sh@19 -- # read -r var val 00:06:55.788 11:30:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:55.788 11:30:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.788 11:30:26 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.788 11:30:26 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.788 11:30:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.788 11:30:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.788 11:30:26 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.788 11:30:26 -- accel/accel.sh@40 -- # local IFS=, 00:06:55.788 11:30:26 -- accel/accel.sh@41 -- # jq -r . 00:06:55.788 [2024-05-15 11:30:26.255260] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:06:55.788 [2024-05-15 11:30:26.255312] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2905178 ] 00:06:55.788 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.788 [2024-05-15 11:30:26.330410] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.788 [2024-05-15 11:30:26.427525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.788 11:30:26 -- accel/accel.sh@20 -- # val= 00:06:55.788 11:30:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.788 11:30:26 -- accel/accel.sh@19 -- # IFS=: 00:06:55.788 11:30:26 -- accel/accel.sh@19 -- # read -r var val 00:06:55.788 11:30:26 -- accel/accel.sh@20 -- # val= 00:06:55.788 11:30:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.788 11:30:26 -- accel/accel.sh@19 -- # IFS=: 00:06:55.788 11:30:26 -- accel/accel.sh@19 -- # read -r var val 00:06:55.788 11:30:26 -- accel/accel.sh@20 -- # val= 00:06:55.788 11:30:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.788 11:30:26 -- accel/accel.sh@19 -- # IFS=: 00:06:55.788 11:30:26 -- accel/accel.sh@19 -- # read -r var val 00:06:55.788 11:30:26 -- accel/accel.sh@20 -- # val=0x1 00:06:55.788 11:30:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.788 11:30:26 -- accel/accel.sh@19 -- # IFS=: 00:06:55.788 11:30:26 -- accel/accel.sh@19 -- # read -r var val 00:06:55.788 11:30:26 -- accel/accel.sh@20 -- # val= 00:06:55.788 11:30:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.788 11:30:26 -- accel/accel.sh@19 -- # IFS=: 00:06:55.788 11:30:26 -- accel/accel.sh@19 -- # read -r var val 00:06:55.788 11:30:26 -- accel/accel.sh@20 -- # val= 00:06:55.788 11:30:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.788 11:30:26 -- accel/accel.sh@19 -- # IFS=: 00:06:55.788 11:30:26 -- 
accel/accel.sh@19 -- # read -r var val 00:06:55.788 11:30:26 -- accel/accel.sh@20 -- # val=compress 00:06:55.788 11:30:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.788 11:30:26 -- accel/accel.sh@23 -- # accel_opc=compress 00:06:55.788 11:30:26 -- accel/accel.sh@19 -- # IFS=: 00:06:55.788 11:30:26 -- accel/accel.sh@19 -- # read -r var val 00:06:55.788 11:30:26 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:55.788 11:30:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.788 11:30:26 -- accel/accel.sh@19 -- # IFS=: 00:06:55.788 11:30:26 -- accel/accel.sh@19 -- # read -r var val 00:06:55.788 11:30:26 -- accel/accel.sh@20 -- # val= 00:06:55.788 11:30:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.788 11:30:26 -- accel/accel.sh@19 -- # IFS=: 00:06:55.788 11:30:26 -- accel/accel.sh@19 -- # read -r var val 00:06:55.788 11:30:26 -- accel/accel.sh@20 -- # val=software 00:06:55.788 11:30:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.788 11:30:26 -- accel/accel.sh@22 -- # accel_module=software 00:06:55.788 11:30:26 -- accel/accel.sh@19 -- # IFS=: 00:06:55.788 11:30:26 -- accel/accel.sh@19 -- # read -r var val 00:06:55.788 11:30:26 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:55.788 11:30:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.789 11:30:26 -- accel/accel.sh@19 -- # IFS=: 00:06:55.789 11:30:26 -- accel/accel.sh@19 -- # read -r var val 00:06:55.789 11:30:26 -- accel/accel.sh@20 -- # val=32 00:06:55.789 11:30:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.789 11:30:26 -- accel/accel.sh@19 -- # IFS=: 00:06:55.789 11:30:26 -- accel/accel.sh@19 -- # read -r var val 00:06:55.789 11:30:26 -- accel/accel.sh@20 -- # val=32 00:06:55.789 11:30:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.789 11:30:26 -- accel/accel.sh@19 -- # IFS=: 00:06:55.789 11:30:26 -- accel/accel.sh@19 -- # read -r var val 00:06:55.789 11:30:26 -- accel/accel.sh@20 -- # val=1 00:06:55.789 11:30:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.789 11:30:26 -- accel/accel.sh@19 -- # IFS=: 00:06:55.789 11:30:26 -- accel/accel.sh@19 -- # read -r var val 00:06:55.789 11:30:26 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:55.789 11:30:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.789 11:30:26 -- accel/accel.sh@19 -- # IFS=: 00:06:55.789 11:30:26 -- accel/accel.sh@19 -- # read -r var val 00:06:55.789 11:30:26 -- accel/accel.sh@20 -- # val=No 00:06:55.789 11:30:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.789 11:30:26 -- accel/accel.sh@19 -- # IFS=: 00:06:55.789 11:30:26 -- accel/accel.sh@19 -- # read -r var val 00:06:55.789 11:30:26 -- accel/accel.sh@20 -- # val= 00:06:55.789 11:30:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.789 11:30:26 -- accel/accel.sh@19 -- # IFS=: 00:06:55.789 11:30:26 -- accel/accel.sh@19 -- # read -r var val 00:06:55.789 11:30:26 -- accel/accel.sh@20 -- # val= 00:06:55.789 11:30:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.789 11:30:26 -- accel/accel.sh@19 -- # IFS=: 00:06:55.789 11:30:26 -- accel/accel.sh@19 -- # read -r var val 00:06:57.166 11:30:27 -- accel/accel.sh@20 -- # val= 00:06:57.166 11:30:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.166 11:30:27 -- accel/accel.sh@19 -- # IFS=: 00:06:57.166 11:30:27 -- accel/accel.sh@19 -- # read -r var val 00:06:57.166 11:30:27 -- accel/accel.sh@20 -- # val= 00:06:57.166 11:30:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.166 11:30:27 -- accel/accel.sh@19 -- # IFS=: 00:06:57.166 11:30:27 -- accel/accel.sh@19 -- # read -r var 
val 00:06:57.166 11:30:27 -- accel/accel.sh@20 -- # val= 00:06:57.166 11:30:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.166 11:30:27 -- accel/accel.sh@19 -- # IFS=: 00:06:57.166 11:30:27 -- accel/accel.sh@19 -- # read -r var val 00:06:57.166 11:30:27 -- accel/accel.sh@20 -- # val= 00:06:57.166 11:30:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.166 11:30:27 -- accel/accel.sh@19 -- # IFS=: 00:06:57.166 11:30:27 -- accel/accel.sh@19 -- # read -r var val 00:06:57.166 11:30:27 -- accel/accel.sh@20 -- # val= 00:06:57.166 11:30:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.166 11:30:27 -- accel/accel.sh@19 -- # IFS=: 00:06:57.166 11:30:27 -- accel/accel.sh@19 -- # read -r var val 00:06:57.166 11:30:27 -- accel/accel.sh@20 -- # val= 00:06:57.166 11:30:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.166 11:30:27 -- accel/accel.sh@19 -- # IFS=: 00:06:57.166 11:30:27 -- accel/accel.sh@19 -- # read -r var val 00:06:57.166 11:30:27 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:57.166 11:30:27 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:57.166 11:30:27 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.166 00:06:57.166 real 0m1.427s 00:06:57.166 user 0m1.296s 00:06:57.166 sys 0m0.146s 00:06:57.166 11:30:27 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.166 11:30:27 -- common/autotest_common.sh@10 -- # set +x 00:06:57.166 ************************************ 00:06:57.166 END TEST accel_comp 00:06:57.166 ************************************ 00:06:57.166 11:30:27 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:57.166 11:30:27 -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:57.166 11:30:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:57.166 11:30:27 -- common/autotest_common.sh@10 -- # set +x 00:06:57.166 ************************************ 00:06:57.166 START TEST accel_decomp 00:06:57.166 ************************************ 00:06:57.166 11:30:27 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:57.166 11:30:27 -- accel/accel.sh@16 -- # local accel_opc 00:06:57.166 11:30:27 -- accel/accel.sh@17 -- # local accel_module 00:06:57.166 11:30:27 -- accel/accel.sh@19 -- # IFS=: 00:06:57.166 11:30:27 -- accel/accel.sh@19 -- # read -r var val 00:06:57.166 11:30:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:57.166 11:30:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:57.166 11:30:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.166 11:30:27 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.166 11:30:27 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.166 11:30:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.166 11:30:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.166 11:30:27 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.166 11:30:27 -- accel/accel.sh@40 -- # local IFS=, 00:06:57.166 11:30:27 -- accel/accel.sh@41 -- # jq -r . 00:06:57.166 [2024-05-15 11:30:27.778163] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
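Each test above closes with a real/user/sys triple followed by an END TEST banner, so per-test wall-clock times can be pulled straight out of a saved copy of this console output. A rough sketch (build.log is an assumed filename for the saved log):

    # Pair every "real ..." measurement with the END TEST line that follows it.
    grep -Eo 'real[[:space:]]+[0-9]+m[0-9.]+s|END TEST [a-z_]+' build.log |
      paste - -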
00:06:57.166 [2024-05-15 11:30:27.778226] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2905422 ] 00:06:57.166 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.166 [2024-05-15 11:30:27.849670] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.426 [2024-05-15 11:30:27.941224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.426 11:30:27 -- accel/accel.sh@20 -- # val= 00:06:57.426 11:30:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.426 11:30:27 -- accel/accel.sh@19 -- # IFS=: 00:06:57.426 11:30:27 -- accel/accel.sh@19 -- # read -r var val 00:06:57.426 11:30:27 -- accel/accel.sh@20 -- # val= 00:06:57.426 11:30:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.426 11:30:27 -- accel/accel.sh@19 -- # IFS=: 00:06:57.426 11:30:27 -- accel/accel.sh@19 -- # read -r var val 00:06:57.426 11:30:27 -- accel/accel.sh@20 -- # val= 00:06:57.426 11:30:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.426 11:30:27 -- accel/accel.sh@19 -- # IFS=: 00:06:57.426 11:30:27 -- accel/accel.sh@19 -- # read -r var val 00:06:57.426 11:30:27 -- accel/accel.sh@20 -- # val=0x1 00:06:57.426 11:30:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.426 11:30:27 -- accel/accel.sh@19 -- # IFS=: 00:06:57.426 11:30:27 -- accel/accel.sh@19 -- # read -r var val 00:06:57.426 11:30:27 -- accel/accel.sh@20 -- # val= 00:06:57.426 11:30:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.426 11:30:27 -- accel/accel.sh@19 -- # IFS=: 00:06:57.426 11:30:27 -- accel/accel.sh@19 -- # read -r var val 00:06:57.426 11:30:27 -- accel/accel.sh@20 -- # val= 00:06:57.426 11:30:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.426 11:30:27 -- accel/accel.sh@19 -- # IFS=: 00:06:57.426 11:30:27 -- accel/accel.sh@19 -- # read -r var val 00:06:57.426 11:30:27 -- accel/accel.sh@20 -- # val=decompress 00:06:57.426 11:30:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.426 11:30:27 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:57.426 11:30:27 -- accel/accel.sh@19 -- # IFS=: 00:06:57.426 11:30:27 -- accel/accel.sh@19 -- # read -r var val 00:06:57.426 11:30:27 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:57.426 11:30:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.426 11:30:27 -- accel/accel.sh@19 -- # IFS=: 00:06:57.426 11:30:28 -- accel/accel.sh@19 -- # read -r var val 00:06:57.426 11:30:28 -- accel/accel.sh@20 -- # val= 00:06:57.426 11:30:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.426 11:30:28 -- accel/accel.sh@19 -- # IFS=: 00:06:57.426 11:30:28 -- accel/accel.sh@19 -- # read -r var val 00:06:57.426 11:30:28 -- accel/accel.sh@20 -- # val=software 00:06:57.426 11:30:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.426 11:30:28 -- accel/accel.sh@22 -- # accel_module=software 00:06:57.426 11:30:28 -- accel/accel.sh@19 -- # IFS=: 00:06:57.426 11:30:28 -- accel/accel.sh@19 -- # read -r var val 00:06:57.426 11:30:28 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:57.426 11:30:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.426 11:30:28 -- accel/accel.sh@19 -- # IFS=: 00:06:57.426 11:30:28 -- accel/accel.sh@19 -- # read -r var val 00:06:57.426 11:30:28 -- accel/accel.sh@20 -- # val=32 00:06:57.426 11:30:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.426 11:30:28 -- accel/accel.sh@19 -- # IFS=: 00:06:57.426 11:30:28 -- 
accel/accel.sh@19 -- # read -r var val 00:06:57.426 11:30:28 -- accel/accel.sh@20 -- # val=32 00:06:57.426 11:30:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.426 11:30:28 -- accel/accel.sh@19 -- # IFS=: 00:06:57.426 11:30:28 -- accel/accel.sh@19 -- # read -r var val 00:06:57.426 11:30:28 -- accel/accel.sh@20 -- # val=1 00:06:57.426 11:30:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.426 11:30:28 -- accel/accel.sh@19 -- # IFS=: 00:06:57.426 11:30:28 -- accel/accel.sh@19 -- # read -r var val 00:06:57.426 11:30:28 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:57.426 11:30:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.426 11:30:28 -- accel/accel.sh@19 -- # IFS=: 00:06:57.426 11:30:28 -- accel/accel.sh@19 -- # read -r var val 00:06:57.426 11:30:28 -- accel/accel.sh@20 -- # val=Yes 00:06:57.426 11:30:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.426 11:30:28 -- accel/accel.sh@19 -- # IFS=: 00:06:57.426 11:30:28 -- accel/accel.sh@19 -- # read -r var val 00:06:57.426 11:30:28 -- accel/accel.sh@20 -- # val= 00:06:57.426 11:30:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.426 11:30:28 -- accel/accel.sh@19 -- # IFS=: 00:06:57.426 11:30:28 -- accel/accel.sh@19 -- # read -r var val 00:06:57.426 11:30:28 -- accel/accel.sh@20 -- # val= 00:06:57.426 11:30:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.426 11:30:28 -- accel/accel.sh@19 -- # IFS=: 00:06:57.426 11:30:28 -- accel/accel.sh@19 -- # read -r var val 00:06:58.804 11:30:29 -- accel/accel.sh@20 -- # val= 00:06:58.804 11:30:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.804 11:30:29 -- accel/accel.sh@19 -- # IFS=: 00:06:58.804 11:30:29 -- accel/accel.sh@19 -- # read -r var val 00:06:58.804 11:30:29 -- accel/accel.sh@20 -- # val= 00:06:58.804 11:30:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.804 11:30:29 -- accel/accel.sh@19 -- # IFS=: 00:06:58.804 11:30:29 -- accel/accel.sh@19 -- # read -r var val 00:06:58.804 11:30:29 -- accel/accel.sh@20 -- # val= 00:06:58.804 11:30:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.804 11:30:29 -- accel/accel.sh@19 -- # IFS=: 00:06:58.804 11:30:29 -- accel/accel.sh@19 -- # read -r var val 00:06:58.804 11:30:29 -- accel/accel.sh@20 -- # val= 00:06:58.804 11:30:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.804 11:30:29 -- accel/accel.sh@19 -- # IFS=: 00:06:58.804 11:30:29 -- accel/accel.sh@19 -- # read -r var val 00:06:58.805 11:30:29 -- accel/accel.sh@20 -- # val= 00:06:58.805 11:30:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # IFS=: 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # read -r var val 00:06:58.805 11:30:29 -- accel/accel.sh@20 -- # val= 00:06:58.805 11:30:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # IFS=: 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # read -r var val 00:06:58.805 11:30:29 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.805 11:30:29 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:58.805 11:30:29 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.805 00:06:58.805 real 0m1.424s 00:06:58.805 user 0m1.295s 00:06:58.805 sys 0m0.143s 00:06:58.805 11:30:29 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:58.805 11:30:29 -- common/autotest_common.sh@10 -- # set +x 00:06:58.805 ************************************ 00:06:58.805 END TEST accel_decomp 00:06:58.805 ************************************ 00:06:58.805 11:30:29 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:58.805 11:30:29 -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:58.805 11:30:29 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:58.805 11:30:29 -- common/autotest_common.sh@10 -- # set +x 00:06:58.805 ************************************ 00:06:58.805 START TEST accel_decmop_full 00:06:58.805 ************************************ 00:06:58.805 11:30:29 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:58.805 11:30:29 -- accel/accel.sh@16 -- # local accel_opc 00:06:58.805 11:30:29 -- accel/accel.sh@17 -- # local accel_module 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # IFS=: 00:06:58.805 11:30:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # read -r var val 00:06:58.805 11:30:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:58.805 11:30:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.805 11:30:29 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.805 11:30:29 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.805 11:30:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.805 11:30:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.805 11:30:29 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.805 11:30:29 -- accel/accel.sh@40 -- # local IFS=, 00:06:58.805 11:30:29 -- accel/accel.sh@41 -- # jq -r . 00:06:58.805 [2024-05-15 11:30:29.289222] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
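Two details in this run_test line: the test name accel_decmop_full is spelled that way by the harness itself (its START and END banners match, so it is not a transcription error here), and -o 0 switches the decompress workload from the 4096-byte buffers of the previous test to whole-file buffers, which is why the config dump below reports '111250 bytes'. The two invocations as they appear in the trace (the -c /dev/fd/62 config descriptor is supplied by the harness, so these lines are not standalone-runnable as shown):

    BIB=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib
    PERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf
    "$PERF" -c /dev/fd/62 -t 1 -w decompress -l "$BIB" -y        # 4096-byte buffers
    "$PERF" -c /dev/fd/62 -t 1 -w decompress -l "$BIB" -y -o 0   # whole file, 111250 bytes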
00:06:58.805 [2024-05-15 11:30:29.289274] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2905697 ] 00:06:58.805 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.805 [2024-05-15 11:30:29.360387] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.805 [2024-05-15 11:30:29.446423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.805 11:30:29 -- accel/accel.sh@20 -- # val= 00:06:58.805 11:30:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # IFS=: 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # read -r var val 00:06:58.805 11:30:29 -- accel/accel.sh@20 -- # val= 00:06:58.805 11:30:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # IFS=: 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # read -r var val 00:06:58.805 11:30:29 -- accel/accel.sh@20 -- # val= 00:06:58.805 11:30:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # IFS=: 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # read -r var val 00:06:58.805 11:30:29 -- accel/accel.sh@20 -- # val=0x1 00:06:58.805 11:30:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # IFS=: 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # read -r var val 00:06:58.805 11:30:29 -- accel/accel.sh@20 -- # val= 00:06:58.805 11:30:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # IFS=: 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # read -r var val 00:06:58.805 11:30:29 -- accel/accel.sh@20 -- # val= 00:06:58.805 11:30:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # IFS=: 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # read -r var val 00:06:58.805 11:30:29 -- accel/accel.sh@20 -- # val=decompress 00:06:58.805 11:30:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.805 11:30:29 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # IFS=: 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # read -r var val 00:06:58.805 11:30:29 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:58.805 11:30:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # IFS=: 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # read -r var val 00:06:58.805 11:30:29 -- accel/accel.sh@20 -- # val= 00:06:58.805 11:30:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # IFS=: 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # read -r var val 00:06:58.805 11:30:29 -- accel/accel.sh@20 -- # val=software 00:06:58.805 11:30:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.805 11:30:29 -- accel/accel.sh@22 -- # accel_module=software 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # IFS=: 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # read -r var val 00:06:58.805 11:30:29 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:58.805 11:30:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # IFS=: 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # read -r var val 00:06:58.805 11:30:29 -- accel/accel.sh@20 -- # val=32 00:06:58.805 11:30:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # IFS=: 00:06:58.805 11:30:29 -- 
accel/accel.sh@19 -- # read -r var val 00:06:58.805 11:30:29 -- accel/accel.sh@20 -- # val=32 00:06:58.805 11:30:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # IFS=: 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # read -r var val 00:06:58.805 11:30:29 -- accel/accel.sh@20 -- # val=1 00:06:58.805 11:30:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # IFS=: 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # read -r var val 00:06:58.805 11:30:29 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.805 11:30:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # IFS=: 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # read -r var val 00:06:58.805 11:30:29 -- accel/accel.sh@20 -- # val=Yes 00:06:58.805 11:30:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # IFS=: 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # read -r var val 00:06:58.805 11:30:29 -- accel/accel.sh@20 -- # val= 00:06:58.805 11:30:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # IFS=: 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # read -r var val 00:06:58.805 11:30:29 -- accel/accel.sh@20 -- # val= 00:06:58.805 11:30:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # IFS=: 00:06:58.805 11:30:29 -- accel/accel.sh@19 -- # read -r var val 00:07:00.182 11:30:30 -- accel/accel.sh@20 -- # val= 00:07:00.182 11:30:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.182 11:30:30 -- accel/accel.sh@19 -- # IFS=: 00:07:00.182 11:30:30 -- accel/accel.sh@19 -- # read -r var val 00:07:00.182 11:30:30 -- accel/accel.sh@20 -- # val= 00:07:00.182 11:30:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.182 11:30:30 -- accel/accel.sh@19 -- # IFS=: 00:07:00.182 11:30:30 -- accel/accel.sh@19 -- # read -r var val 00:07:00.182 11:30:30 -- accel/accel.sh@20 -- # val= 00:07:00.182 11:30:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.182 11:30:30 -- accel/accel.sh@19 -- # IFS=: 00:07:00.182 11:30:30 -- accel/accel.sh@19 -- # read -r var val 00:07:00.182 11:30:30 -- accel/accel.sh@20 -- # val= 00:07:00.182 11:30:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.182 11:30:30 -- accel/accel.sh@19 -- # IFS=: 00:07:00.182 11:30:30 -- accel/accel.sh@19 -- # read -r var val 00:07:00.182 11:30:30 -- accel/accel.sh@20 -- # val= 00:07:00.182 11:30:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.182 11:30:30 -- accel/accel.sh@19 -- # IFS=: 00:07:00.182 11:30:30 -- accel/accel.sh@19 -- # read -r var val 00:07:00.182 11:30:30 -- accel/accel.sh@20 -- # val= 00:07:00.182 11:30:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.182 11:30:30 -- accel/accel.sh@19 -- # IFS=: 00:07:00.182 11:30:30 -- accel/accel.sh@19 -- # read -r var val 00:07:00.182 11:30:30 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:00.182 11:30:30 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:00.182 11:30:30 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.182 00:07:00.182 real 0m1.430s 00:07:00.182 user 0m1.304s 00:07:00.182 sys 0m0.139s 00:07:00.182 11:30:30 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:00.182 11:30:30 -- common/autotest_common.sh@10 -- # set +x 00:07:00.182 ************************************ 00:07:00.182 END TEST accel_decmop_full 00:07:00.182 ************************************ 00:07:00.182 11:30:30 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:00.182 11:30:30 -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:00.182 11:30:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:00.182 11:30:30 -- common/autotest_common.sh@10 -- # set +x 00:07:00.182 ************************************ 00:07:00.182 START TEST accel_decomp_mcore 00:07:00.182 ************************************ 00:07:00.182 11:30:30 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:00.182 11:30:30 -- accel/accel.sh@16 -- # local accel_opc 00:07:00.182 11:30:30 -- accel/accel.sh@17 -- # local accel_module 00:07:00.182 11:30:30 -- accel/accel.sh@19 -- # IFS=: 00:07:00.182 11:30:30 -- accel/accel.sh@19 -- # read -r var val 00:07:00.183 11:30:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:00.183 11:30:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:00.183 11:30:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.183 11:30:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.183 11:30:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.183 11:30:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.183 11:30:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.183 11:30:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.183 11:30:30 -- accel/accel.sh@40 -- # local IFS=, 00:07:00.183 11:30:30 -- accel/accel.sh@41 -- # jq -r . 00:07:00.183 [2024-05-15 11:30:30.794511] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
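The -m 0xf in this invocation is a core mask; the EAL line below accordingly reports -c 0xf and four reactors come up on cores 0-3. A small self-contained bash check of how many cores a hex mask implies (illustrative helper, not part of the harness):

    mask=0xf
    cores=0
    for ((bits = mask; bits > 0; bits >>= 1)); do
      ((cores += bits & 1))          # count the set bits in the mask
    done
    echo "mask $mask -> $cores cores"   # prints: mask 0xf -> 4 cores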
00:07:00.183 [2024-05-15 11:30:30.794566] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2905951 ] 00:07:00.183 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.183 [2024-05-15 11:30:30.865902] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:00.442 [2024-05-15 11:30:30.955678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.442 [2024-05-15 11:30:30.955763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.442 [2024-05-15 11:30:30.955841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:00.442 [2024-05-15 11:30:30.955843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.442 11:30:30 -- accel/accel.sh@20 -- # val= 00:07:00.442 11:30:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.442 11:30:31 -- accel/accel.sh@19 -- # IFS=: 00:07:00.442 11:30:31 -- accel/accel.sh@19 -- # read -r var val 00:07:00.442 11:30:31 -- accel/accel.sh@20 -- # val= 00:07:00.442 11:30:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.442 11:30:31 -- accel/accel.sh@19 -- # IFS=: 00:07:00.442 11:30:31 -- accel/accel.sh@19 -- # read -r var val 00:07:00.442 11:30:31 -- accel/accel.sh@20 -- # val= 00:07:00.442 11:30:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.442 11:30:31 -- accel/accel.sh@19 -- # IFS=: 00:07:00.442 11:30:31 -- accel/accel.sh@19 -- # read -r var val 00:07:00.442 11:30:31 -- accel/accel.sh@20 -- # val=0xf 00:07:00.442 11:30:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.442 11:30:31 -- accel/accel.sh@19 -- # IFS=: 00:07:00.442 11:30:31 -- accel/accel.sh@19 -- # read -r var val 00:07:00.442 11:30:31 -- accel/accel.sh@20 -- # val= 00:07:00.442 11:30:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.442 11:30:31 -- accel/accel.sh@19 -- # IFS=: 00:07:00.442 11:30:31 -- accel/accel.sh@19 -- # read -r var val 00:07:00.442 11:30:31 -- accel/accel.sh@20 -- # val= 00:07:00.442 11:30:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.442 11:30:31 -- accel/accel.sh@19 -- # IFS=: 00:07:00.442 11:30:31 -- accel/accel.sh@19 -- # read -r var val 00:07:00.442 11:30:31 -- accel/accel.sh@20 -- # val=decompress 00:07:00.442 11:30:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.442 11:30:31 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:00.442 11:30:31 -- accel/accel.sh@19 -- # IFS=: 00:07:00.442 11:30:31 -- accel/accel.sh@19 -- # read -r var val 00:07:00.442 11:30:31 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:00.442 11:30:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.442 11:30:31 -- accel/accel.sh@19 -- # IFS=: 00:07:00.442 11:30:31 -- accel/accel.sh@19 -- # read -r var val 00:07:00.442 11:30:31 -- accel/accel.sh@20 -- # val= 00:07:00.442 11:30:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.442 11:30:31 -- accel/accel.sh@19 -- # IFS=: 00:07:00.442 11:30:31 -- accel/accel.sh@19 -- # read -r var val 00:07:00.443 11:30:31 -- accel/accel.sh@20 -- # val=software 00:07:00.443 11:30:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.443 11:30:31 -- accel/accel.sh@22 -- # accel_module=software 00:07:00.443 11:30:31 -- accel/accel.sh@19 -- # IFS=: 00:07:00.443 11:30:31 -- accel/accel.sh@19 -- # read -r var val 00:07:00.443 11:30:31 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:00.443 11:30:31 -- accel/accel.sh@21 -- # case "$var" 
in 00:07:00.443 11:30:31 -- accel/accel.sh@19 -- # IFS=: 00:07:00.443 11:30:31 -- accel/accel.sh@19 -- # read -r var val 00:07:00.443 11:30:31 -- accel/accel.sh@20 -- # val=32 00:07:00.443 11:30:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.443 11:30:31 -- accel/accel.sh@19 -- # IFS=: 00:07:00.443 11:30:31 -- accel/accel.sh@19 -- # read -r var val 00:07:00.443 11:30:31 -- accel/accel.sh@20 -- # val=32 00:07:00.443 11:30:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.443 11:30:31 -- accel/accel.sh@19 -- # IFS=: 00:07:00.443 11:30:31 -- accel/accel.sh@19 -- # read -r var val 00:07:00.443 11:30:31 -- accel/accel.sh@20 -- # val=1 00:07:00.443 11:30:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.443 11:30:31 -- accel/accel.sh@19 -- # IFS=: 00:07:00.443 11:30:31 -- accel/accel.sh@19 -- # read -r var val 00:07:00.443 11:30:31 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:00.443 11:30:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.443 11:30:31 -- accel/accel.sh@19 -- # IFS=: 00:07:00.443 11:30:31 -- accel/accel.sh@19 -- # read -r var val 00:07:00.443 11:30:31 -- accel/accel.sh@20 -- # val=Yes 00:07:00.443 11:30:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.443 11:30:31 -- accel/accel.sh@19 -- # IFS=: 00:07:00.443 11:30:31 -- accel/accel.sh@19 -- # read -r var val 00:07:00.443 11:30:31 -- accel/accel.sh@20 -- # val= 00:07:00.443 11:30:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.443 11:30:31 -- accel/accel.sh@19 -- # IFS=: 00:07:00.443 11:30:31 -- accel/accel.sh@19 -- # read -r var val 00:07:00.443 11:30:31 -- accel/accel.sh@20 -- # val= 00:07:00.443 11:30:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.443 11:30:31 -- accel/accel.sh@19 -- # IFS=: 00:07:00.443 11:30:31 -- accel/accel.sh@19 -- # read -r var val 00:07:01.821 11:30:32 -- accel/accel.sh@20 -- # val= 00:07:01.821 11:30:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # IFS=: 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # read -r var val 00:07:01.821 11:30:32 -- accel/accel.sh@20 -- # val= 00:07:01.821 11:30:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # IFS=: 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # read -r var val 00:07:01.821 11:30:32 -- accel/accel.sh@20 -- # val= 00:07:01.821 11:30:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # IFS=: 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # read -r var val 00:07:01.821 11:30:32 -- accel/accel.sh@20 -- # val= 00:07:01.821 11:30:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # IFS=: 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # read -r var val 00:07:01.821 11:30:32 -- accel/accel.sh@20 -- # val= 00:07:01.821 11:30:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # IFS=: 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # read -r var val 00:07:01.821 11:30:32 -- accel/accel.sh@20 -- # val= 00:07:01.821 11:30:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # IFS=: 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # read -r var val 00:07:01.821 11:30:32 -- accel/accel.sh@20 -- # val= 00:07:01.821 11:30:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # IFS=: 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # read -r var val 00:07:01.821 11:30:32 -- accel/accel.sh@20 -- # val= 00:07:01.821 11:30:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.821 11:30:32 
-- accel/accel.sh@19 -- # IFS=: 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # read -r var val 00:07:01.821 11:30:32 -- accel/accel.sh@20 -- # val= 00:07:01.821 11:30:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # IFS=: 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # read -r var val 00:07:01.821 11:30:32 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.821 11:30:32 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:01.821 11:30:32 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.821 00:07:01.821 real 0m1.414s 00:07:01.821 user 0m4.635s 00:07:01.821 sys 0m0.143s 00:07:01.821 11:30:32 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:01.821 11:30:32 -- common/autotest_common.sh@10 -- # set +x 00:07:01.821 ************************************ 00:07:01.821 END TEST accel_decomp_mcore 00:07:01.821 ************************************ 00:07:01.821 11:30:32 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:01.821 11:30:32 -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:01.821 11:30:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:01.821 11:30:32 -- common/autotest_common.sh@10 -- # set +x 00:07:01.821 ************************************ 00:07:01.821 START TEST accel_decomp_full_mcore 00:07:01.821 ************************************ 00:07:01.821 11:30:32 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:01.821 11:30:32 -- accel/accel.sh@16 -- # local accel_opc 00:07:01.821 11:30:32 -- accel/accel.sh@17 -- # local accel_module 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # IFS=: 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # read -r var val 00:07:01.821 11:30:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:01.821 11:30:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:01.821 11:30:32 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.821 11:30:32 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.821 11:30:32 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.821 11:30:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.821 11:30:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.821 11:30:32 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.821 11:30:32 -- accel/accel.sh@40 -- # local IFS=, 00:07:01.821 11:30:32 -- accel/accel.sh@41 -- # jq -r . 00:07:01.821 [2024-05-15 11:30:32.283291] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
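A quick sanity check on the mcore numbers just above: real 0m1.414s of wall time against user 0m4.635s of CPU time is what you would expect with four polling reactors each busy for roughly the 1-second -t window. For example:

    awk 'BEGIN { printf "%.2f user-seconds per core\n", 4.635 / 4 }'   # ~1.16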
00:07:01.821 [2024-05-15 11:30:32.283368] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2906162 ] 00:07:01.821 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.821 [2024-05-15 11:30:32.355044] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:01.821 [2024-05-15 11:30:32.442440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.821 [2024-05-15 11:30:32.442531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.821 [2024-05-15 11:30:32.442613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:01.821 [2024-05-15 11:30:32.442614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.821 11:30:32 -- accel/accel.sh@20 -- # val= 00:07:01.821 11:30:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # IFS=: 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # read -r var val 00:07:01.821 11:30:32 -- accel/accel.sh@20 -- # val= 00:07:01.821 11:30:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # IFS=: 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # read -r var val 00:07:01.821 11:30:32 -- accel/accel.sh@20 -- # val= 00:07:01.821 11:30:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # IFS=: 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # read -r var val 00:07:01.821 11:30:32 -- accel/accel.sh@20 -- # val=0xf 00:07:01.821 11:30:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # IFS=: 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # read -r var val 00:07:01.821 11:30:32 -- accel/accel.sh@20 -- # val= 00:07:01.821 11:30:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # IFS=: 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # read -r var val 00:07:01.821 11:30:32 -- accel/accel.sh@20 -- # val= 00:07:01.821 11:30:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # IFS=: 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # read -r var val 00:07:01.821 11:30:32 -- accel/accel.sh@20 -- # val=decompress 00:07:01.821 11:30:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.821 11:30:32 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # IFS=: 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # read -r var val 00:07:01.821 11:30:32 -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:01.821 11:30:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # IFS=: 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # read -r var val 00:07:01.821 11:30:32 -- accel/accel.sh@20 -- # val= 00:07:01.821 11:30:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # IFS=: 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # read -r var val 00:07:01.821 11:30:32 -- accel/accel.sh@20 -- # val=software 00:07:01.821 11:30:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.821 11:30:32 -- accel/accel.sh@22 -- # accel_module=software 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # IFS=: 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # read -r var val 00:07:01.821 11:30:32 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:01.821 11:30:32 -- accel/accel.sh@21 -- # case "$var" 
in 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # IFS=: 00:07:01.821 11:30:32 -- accel/accel.sh@19 -- # read -r var val 00:07:01.822 11:30:32 -- accel/accel.sh@20 -- # val=32 00:07:01.822 11:30:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.822 11:30:32 -- accel/accel.sh@19 -- # IFS=: 00:07:01.822 11:30:32 -- accel/accel.sh@19 -- # read -r var val 00:07:01.822 11:30:32 -- accel/accel.sh@20 -- # val=32 00:07:01.822 11:30:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.822 11:30:32 -- accel/accel.sh@19 -- # IFS=: 00:07:01.822 11:30:32 -- accel/accel.sh@19 -- # read -r var val 00:07:01.822 11:30:32 -- accel/accel.sh@20 -- # val=1 00:07:01.822 11:30:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.822 11:30:32 -- accel/accel.sh@19 -- # IFS=: 00:07:01.822 11:30:32 -- accel/accel.sh@19 -- # read -r var val 00:07:01.822 11:30:32 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.822 11:30:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.822 11:30:32 -- accel/accel.sh@19 -- # IFS=: 00:07:01.822 11:30:32 -- accel/accel.sh@19 -- # read -r var val 00:07:01.822 11:30:32 -- accel/accel.sh@20 -- # val=Yes 00:07:01.822 11:30:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.822 11:30:32 -- accel/accel.sh@19 -- # IFS=: 00:07:01.822 11:30:32 -- accel/accel.sh@19 -- # read -r var val 00:07:01.822 11:30:32 -- accel/accel.sh@20 -- # val= 00:07:01.822 11:30:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.822 11:30:32 -- accel/accel.sh@19 -- # IFS=: 00:07:01.822 11:30:32 -- accel/accel.sh@19 -- # read -r var val 00:07:01.822 11:30:32 -- accel/accel.sh@20 -- # val= 00:07:01.822 11:30:32 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.822 11:30:32 -- accel/accel.sh@19 -- # IFS=: 00:07:01.822 11:30:32 -- accel/accel.sh@19 -- # read -r var val 00:07:03.200 11:30:33 -- accel/accel.sh@20 -- # val= 00:07:03.200 11:30:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.200 11:30:33 -- accel/accel.sh@19 -- # IFS=: 00:07:03.200 11:30:33 -- accel/accel.sh@19 -- # read -r var val 00:07:03.200 11:30:33 -- accel/accel.sh@20 -- # val= 00:07:03.200 11:30:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.200 11:30:33 -- accel/accel.sh@19 -- # IFS=: 00:07:03.200 11:30:33 -- accel/accel.sh@19 -- # read -r var val 00:07:03.200 11:30:33 -- accel/accel.sh@20 -- # val= 00:07:03.200 11:30:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.200 11:30:33 -- accel/accel.sh@19 -- # IFS=: 00:07:03.200 11:30:33 -- accel/accel.sh@19 -- # read -r var val 00:07:03.200 11:30:33 -- accel/accel.sh@20 -- # val= 00:07:03.200 11:30:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.200 11:30:33 -- accel/accel.sh@19 -- # IFS=: 00:07:03.200 11:30:33 -- accel/accel.sh@19 -- # read -r var val 00:07:03.200 11:30:33 -- accel/accel.sh@20 -- # val= 00:07:03.200 11:30:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.200 11:30:33 -- accel/accel.sh@19 -- # IFS=: 00:07:03.200 11:30:33 -- accel/accel.sh@19 -- # read -r var val 00:07:03.200 11:30:33 -- accel/accel.sh@20 -- # val= 00:07:03.200 11:30:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.200 11:30:33 -- accel/accel.sh@19 -- # IFS=: 00:07:03.200 11:30:33 -- accel/accel.sh@19 -- # read -r var val 00:07:03.200 11:30:33 -- accel/accel.sh@20 -- # val= 00:07:03.200 11:30:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.200 11:30:33 -- accel/accel.sh@19 -- # IFS=: 00:07:03.200 11:30:33 -- accel/accel.sh@19 -- # read -r var val 00:07:03.200 11:30:33 -- accel/accel.sh@20 -- # val= 00:07:03.200 11:30:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.200 11:30:33 
-- accel/accel.sh@19 -- # IFS=: 00:07:03.200 11:30:33 -- accel/accel.sh@19 -- # read -r var val 00:07:03.200 11:30:33 -- accel/accel.sh@20 -- # val= 00:07:03.200 11:30:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.200 11:30:33 -- accel/accel.sh@19 -- # IFS=: 00:07:03.200 11:30:33 -- accel/accel.sh@19 -- # read -r var val 00:07:03.200 11:30:33 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:03.200 11:30:33 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:03.200 11:30:33 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.200 00:07:03.200 real 0m1.431s 00:07:03.200 user 0m4.679s 00:07:03.200 sys 0m0.153s 00:07:03.200 11:30:33 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.200 11:30:33 -- common/autotest_common.sh@10 -- # set +x 00:07:03.200 ************************************ 00:07:03.200 END TEST accel_decomp_full_mcore 00:07:03.200 ************************************ 00:07:03.200 11:30:33 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:03.200 11:30:33 -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:03.200 11:30:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.200 11:30:33 -- common/autotest_common.sh@10 -- # set +x 00:07:03.200 ************************************ 00:07:03.200 START TEST accel_decomp_mthread 00:07:03.200 ************************************ 00:07:03.200 11:30:33 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:03.200 11:30:33 -- accel/accel.sh@16 -- # local accel_opc 00:07:03.200 11:30:33 -- accel/accel.sh@17 -- # local accel_module 00:07:03.200 11:30:33 -- accel/accel.sh@19 -- # IFS=: 00:07:03.200 11:30:33 -- accel/accel.sh@19 -- # read -r var val 00:07:03.200 11:30:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:03.201 11:30:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:03.201 11:30:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.201 11:30:33 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.201 11:30:33 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.201 11:30:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.201 11:30:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.201 11:30:33 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.201 11:30:33 -- accel/accel.sh@40 -- # local IFS=, 00:07:03.201 11:30:33 -- accel/accel.sh@41 -- # jq -r . 00:07:03.201 [2024-05-15 11:30:33.792467] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
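Relative to the earlier single-thread decompress runs, this invocation adds -T 2, which shows up in the config dump below as val=2 where the previous tests carried val=1; judging from the trace it requests two worker threads rather than one (an inference from the logged values, not from accel_perf documentation). As invoked above (the -c /dev/fd/62 descriptor again comes from the harness):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
      -c /dev/fd/62 -t 1 -w decompress \
      -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2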
00:07:03.201 [2024-05-15 11:30:33.792520] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2906362 ] 00:07:03.201 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.201 [2024-05-15 11:30:33.863112] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.201 [2024-05-15 11:30:33.948479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.460 11:30:33 -- accel/accel.sh@20 -- # val= 00:07:03.460 11:30:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.460 11:30:33 -- accel/accel.sh@19 -- # IFS=: 00:07:03.460 11:30:33 -- accel/accel.sh@19 -- # read -r var val 00:07:03.460 11:30:33 -- accel/accel.sh@20 -- # val= 00:07:03.460 11:30:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.460 11:30:33 -- accel/accel.sh@19 -- # IFS=: 00:07:03.460 11:30:34 -- accel/accel.sh@19 -- # read -r var val 00:07:03.460 11:30:34 -- accel/accel.sh@20 -- # val= 00:07:03.460 11:30:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.460 11:30:34 -- accel/accel.sh@19 -- # IFS=: 00:07:03.460 11:30:34 -- accel/accel.sh@19 -- # read -r var val 00:07:03.460 11:30:34 -- accel/accel.sh@20 -- # val=0x1 00:07:03.460 11:30:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.460 11:30:34 -- accel/accel.sh@19 -- # IFS=: 00:07:03.460 11:30:34 -- accel/accel.sh@19 -- # read -r var val 00:07:03.460 11:30:34 -- accel/accel.sh@20 -- # val= 00:07:03.460 11:30:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.460 11:30:34 -- accel/accel.sh@19 -- # IFS=: 00:07:03.460 11:30:34 -- accel/accel.sh@19 -- # read -r var val 00:07:03.460 11:30:34 -- accel/accel.sh@20 -- # val= 00:07:03.460 11:30:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.460 11:30:34 -- accel/accel.sh@19 -- # IFS=: 00:07:03.460 11:30:34 -- accel/accel.sh@19 -- # read -r var val 00:07:03.460 11:30:34 -- accel/accel.sh@20 -- # val=decompress 00:07:03.460 11:30:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.460 11:30:34 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:03.460 11:30:34 -- accel/accel.sh@19 -- # IFS=: 00:07:03.460 11:30:34 -- accel/accel.sh@19 -- # read -r var val 00:07:03.460 11:30:34 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.460 11:30:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.460 11:30:34 -- accel/accel.sh@19 -- # IFS=: 00:07:03.460 11:30:34 -- accel/accel.sh@19 -- # read -r var val 00:07:03.460 11:30:34 -- accel/accel.sh@20 -- # val= 00:07:03.460 11:30:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.460 11:30:34 -- accel/accel.sh@19 -- # IFS=: 00:07:03.460 11:30:34 -- accel/accel.sh@19 -- # read -r var val 00:07:03.460 11:30:34 -- accel/accel.sh@20 -- # val=software 00:07:03.460 11:30:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.460 11:30:34 -- accel/accel.sh@22 -- # accel_module=software 00:07:03.460 11:30:34 -- accel/accel.sh@19 -- # IFS=: 00:07:03.460 11:30:34 -- accel/accel.sh@19 -- # read -r var val 00:07:03.460 11:30:34 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:03.460 11:30:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.460 11:30:34 -- accel/accel.sh@19 -- # IFS=: 00:07:03.460 11:30:34 -- accel/accel.sh@19 -- # read -r var val 00:07:03.460 11:30:34 -- accel/accel.sh@20 -- # val=32 00:07:03.460 11:30:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.460 11:30:34 -- accel/accel.sh@19 -- # IFS=: 00:07:03.460 11:30:34 -- 
accel/accel.sh@19 -- # read -r var val 00:07:03.460 11:30:34 -- accel/accel.sh@20 -- # val=32 00:07:03.460 11:30:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.460 11:30:34 -- accel/accel.sh@19 -- # IFS=: 00:07:03.460 11:30:34 -- accel/accel.sh@19 -- # read -r var val 00:07:03.460 11:30:34 -- accel/accel.sh@20 -- # val=2 00:07:03.460 11:30:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.460 11:30:34 -- accel/accel.sh@19 -- # IFS=: 00:07:03.460 11:30:34 -- accel/accel.sh@19 -- # read -r var val 00:07:03.460 11:30:34 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:03.460 11:30:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.460 11:30:34 -- accel/accel.sh@19 -- # IFS=: 00:07:03.460 11:30:34 -- accel/accel.sh@19 -- # read -r var val 00:07:03.460 11:30:34 -- accel/accel.sh@20 -- # val=Yes 00:07:03.460 11:30:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.460 11:30:34 -- accel/accel.sh@19 -- # IFS=: 00:07:03.460 11:30:34 -- accel/accel.sh@19 -- # read -r var val 00:07:03.460 11:30:34 -- accel/accel.sh@20 -- # val= 00:07:03.460 11:30:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.460 11:30:34 -- accel/accel.sh@19 -- # IFS=: 00:07:03.460 11:30:34 -- accel/accel.sh@19 -- # read -r var val 00:07:03.460 11:30:34 -- accel/accel.sh@20 -- # val= 00:07:03.460 11:30:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.460 11:30:34 -- accel/accel.sh@19 -- # IFS=: 00:07:03.460 11:30:34 -- accel/accel.sh@19 -- # read -r var val 00:07:04.838 11:30:35 -- accel/accel.sh@20 -- # val= 00:07:04.838 11:30:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.838 11:30:35 -- accel/accel.sh@19 -- # IFS=: 00:07:04.838 11:30:35 -- accel/accel.sh@19 -- # read -r var val 00:07:04.838 11:30:35 -- accel/accel.sh@20 -- # val= 00:07:04.838 11:30:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.838 11:30:35 -- accel/accel.sh@19 -- # IFS=: 00:07:04.838 11:30:35 -- accel/accel.sh@19 -- # read -r var val 00:07:04.838 11:30:35 -- accel/accel.sh@20 -- # val= 00:07:04.838 11:30:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.838 11:30:35 -- accel/accel.sh@19 -- # IFS=: 00:07:04.838 11:30:35 -- accel/accel.sh@19 -- # read -r var val 00:07:04.838 11:30:35 -- accel/accel.sh@20 -- # val= 00:07:04.838 11:30:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.838 11:30:35 -- accel/accel.sh@19 -- # IFS=: 00:07:04.838 11:30:35 -- accel/accel.sh@19 -- # read -r var val 00:07:04.838 11:30:35 -- accel/accel.sh@20 -- # val= 00:07:04.838 11:30:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.838 11:30:35 -- accel/accel.sh@19 -- # IFS=: 00:07:04.838 11:30:35 -- accel/accel.sh@19 -- # read -r var val 00:07:04.838 11:30:35 -- accel/accel.sh@20 -- # val= 00:07:04.838 11:30:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.838 11:30:35 -- accel/accel.sh@19 -- # IFS=: 00:07:04.838 11:30:35 -- accel/accel.sh@19 -- # read -r var val 00:07:04.838 11:30:35 -- accel/accel.sh@20 -- # val= 00:07:04.838 11:30:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.838 11:30:35 -- accel/accel.sh@19 -- # IFS=: 00:07:04.838 11:30:35 -- accel/accel.sh@19 -- # read -r var val 00:07:04.838 11:30:35 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:04.838 11:30:35 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:04.838 11:30:35 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.838 00:07:04.838 real 0m1.411s 00:07:04.838 user 0m1.281s 00:07:04.838 sys 0m0.143s 00:07:04.838 11:30:35 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:04.838 11:30:35 -- common/autotest_common.sh@10 -- # set +x 
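The accel_decomp_mthread run above is fully described by its own trace: accel_perf decompresses the pre-compressed test/accel/bib input in software for one second (-t 1), verifying every result (-y), with two worker threads (-T 2) and the default 4096-byte transfer size, and the accel JSON config is handed to the binary on file descriptor 62. A minimal standalone sketch of that invocation, with the harness-generated config replaced by an assumed empty stand-in:

    SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk   # path taken from this log
    # -t 1: run for 1 second; -w decompress: workload; -l: compressed input file;
    # -y: verify each completion; -T 2: two worker threads; config arrives on fd 62
    "$SPDK_DIR/build/examples/accel_perf" -c /dev/fd/62 \
        -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y -T 2 \
        62< <(echo '{"subsystems": []}')

The 62< <(...) process-substitution redirect mirrors how the harness feeds build_accel_config output to the example binary without a temporary file.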
00:07:04.838 ************************************ 00:07:04.838 END TEST accel_decomp_mthread 00:07:04.838 ************************************ 00:07:04.838 11:30:35 -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:04.838 11:30:35 -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:04.838 11:30:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:04.838 11:30:35 -- common/autotest_common.sh@10 -- # set +x 00:07:04.838 ************************************ 00:07:04.838 START TEST accel_decomp_full_mthread 00:07:04.838 ************************************ 00:07:04.838 11:30:35 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:04.838 11:30:35 -- accel/accel.sh@16 -- # local accel_opc 00:07:04.838 11:30:35 -- accel/accel.sh@17 -- # local accel_module 00:07:04.838 11:30:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:04.838 11:30:35 -- accel/accel.sh@19 -- # IFS=: 00:07:04.838 11:30:35 -- accel/accel.sh@19 -- # read -r var val 00:07:04.838 11:30:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:04.838 11:30:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.838 11:30:35 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.838 11:30:35 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.838 11:30:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.838 11:30:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.838 11:30:35 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.838 11:30:35 -- accel/accel.sh@40 -- # local IFS=, 00:07:04.838 11:30:35 -- accel/accel.sh@41 -- # jq -r . 00:07:04.838 [2024-05-15 11:30:35.280269] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
00:07:04.838 [2024-05-15 11:30:35.280312] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2906570 ] 00:07:04.838 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.838 [2024-05-15 11:30:35.349882] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.838 [2024-05-15 11:30:35.435359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.838 11:30:35 -- accel/accel.sh@20 -- # val= 00:07:04.838 11:30:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.838 11:30:35 -- accel/accel.sh@19 -- # IFS=: 00:07:04.838 11:30:35 -- accel/accel.sh@19 -- # read -r var val 00:07:04.838 11:30:35 -- accel/accel.sh@20 -- # val= 00:07:04.838 11:30:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.838 11:30:35 -- accel/accel.sh@19 -- # IFS=: 00:07:04.838 11:30:35 -- accel/accel.sh@19 -- # read -r var val 00:07:04.838 11:30:35 -- accel/accel.sh@20 -- # val= 00:07:04.838 11:30:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.838 11:30:35 -- accel/accel.sh@19 -- # IFS=: 00:07:04.838 11:30:35 -- accel/accel.sh@19 -- # read -r var val 00:07:04.838 11:30:35 -- accel/accel.sh@20 -- # val=0x1 00:07:04.838 11:30:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.838 11:30:35 -- accel/accel.sh@19 -- # IFS=: 00:07:04.839 11:30:35 -- accel/accel.sh@19 -- # read -r var val 00:07:04.839 11:30:35 -- accel/accel.sh@20 -- # val= 00:07:04.839 11:30:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.839 11:30:35 -- accel/accel.sh@19 -- # IFS=: 00:07:04.839 11:30:35 -- accel/accel.sh@19 -- # read -r var val 00:07:04.839 11:30:35 -- accel/accel.sh@20 -- # val= 00:07:04.839 11:30:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.839 11:30:35 -- accel/accel.sh@19 -- # IFS=: 00:07:04.839 11:30:35 -- accel/accel.sh@19 -- # read -r var val 00:07:04.839 11:30:35 -- accel/accel.sh@20 -- # val=decompress 00:07:04.839 11:30:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.839 11:30:35 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:04.839 11:30:35 -- accel/accel.sh@19 -- # IFS=: 00:07:04.839 11:30:35 -- accel/accel.sh@19 -- # read -r var val 00:07:04.839 11:30:35 -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:04.839 11:30:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.839 11:30:35 -- accel/accel.sh@19 -- # IFS=: 00:07:04.839 11:30:35 -- accel/accel.sh@19 -- # read -r var val 00:07:04.839 11:30:35 -- accel/accel.sh@20 -- # val= 00:07:04.839 11:30:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.839 11:30:35 -- accel/accel.sh@19 -- # IFS=: 00:07:04.839 11:30:35 -- accel/accel.sh@19 -- # read -r var val 00:07:04.839 11:30:35 -- accel/accel.sh@20 -- # val=software 00:07:04.839 11:30:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.839 11:30:35 -- accel/accel.sh@22 -- # accel_module=software 00:07:04.839 11:30:35 -- accel/accel.sh@19 -- # IFS=: 00:07:04.839 11:30:35 -- accel/accel.sh@19 -- # read -r var val 00:07:04.839 11:30:35 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:04.839 11:30:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.839 11:30:35 -- accel/accel.sh@19 -- # IFS=: 00:07:04.839 11:30:35 -- accel/accel.sh@19 -- # read -r var val 00:07:04.839 11:30:35 -- accel/accel.sh@20 -- # val=32 00:07:04.839 11:30:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.839 11:30:35 -- accel/accel.sh@19 -- # IFS=: 00:07:04.839 11:30:35 -- 
accel/accel.sh@19 -- # read -r var val 00:07:04.839 11:30:35 -- accel/accel.sh@20 -- # val=32 00:07:04.839 11:30:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.839 11:30:35 -- accel/accel.sh@19 -- # IFS=: 00:07:04.839 11:30:35 -- accel/accel.sh@19 -- # read -r var val 00:07:04.839 11:30:35 -- accel/accel.sh@20 -- # val=2 00:07:04.839 11:30:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.839 11:30:35 -- accel/accel.sh@19 -- # IFS=: 00:07:04.839 11:30:35 -- accel/accel.sh@19 -- # read -r var val 00:07:04.839 11:30:35 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.839 11:30:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.839 11:30:35 -- accel/accel.sh@19 -- # IFS=: 00:07:04.839 11:30:35 -- accel/accel.sh@19 -- # read -r var val 00:07:04.839 11:30:35 -- accel/accel.sh@20 -- # val=Yes 00:07:04.839 11:30:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.839 11:30:35 -- accel/accel.sh@19 -- # IFS=: 00:07:04.839 11:30:35 -- accel/accel.sh@19 -- # read -r var val 00:07:04.839 11:30:35 -- accel/accel.sh@20 -- # val= 00:07:04.839 11:30:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.839 11:30:35 -- accel/accel.sh@19 -- # IFS=: 00:07:04.839 11:30:35 -- accel/accel.sh@19 -- # read -r var val 00:07:04.839 11:30:35 -- accel/accel.sh@20 -- # val= 00:07:04.839 11:30:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.839 11:30:35 -- accel/accel.sh@19 -- # IFS=: 00:07:04.839 11:30:35 -- accel/accel.sh@19 -- # read -r var val 00:07:06.217 11:30:36 -- accel/accel.sh@20 -- # val= 00:07:06.217 11:30:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.217 11:30:36 -- accel/accel.sh@19 -- # IFS=: 00:07:06.217 11:30:36 -- accel/accel.sh@19 -- # read -r var val 00:07:06.217 11:30:36 -- accel/accel.sh@20 -- # val= 00:07:06.217 11:30:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.217 11:30:36 -- accel/accel.sh@19 -- # IFS=: 00:07:06.217 11:30:36 -- accel/accel.sh@19 -- # read -r var val 00:07:06.217 11:30:36 -- accel/accel.sh@20 -- # val= 00:07:06.217 11:30:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.217 11:30:36 -- accel/accel.sh@19 -- # IFS=: 00:07:06.217 11:30:36 -- accel/accel.sh@19 -- # read -r var val 00:07:06.217 11:30:36 -- accel/accel.sh@20 -- # val= 00:07:06.217 11:30:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.217 11:30:36 -- accel/accel.sh@19 -- # IFS=: 00:07:06.217 11:30:36 -- accel/accel.sh@19 -- # read -r var val 00:07:06.217 11:30:36 -- accel/accel.sh@20 -- # val= 00:07:06.217 11:30:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.217 11:30:36 -- accel/accel.sh@19 -- # IFS=: 00:07:06.217 11:30:36 -- accel/accel.sh@19 -- # read -r var val 00:07:06.217 11:30:36 -- accel/accel.sh@20 -- # val= 00:07:06.217 11:30:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.217 11:30:36 -- accel/accel.sh@19 -- # IFS=: 00:07:06.217 11:30:36 -- accel/accel.sh@19 -- # read -r var val 00:07:06.217 11:30:36 -- accel/accel.sh@20 -- # val= 00:07:06.217 11:30:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.217 11:30:36 -- accel/accel.sh@19 -- # IFS=: 00:07:06.217 11:30:36 -- accel/accel.sh@19 -- # read -r var val 00:07:06.217 11:30:36 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.217 11:30:36 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:06.217 11:30:36 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.217 00:07:06.217 real 0m1.423s 00:07:06.217 user 0m1.304s 00:07:06.217 sys 0m0.134s 00:07:06.217 11:30:36 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:06.217 11:30:36 -- common/autotest_common.sh@10 -- # set +x 
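The full_mthread variant just completed differs from the previous run only by -o 0, which, as the traced '111250 bytes' value suggests, makes accel_perf take the whole input file as a single transfer instead of 4096-byte chunks, so each of the two threads decompresses full-size buffers. The same sketch with that flag added (same stand-in config assumption as before):

    # -o 0: take the transfer size from the input itself (111250 bytes for bib)
    "$SPDK_DIR/build/examples/accel_perf" -c /dev/fd/62 \
        -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y -o 0 -T 2 \
        62< <(echo '{"subsystems": []}')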
00:07:06.217 ************************************ 00:07:06.217 END TEST accel_decomp_full_mthread 00:07:06.217 ************************************ 00:07:06.217 11:30:36 -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:06.217 11:30:36 -- accel/accel.sh@137 -- # build_accel_config 00:07:06.217 11:30:36 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:06.217 11:30:36 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.217 11:30:36 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.217 11:30:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.217 11:30:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.217 11:30:36 -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:06.217 11:30:36 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.217 11:30:36 -- accel/accel.sh@40 -- # local IFS=, 00:07:06.217 11:30:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:06.217 11:30:36 -- accel/accel.sh@41 -- # jq -r . 00:07:06.217 11:30:36 -- common/autotest_common.sh@10 -- # set +x 00:07:06.217 ************************************ 00:07:06.217 START TEST accel_dif_functional_tests 00:07:06.217 ************************************ 00:07:06.217 11:30:36 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:06.217 [2024-05-15 11:30:36.808250] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:07:06.217 [2024-05-15 11:30:36.808293] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2906768 ] 00:07:06.217 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.217 [2024-05-15 11:30:36.875476] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:06.217 [2024-05-15 11:30:36.962471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.217 [2024-05-15 11:30:36.962559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.217 [2024-05-15 11:30:36.962562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.476 00:07:06.476 00:07:06.476 CUnit - A unit testing framework for C - Version 2.1-3 00:07:06.476 http://cunit.sourceforge.net/ 00:07:06.476 00:07:06.476 00:07:06.476 Suite: accel_dif 00:07:06.476 Test: verify: DIF generated, GUARD check ...passed 00:07:06.476 Test: verify: DIF generated, APPTAG check ...passed 00:07:06.476 Test: verify: DIF generated, REFTAG check ...passed 00:07:06.476 Test: verify: DIF not generated, GUARD check ...[2024-05-15 11:30:37.043079] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:06.476 passed 00:07:06.476 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 11:30:37.043137] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:06.476 passed 00:07:06.476 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 11:30:37.043158] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:06.476 passed 00:07:06.476 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:06.476 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-15 11:30:37.043205] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:06.476 passed 00:07:06.476 Test: verify: 
APPTAG incorrect, no APPTAG check ...passed 00:07:06.476 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:06.476 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:06.476 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-15 11:30:37.043328] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:06.476 passed 00:07:06.476 Test: verify copy: DIF generated, GUARD check ...passed 00:07:06.476 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:06.476 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:06.476 Test: verify copy: DIF not generated, GUARD check ...[2024-05-15 11:30:37.043443] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:06.476 passed 00:07:06.476 Test: verify copy: DIF not generated, APPTAG check ...[2024-05-15 11:30:37.043469] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:06.476 passed 00:07:06.476 Test: verify copy: DIF not generated, REFTAG check ...[2024-05-15 11:30:37.043494] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:06.476 passed 00:07:06.476 Test: generate copy: DIF generated, GUARD check ...passed 00:07:06.476 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:06.476 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:06.476 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:06.476 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:06.476 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:06.476 Test: generate copy: iovecs-len validate ...[2024-05-15 11:30:37.043669] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:06.476 passed 00:07:06.476 Test: generate copy: buffer alignment validate ...passed 00:07:06.476 00:07:06.476 Run Summary: Type Total Ran Passed Failed Inactive 00:07:06.476 suites 1 1 n/a 0 0 00:07:06.476 tests 26 26 26 0 0 00:07:06.476 asserts 115 115 115 0 n/a 00:07:06.476 00:07:06.476 Elapsed time = 0.002 seconds 00:07:06.735 00:07:06.735 real 0m0.469s 00:07:06.735 user 0m0.662s 00:07:06.735 sys 0m0.160s 00:07:06.735 11:30:37 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:06.735 11:30:37 -- common/autotest_common.sh@10 -- # set +x 00:07:06.735 ************************************ 00:07:06.735 END TEST accel_dif_functional_tests 00:07:06.735 ************************************ 00:07:06.735 00:07:06.735 real 0m33.345s 00:07:06.735 user 0m36.119s 00:07:06.735 sys 0m5.314s 00:07:06.735 11:30:37 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:06.735 11:30:37 -- common/autotest_common.sh@10 -- # set +x 00:07:06.735 ************************************ 00:07:06.735 END TEST accel 00:07:06.735 ************************************ 00:07:06.735 11:30:37 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:06.735 11:30:37 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:06.735 11:30:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:06.735 11:30:37 -- common/autotest_common.sh@10 -- # set +x 00:07:06.735 ************************************ 00:07:06.735 START TEST accel_rpc 00:07:06.735 ************************************ 00:07:06.735 11:30:37 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:06.735 * Looking for test storage... 00:07:06.735 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:07:06.735 11:30:37 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:06.735 11:30:37 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2906929 00:07:06.735 11:30:37 -- accel/accel_rpc.sh@15 -- # waitforlisten 2906929 00:07:06.735 11:30:37 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:06.735 11:30:37 -- common/autotest_common.sh@827 -- # '[' -z 2906929 ']' 00:07:06.735 11:30:37 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.735 11:30:37 -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:06.735 11:30:37 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.735 11:30:37 -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:06.735 11:30:37 -- common/autotest_common.sh@10 -- # set +x 00:07:06.994 [2024-05-15 11:30:37.518265] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
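The accel_dif_functional_tests suite that just finished exercises the NVMe Data Integrity Field path: each protected block carries a guard tag (a CRC of the data), an application tag, and a reference tag (typically seeded from the LBA), and the negative tests corrupt one field at a time, so the dif.c *ERROR* lines above are the expected output of passing tests, not failures. The iovecs-len case likewise feeds spdk_dif_generate_copy a bounce buffer deliberately misaligned with the block size. The suite itself is a standalone CUnit binary driven the same way as accel_perf; a sketch with the same assumed empty config:

    # runs the whole DIF verify/generate-copy suite seen in the output above
    "$SPDK_DIR/test/accel/dif/dif" -c /dev/fd/62 62< <(echo '{"subsystems": []}')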
00:07:06.994 [2024-05-15 11:30:37.518335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2906929 ] 00:07:06.994 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.994 [2024-05-15 11:30:37.588507] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.994 [2024-05-15 11:30:37.674842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.931 11:30:38 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:07.931 11:30:38 -- common/autotest_common.sh@860 -- # return 0 00:07:07.931 11:30:38 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:07.931 11:30:38 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:07.931 11:30:38 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:07.931 11:30:38 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:07.931 11:30:38 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:07.931 11:30:38 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:07.931 11:30:38 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:07.931 11:30:38 -- common/autotest_common.sh@10 -- # set +x 00:07:07.931 ************************************ 00:07:07.931 START TEST accel_assign_opcode 00:07:07.931 ************************************ 00:07:07.931 11:30:38 -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:07:07.931 11:30:38 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:07.931 11:30:38 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.931 11:30:38 -- common/autotest_common.sh@10 -- # set +x 00:07:07.931 [2024-05-15 11:30:38.373041] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:07.931 11:30:38 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.931 11:30:38 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:07.931 11:30:38 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.931 11:30:38 -- common/autotest_common.sh@10 -- # set +x 00:07:07.931 [2024-05-15 11:30:38.381049] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:07.931 11:30:38 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.931 11:30:38 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:07.931 11:30:38 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.931 11:30:38 -- common/autotest_common.sh@10 -- # set +x 00:07:07.931 11:30:38 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.931 11:30:38 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:07.931 11:30:38 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.931 11:30:38 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:07.931 11:30:38 -- common/autotest_common.sh@10 -- # set +x 00:07:07.931 11:30:38 -- accel/accel_rpc.sh@42 -- # grep software 00:07:07.931 11:30:38 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.931 software 00:07:07.931 00:07:07.931 real 0m0.237s 00:07:07.931 user 0m0.048s 00:07:07.931 sys 0m0.014s 00:07:07.931 11:30:38 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:07.931 11:30:38 -- common/autotest_common.sh@10 -- # set +x 00:07:07.931 ************************************ 00:07:07.931 END TEST accel_assign_opcode 00:07:07.931 ************************************ 
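The accel_assign_opcode test works because spdk_tgt was launched with --wait-for-rpc: opcode-to-module bindings can only be changed before the accel framework initializes. The run assigns the copy opcode to a non-existent module (accepted at RPC time, as the NOTICE shows), reassigns it to software, starts the framework, and confirms the binding. The same sequence with rpc_cmd expanded into the script it wraps:

    # target must have been started as: spdk_tgt --wait-for-rpc
    "$SPDK_DIR/scripts/rpc.py" accel_assign_opc -o copy -m software
    "$SPDK_DIR/scripts/rpc.py" framework_start_init
    "$SPDK_DIR/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy   # prints: software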
00:07:07.931 11:30:38 -- accel/accel_rpc.sh@55 -- # killprocess 2906929 00:07:07.931 11:30:38 -- common/autotest_common.sh@946 -- # '[' -z 2906929 ']' 00:07:07.931 11:30:38 -- common/autotest_common.sh@950 -- # kill -0 2906929 00:07:07.931 11:30:38 -- common/autotest_common.sh@951 -- # uname 00:07:07.931 11:30:38 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:07.931 11:30:38 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2906929 00:07:08.191 11:30:38 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:08.191 11:30:38 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:08.191 11:30:38 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2906929' 00:07:08.191 killing process with pid 2906929 00:07:08.191 11:30:38 -- common/autotest_common.sh@965 -- # kill 2906929 00:07:08.191 11:30:38 -- common/autotest_common.sh@970 -- # wait 2906929 00:07:08.450 00:07:08.450 real 0m1.713s 00:07:08.450 user 0m1.761s 00:07:08.450 sys 0m0.485s 00:07:08.450 11:30:39 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:08.450 11:30:39 -- common/autotest_common.sh@10 -- # set +x 00:07:08.450 ************************************ 00:07:08.450 END TEST accel_rpc 00:07:08.450 ************************************ 00:07:08.450 11:30:39 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:08.450 11:30:39 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:08.450 11:30:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:08.450 11:30:39 -- common/autotest_common.sh@10 -- # set +x 00:07:08.450 ************************************ 00:07:08.450 START TEST app_cmdline 00:07:08.450 ************************************ 00:07:08.450 11:30:39 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:08.709 * Looking for test storage... 00:07:08.709 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:08.709 11:30:39 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:08.709 11:30:39 -- app/cmdline.sh@17 -- # spdk_tgt_pid=2907267 00:07:08.709 11:30:39 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:08.709 11:30:39 -- app/cmdline.sh@18 -- # waitforlisten 2907267 00:07:08.709 11:30:39 -- common/autotest_common.sh@827 -- # '[' -z 2907267 ']' 00:07:08.709 11:30:39 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.709 11:30:39 -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:08.709 11:30:39 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.709 11:30:39 -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:08.709 11:30:39 -- common/autotest_common.sh@10 -- # set +x 00:07:08.709 [2024-05-15 11:30:39.308584] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
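The app_cmdline test starting here runs the target with an RPC allowlist, --rpcs-allowed spdk_get_version,rpc_get_methods, and then checks both directions: the two allowed methods must answer, and any other method must be rejected with JSON-RPC error -32601 (Method not found), which is exactly what the env_dpdk_get_mem_stats call below produces. Reduced to its essentials (the sleep is a crude stand-in for the harness's waitforlisten):

    "$SPDK_DIR/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
    sleep 2   # stand-in for waitforlisten on /var/tmp/spdk.sock
    "$SPDK_DIR/scripts/rpc.py" spdk_get_version        # allowed
    "$SPDK_DIR/scripts/rpc.py" rpc_get_methods         # allowed
    "$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats && exit 1   # must fail: -32601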
00:07:08.709 [2024-05-15 11:30:39.308664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2907267 ] 00:07:08.709 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.709 [2024-05-15 11:30:39.379029] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.709 [2024-05-15 11:30:39.463136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.647 11:30:40 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:09.647 11:30:40 -- common/autotest_common.sh@860 -- # return 0 00:07:09.647 11:30:40 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:09.647 { 00:07:09.647 "version": "SPDK v24.05-pre git sha1 913aa023f", 00:07:09.647 "fields": { 00:07:09.647 "major": 24, 00:07:09.647 "minor": 5, 00:07:09.647 "patch": 0, 00:07:09.647 "suffix": "-pre", 00:07:09.647 "commit": "913aa023f" 00:07:09.647 } 00:07:09.647 } 00:07:09.647 11:30:40 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:09.647 11:30:40 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:09.647 11:30:40 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:09.647 11:30:40 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:09.647 11:30:40 -- app/cmdline.sh@26 -- # sort 00:07:09.647 11:30:40 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:09.647 11:30:40 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:09.647 11:30:40 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.647 11:30:40 -- common/autotest_common.sh@10 -- # set +x 00:07:09.647 11:30:40 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.647 11:30:40 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:09.647 11:30:40 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:09.647 11:30:40 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:09.647 11:30:40 -- common/autotest_common.sh@648 -- # local es=0 00:07:09.647 11:30:40 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:09.647 11:30:40 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:09.647 11:30:40 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.647 11:30:40 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:09.647 11:30:40 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.647 11:30:40 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:09.647 11:30:40 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.647 11:30:40 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:09.647 11:30:40 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:07:09.647 11:30:40 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:09.905 request: 00:07:09.905 { 00:07:09.905 "method": 
"env_dpdk_get_mem_stats", 00:07:09.905 "req_id": 1 00:07:09.905 } 00:07:09.905 Got JSON-RPC error response 00:07:09.905 response: 00:07:09.905 { 00:07:09.905 "code": -32601, 00:07:09.905 "message": "Method not found" 00:07:09.905 } 00:07:09.905 11:30:40 -- common/autotest_common.sh@651 -- # es=1 00:07:09.906 11:30:40 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:09.906 11:30:40 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:09.906 11:30:40 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:09.906 11:30:40 -- app/cmdline.sh@1 -- # killprocess 2907267 00:07:09.906 11:30:40 -- common/autotest_common.sh@946 -- # '[' -z 2907267 ']' 00:07:09.906 11:30:40 -- common/autotest_common.sh@950 -- # kill -0 2907267 00:07:09.906 11:30:40 -- common/autotest_common.sh@951 -- # uname 00:07:09.906 11:30:40 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:09.906 11:30:40 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2907267 00:07:09.906 11:30:40 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:09.906 11:30:40 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:09.906 11:30:40 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2907267' 00:07:09.906 killing process with pid 2907267 00:07:09.906 11:30:40 -- common/autotest_common.sh@965 -- # kill 2907267 00:07:09.906 11:30:40 -- common/autotest_common.sh@970 -- # wait 2907267 00:07:10.472 00:07:10.472 real 0m1.783s 00:07:10.472 user 0m2.087s 00:07:10.472 sys 0m0.494s 00:07:10.472 11:30:40 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:10.472 11:30:40 -- common/autotest_common.sh@10 -- # set +x 00:07:10.472 ************************************ 00:07:10.472 END TEST app_cmdline 00:07:10.472 ************************************ 00:07:10.472 11:30:40 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:10.472 11:30:40 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:10.472 11:30:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:10.472 11:30:40 -- common/autotest_common.sh@10 -- # set +x 00:07:10.472 ************************************ 00:07:10.472 START TEST version 00:07:10.472 ************************************ 00:07:10.472 11:30:41 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:10.472 * Looking for test storage... 
00:07:10.472 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:10.472 11:30:41 -- app/version.sh@17 -- # get_header_version major 00:07:10.472 11:30:41 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:10.472 11:30:41 -- app/version.sh@14 -- # cut -f2 00:07:10.472 11:30:41 -- app/version.sh@14 -- # tr -d '"' 00:07:10.472 11:30:41 -- app/version.sh@17 -- # major=24 00:07:10.472 11:30:41 -- app/version.sh@18 -- # get_header_version minor 00:07:10.472 11:30:41 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:10.472 11:30:41 -- app/version.sh@14 -- # cut -f2 00:07:10.472 11:30:41 -- app/version.sh@14 -- # tr -d '"' 00:07:10.472 11:30:41 -- app/version.sh@18 -- # minor=5 00:07:10.472 11:30:41 -- app/version.sh@19 -- # get_header_version patch 00:07:10.472 11:30:41 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:10.472 11:30:41 -- app/version.sh@14 -- # tr -d '"' 00:07:10.472 11:30:41 -- app/version.sh@14 -- # cut -f2 00:07:10.472 11:30:41 -- app/version.sh@19 -- # patch=0 00:07:10.472 11:30:41 -- app/version.sh@20 -- # get_header_version suffix 00:07:10.472 11:30:41 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:10.472 11:30:41 -- app/version.sh@14 -- # cut -f2 00:07:10.472 11:30:41 -- app/version.sh@14 -- # tr -d '"' 00:07:10.472 11:30:41 -- app/version.sh@20 -- # suffix=-pre 00:07:10.472 11:30:41 -- app/version.sh@22 -- # version=24.5 00:07:10.472 11:30:41 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:10.472 11:30:41 -- app/version.sh@28 -- # version=24.5rc0 00:07:10.472 11:30:41 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:10.472 11:30:41 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:10.472 11:30:41 -- app/version.sh@30 -- # py_version=24.5rc0 00:07:10.472 11:30:41 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:07:10.472 00:07:10.472 real 0m0.183s 00:07:10.472 user 0m0.092s 00:07:10.472 sys 0m0.134s 00:07:10.472 11:30:41 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:10.472 11:30:41 -- common/autotest_common.sh@10 -- # set +x 00:07:10.472 ************************************ 00:07:10.472 END TEST version 00:07:10.472 ************************************ 00:07:10.731 11:30:41 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:07:10.731 11:30:41 -- spdk/autotest.sh@194 -- # uname -s 00:07:10.731 11:30:41 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:10.731 11:30:41 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:10.731 11:30:41 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:10.731 11:30:41 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:10.731 11:30:41 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:07:10.731 11:30:41 -- spdk/autotest.sh@258 -- # timing_exit lib 00:07:10.731 11:30:41 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:10.731 11:30:41 -- common/autotest_common.sh@10 -- # set +x 
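The version test above cross-checks three sources of the version string: the SPDK_VERSION_* macros in include/spdk/version.h (pulled out with the grep | cut | tr pipeline in the trace), the 24.5rc0 value the harness derives from them (a zero patch level is dropped and the -pre suffix maps to rc0), and the python package's spdk.__version__. The header extraction reduces to roughly this, reusing the SPDK_DIR set earlier:

    get_header_version() {   # e.g. MAJOR -> 24, MINOR -> 5, SUFFIX -> -pre
        grep -E "^#define SPDK_VERSION_$1[[:space:]]+" \
            "$SPDK_DIR/include/spdk/version.h" | cut -f2 | tr -d '"'
    }
    version="$(get_header_version MAJOR).$(get_header_version MINOR)"
    [[ "$(get_header_version SUFFIX)" == -pre ]] && version+=rc0
    py=$(python3 -c 'import spdk; print(spdk.__version__)')
    [[ "$py" == "$version" ]]   # both 24.5rc0 in this run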
00:07:10.731 11:30:41 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:07:10.731 11:30:41 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:07:10.731 11:30:41 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:07:10.731 11:30:41 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:07:10.731 11:30:41 -- spdk/autotest.sh@281 -- # '[' rdma = rdma ']' 00:07:10.731 11:30:41 -- spdk/autotest.sh@282 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:10.731 11:30:41 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:10.731 11:30:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:10.731 11:30:41 -- common/autotest_common.sh@10 -- # set +x 00:07:10.731 ************************************ 00:07:10.731 START TEST nvmf_rdma 00:07:10.731 ************************************ 00:07:10.731 11:30:41 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:10.731 * Looking for test storage... 00:07:10.731 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:07:10.731 11:30:41 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:10.731 11:30:41 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:10.731 11:30:41 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:10.731 11:30:41 -- nvmf/common.sh@7 -- # uname -s 00:07:10.731 11:30:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:10.731 11:30:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:10.731 11:30:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:10.731 11:30:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:10.731 11:30:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:10.731 11:30:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:10.731 11:30:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:10.731 11:30:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:10.731 11:30:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:10.731 11:30:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:10.731 11:30:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:07:10.731 11:30:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:07:10.731 11:30:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:10.731 11:30:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:10.731 11:30:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:10.731 11:30:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:10.731 11:30:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:10.731 11:30:41 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:10.731 11:30:41 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:10.731 11:30:41 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:10.731 11:30:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.731 11:30:41 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.731 11:30:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.731 11:30:41 -- paths/export.sh@5 -- # export PATH 00:07:10.731 11:30:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.731 11:30:41 -- nvmf/common.sh@47 -- # : 0 00:07:10.731 11:30:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:10.731 11:30:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:10.731 11:30:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:10.731 11:30:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:10.731 11:30:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:10.731 11:30:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:10.731 11:30:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:10.731 11:30:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:10.731 11:30:41 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:10.731 11:30:41 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:10.731 11:30:41 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:10.731 11:30:41 -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:10.731 11:30:41 -- common/autotest_common.sh@10 -- # set +x 00:07:10.731 11:30:41 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:10.731 11:30:41 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:07:10.731 11:30:41 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:10.731 11:30:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:10.731 11:30:41 -- common/autotest_common.sh@10 -- # set +x 00:07:10.990 ************************************ 00:07:10.990 START TEST nvmf_example 00:07:10.990 ************************************ 00:07:10.990 11:30:41 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:07:10.990 * Looking for test storage... 
00:07:10.990 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:10.990 11:30:41 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:10.990 11:30:41 -- nvmf/common.sh@7 -- # uname -s 00:07:10.990 11:30:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:10.990 11:30:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:10.990 11:30:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:10.990 11:30:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:10.990 11:30:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:10.990 11:30:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:10.990 11:30:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:10.990 11:30:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:10.990 11:30:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:10.990 11:30:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:10.990 11:30:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:07:10.990 11:30:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:07:10.990 11:30:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:10.990 11:30:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:10.990 11:30:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:10.990 11:30:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:10.990 11:30:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:10.990 11:30:41 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:10.990 11:30:41 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:10.990 11:30:41 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:10.990 11:30:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.990 11:30:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.990 11:30:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.990 11:30:41 -- paths/export.sh@5 -- # export PATH 00:07:10.990 11:30:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.990 11:30:41 -- nvmf/common.sh@47 -- # : 0 00:07:10.990 11:30:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:10.990 11:30:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:10.990 11:30:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:10.990 11:30:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:10.990 11:30:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:10.990 11:30:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:10.990 11:30:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:10.990 11:30:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:10.990 11:30:41 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:10.990 11:30:41 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:10.990 11:30:41 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:10.991 11:30:41 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:10.991 11:30:41 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:10.991 11:30:41 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:10.991 11:30:41 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:10.991 11:30:41 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:10.991 11:30:41 -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:10.991 11:30:41 -- common/autotest_common.sh@10 -- # set +x 00:07:10.991 11:30:41 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:10.991 11:30:41 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:07:10.991 11:30:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:10.991 11:30:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:10.991 11:30:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:10.991 11:30:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:10.991 11:30:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.991 11:30:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:10.991 11:30:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.991 11:30:41 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:10.991 11:30:41 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:10.991 11:30:41 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:10.991 11:30:41 -- 
common/autotest_common.sh@10 -- # set +x 00:07:17.563 11:30:47 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:17.563 11:30:47 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:17.563 11:30:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:17.563 11:30:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:17.563 11:30:47 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:17.563 11:30:47 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:17.563 11:30:47 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:17.563 11:30:47 -- nvmf/common.sh@295 -- # net_devs=() 00:07:17.563 11:30:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:17.563 11:30:47 -- nvmf/common.sh@296 -- # e810=() 00:07:17.563 11:30:47 -- nvmf/common.sh@296 -- # local -ga e810 00:07:17.563 11:30:47 -- nvmf/common.sh@297 -- # x722=() 00:07:17.563 11:30:47 -- nvmf/common.sh@297 -- # local -ga x722 00:07:17.563 11:30:47 -- nvmf/common.sh@298 -- # mlx=() 00:07:17.563 11:30:47 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:17.563 11:30:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:17.563 11:30:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:17.563 11:30:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:17.563 11:30:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:17.563 11:30:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:17.564 11:30:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:17.564 11:30:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:17.564 11:30:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:17.564 11:30:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:17.564 11:30:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:17.564 11:30:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:17.564 11:30:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:17.564 11:30:47 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:17.564 11:30:47 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:17.564 11:30:47 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:17.564 11:30:47 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:17.564 11:30:47 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:17.564 11:30:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:17.564 11:30:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:17.564 11:30:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:07:17.564 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:07:17.564 11:30:47 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:17.564 11:30:47 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:17.564 11:30:47 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:17.564 11:30:47 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:17.564 11:30:47 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:17.564 11:30:47 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:17.564 11:30:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:17.564 11:30:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:07:17.564 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:07:17.564 11:30:47 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:17.564 11:30:47 -- nvmf/common.sh@346 -- # [[ 
mlx5_core == unbound ]] 00:07:17.564 11:30:47 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:17.564 11:30:47 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:17.564 11:30:47 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:17.564 11:30:47 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:17.564 11:30:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:17.564 11:30:47 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:17.564 11:30:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:17.564 11:30:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.564 11:30:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:17.564 11:30:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.564 11:30:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:07:17.564 Found net devices under 0000:18:00.0: mlx_0_0 00:07:17.564 11:30:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:17.564 11:30:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:17.564 11:30:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.564 11:30:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:17.564 11:30:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.564 11:30:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:07:17.564 Found net devices under 0000:18:00.1: mlx_0_1 00:07:17.564 11:30:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:17.564 11:30:47 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:17.564 11:30:47 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:17.564 11:30:47 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:17.564 11:30:47 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:07:17.564 11:30:47 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:07:17.564 11:30:47 -- nvmf/common.sh@409 -- # rdma_device_init 00:07:17.564 11:30:47 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:07:17.564 11:30:47 -- nvmf/common.sh@58 -- # uname 00:07:17.564 11:30:47 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:17.564 11:30:47 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:17.564 11:30:47 -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:17.564 11:30:47 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:17.564 11:30:47 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:17.564 11:30:47 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:17.564 11:30:47 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:17.564 11:30:47 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:17.564 11:30:47 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:07:17.564 11:30:47 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:17.564 11:30:47 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:17.564 11:30:47 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:17.564 11:30:47 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:17.564 11:30:47 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:17.564 11:30:47 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:17.564 11:30:47 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:17.564 11:30:47 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:17.564 11:30:47 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:17.564 11:30:47 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:17.564 11:30:47 -- nvmf/common.sh@104 
-- # echo mlx_0_0 00:07:17.564 11:30:47 -- nvmf/common.sh@105 -- # continue 2 00:07:17.564 11:30:47 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:17.564 11:30:47 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:17.564 11:30:47 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:17.564 11:30:47 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:17.564 11:30:47 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:17.564 11:30:47 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:17.564 11:30:47 -- nvmf/common.sh@105 -- # continue 2 00:07:17.564 11:30:47 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:17.564 11:30:47 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:17.564 11:30:47 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:17.564 11:30:47 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:17.564 11:30:47 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:17.564 11:30:47 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:17.564 11:30:47 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:17.564 11:30:47 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:17.564 11:30:47 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:17.564 28: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:17.564 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:07:17.564 altname enp24s0f0np0 00:07:17.564 altname ens785f0np0 00:07:17.564 inet 192.168.100.8/24 scope global mlx_0_0 00:07:17.564 valid_lft forever preferred_lft forever 00:07:17.564 11:30:47 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:17.564 11:30:47 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:17.564 11:30:47 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:17.564 11:30:47 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:17.564 11:30:47 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:17.564 11:30:47 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:17.564 11:30:47 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:17.564 11:30:47 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:17.564 11:30:47 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:17.564 29: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:17.564 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:07:17.564 altname enp24s0f1np1 00:07:17.564 altname ens785f1np1 00:07:17.564 inet 192.168.100.9/24 scope global mlx_0_1 00:07:17.564 valid_lft forever preferred_lft forever 00:07:17.564 11:30:47 -- nvmf/common.sh@411 -- # return 0 00:07:17.564 11:30:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:17.564 11:30:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:17.564 11:30:47 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:07:17.564 11:30:47 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:07:17.564 11:30:47 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:17.564 11:30:47 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:17.564 11:30:47 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:17.564 11:30:47 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:17.564 11:30:47 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:17.564 11:30:47 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:17.564 11:30:47 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:17.564 11:30:47 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:17.564 11:30:47 -- 
nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:17.564 11:30:47 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:17.564 11:30:47 -- nvmf/common.sh@105 -- # continue 2 00:07:17.564 11:30:47 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:17.564 11:30:47 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:17.564 11:30:47 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:17.564 11:30:47 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:17.564 11:30:47 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:17.564 11:30:47 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:17.564 11:30:47 -- nvmf/common.sh@105 -- # continue 2 00:07:17.564 11:30:47 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:17.564 11:30:47 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:17.564 11:30:47 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:17.564 11:30:47 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:17.564 11:30:47 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:17.564 11:30:47 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:17.564 11:30:47 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:17.564 11:30:47 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:17.564 11:30:47 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:17.564 11:30:47 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:17.564 11:30:47 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:17.564 11:30:47 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:17.564 11:30:47 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:07:17.564 192.168.100.9' 00:07:17.564 11:30:47 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:07:17.564 192.168.100.9' 00:07:17.564 11:30:47 -- nvmf/common.sh@446 -- # head -n 1 00:07:17.564 11:30:47 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:17.564 11:30:47 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:07:17.564 192.168.100.9' 00:07:17.564 11:30:47 -- nvmf/common.sh@447 -- # tail -n +2 00:07:17.564 11:30:47 -- nvmf/common.sh@447 -- # head -n 1 00:07:17.564 11:30:47 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:17.564 11:30:47 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:07:17.564 11:30:47 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:17.565 11:30:47 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:07:17.565 11:30:47 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:07:17.565 11:30:47 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:07:17.565 11:30:47 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:17.565 11:30:47 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:17.565 11:30:47 -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:17.565 11:30:47 -- common/autotest_common.sh@10 -- # set +x 00:07:17.565 11:30:47 -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:07:17.565 11:30:47 -- target/nvmf_example.sh@34 -- # nvmfpid=2910472 00:07:17.565 11:30:47 -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:17.565 11:30:47 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:17.565 11:30:47 -- target/nvmf_example.sh@36 -- # waitforlisten 2910472 00:07:17.565 11:30:47 -- common/autotest_common.sh@827 -- # '[' -z 2910472 ']' 00:07:17.565 11:30:47 -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.565 11:30:47 -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:17.565 11:30:47 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.565 11:30:47 -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:17.565 11:30:47 -- common/autotest_common.sh@10 -- # set +x 00:07:17.565 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.143 11:30:48 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:18.143 11:30:48 -- common/autotest_common.sh@860 -- # return 0 00:07:18.143 11:30:48 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:18.143 11:30:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:18.143 11:30:48 -- common/autotest_common.sh@10 -- # set +x 00:07:18.143 11:30:48 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:18.143 11:30:48 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.143 11:30:48 -- common/autotest_common.sh@10 -- # set +x 00:07:18.405 11:30:48 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.405 11:30:48 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:18.405 11:30:48 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.405 11:30:48 -- common/autotest_common.sh@10 -- # set +x 00:07:18.405 11:30:49 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.405 11:30:49 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:18.405 11:30:49 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:18.405 11:30:49 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.405 11:30:49 -- common/autotest_common.sh@10 -- # set +x 00:07:18.405 11:30:49 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.405 11:30:49 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:18.405 11:30:49 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:18.405 11:30:49 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.405 11:30:49 -- common/autotest_common.sh@10 -- # set +x 00:07:18.405 11:30:49 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.405 11:30:49 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:18.405 11:30:49 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.405 11:30:49 -- common/autotest_common.sh@10 -- # set +x 00:07:18.405 11:30:49 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.405 11:30:49 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:18.405 11:30:49 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:18.405 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.667 Initializing NVMe Controllers 00:07:30.667 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:07:30.667 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 
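
The trace above brings the example NVMe-oF target up entirely over the RPC socket: create an RDMA transport, back it with a 64 MiB malloc bdev, wrap that in subsystem nqn.2016-06.io.spdk:cnode1, and expose a listener on 192.168.100.8:4420 before pointing spdk_nvme_perf at it. A minimal hand-run sketch of the same sequence follows; the binaries, RPC verbs, and perf flags are copied from the trace, while the sleep-based startup wait and the explicit scripts/rpc.py wrapper (the harness's rpc_cmd drives the same RPC server) are simplifications.

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  rpc="$SPDK/scripts/rpc.py"                    # talks to /var/tmp/spdk.sock by default

  # Start the example target on cores 0-3 with the flags from the trace.
  "$SPDK/build/examples/nvmf" -i 0 -g 10000 -m 0xF &
  nvmf_pid=$!
  sleep 2                                       # crude stand-in for the harness's waitforlisten

  # Same RPC sequence as the nvmf_example.sh traces at lines 45-57 above.
  "$rpc" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  "$rpc" bdev_malloc_create 64 512              # 64 MiB bdev, 512 B blocks -> Malloc0
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

  # Same workload as the run below: queue depth 64, 4 KiB I/O, 30% reads, 10 seconds.
  "$SPDK/build/bin/spdk_nvme_perf" -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

  kill "$nvmf_pid"

In the results table that follows, the latency columns are in microseconds: the single Malloc namespace sustains 26017.71 IOPS (101.63 MiB/s at the 4 KiB I/O size) with an average latency of 2459.66 us, a 633.64 us floor, and a 12177.57 us worst case.
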
00:07:30.667 Initialization complete. Launching workers. 00:07:30.667 ======================================================== 00:07:30.667 Latency(us) 00:07:30.667 Device Information : IOPS MiB/s Average min max 00:07:30.667 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 26017.71 101.63 2459.66 633.64 12177.57 00:07:30.667 ======================================================== 00:07:30.667 Total : 26017.71 101.63 2459.66 633.64 12177.57 00:07:30.667 00:07:30.667 11:31:00 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:30.667 11:31:00 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:30.667 11:31:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:30.667 11:31:00 -- nvmf/common.sh@117 -- # sync 00:07:30.667 11:31:00 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:30.667 11:31:00 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:30.667 11:31:00 -- nvmf/common.sh@120 -- # set +e 00:07:30.667 11:31:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:30.667 11:31:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:30.667 rmmod nvme_rdma 00:07:30.667 rmmod nvme_fabrics 00:07:30.667 11:31:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:30.667 11:31:00 -- nvmf/common.sh@124 -- # set -e 00:07:30.667 11:31:00 -- nvmf/common.sh@125 -- # return 0 00:07:30.667 11:31:00 -- nvmf/common.sh@478 -- # '[' -n 2910472 ']' 00:07:30.667 11:31:00 -- nvmf/common.sh@479 -- # killprocess 2910472 00:07:30.667 11:31:00 -- common/autotest_common.sh@946 -- # '[' -z 2910472 ']' 00:07:30.667 11:31:00 -- common/autotest_common.sh@950 -- # kill -0 2910472 00:07:30.667 11:31:00 -- common/autotest_common.sh@951 -- # uname 00:07:30.667 11:31:00 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:30.667 11:31:00 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2910472 00:07:30.667 11:31:00 -- common/autotest_common.sh@952 -- # process_name=nvmf 00:07:30.667 11:31:00 -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:07:30.667 11:31:00 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2910472' 00:07:30.667 killing process with pid 2910472 00:07:30.667 11:31:00 -- common/autotest_common.sh@965 -- # kill 2910472 00:07:30.667 11:31:00 -- common/autotest_common.sh@970 -- # wait 2910472 00:07:30.667 [2024-05-15 11:31:00.465154] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:07:30.667 nvmf threads initialize successfully 00:07:30.667 bdev subsystem init successfully 00:07:30.667 created a nvmf target service 00:07:30.667 create targets's poll groups done 00:07:30.667 all subsystems of target started 00:07:30.667 nvmf target is running 00:07:30.667 all subsystems of target stopped 00:07:30.667 destroy targets's poll groups done 00:07:30.667 destroyed the nvmf target service 00:07:30.667 bdev subsystem finish successfully 00:07:30.667 nvmf threads destroy successfully 00:07:30.667 11:31:00 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:30.667 11:31:00 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:07:30.667 11:31:00 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:30.667 11:31:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:30.667 11:31:00 -- common/autotest_common.sh@10 -- # set +x 00:07:30.667 00:07:30.667 real 0m19.202s 00:07:30.667 user 0m52.235s 00:07:30.667 sys 0m5.252s 00:07:30.667 11:31:00 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:30.667 11:31:00 -- common/autotest_common.sh@10 
-- # set +x 00:07:30.667 ************************************ 00:07:30.667 END TEST nvmf_example 00:07:30.667 ************************************ 00:07:30.667 11:31:00 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:07:30.667 11:31:00 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:30.667 11:31:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:30.667 11:31:00 -- common/autotest_common.sh@10 -- # set +x 00:07:30.667 ************************************ 00:07:30.667 START TEST nvmf_filesystem 00:07:30.667 ************************************ 00:07:30.667 11:31:00 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:07:30.667 * Looking for test storage... 00:07:30.667 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:30.667 11:31:00 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:07:30.667 11:31:00 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:30.667 11:31:00 -- common/autotest_common.sh@34 -- # set -e 00:07:30.667 11:31:00 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:30.667 11:31:00 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:30.667 11:31:00 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:07:30.667 11:31:00 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:30.667 11:31:00 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:07:30.667 11:31:00 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:30.667 11:31:00 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:30.667 11:31:00 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:30.667 11:31:00 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:30.667 11:31:00 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:30.667 11:31:00 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:30.667 11:31:00 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:30.667 11:31:00 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:30.667 11:31:00 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:30.667 11:31:00 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:30.667 11:31:00 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:30.667 11:31:00 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:30.667 11:31:00 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:30.667 11:31:00 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:30.667 11:31:00 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:30.667 11:31:00 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:30.667 11:31:00 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:30.667 11:31:00 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:30.667 11:31:00 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:07:30.667 11:31:00 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:30.667 11:31:00 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:30.667 11:31:00 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:30.667 11:31:00 -- common/build_config.sh@23 -- # 
CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:30.667 11:31:00 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:30.667 11:31:00 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:30.667 11:31:00 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:30.667 11:31:00 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:30.667 11:31:00 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:30.667 11:31:00 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:30.667 11:31:00 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:30.667 11:31:00 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:30.667 11:31:00 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:30.667 11:31:00 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:30.667 11:31:00 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:30.667 11:31:00 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:30.667 11:31:00 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:07:30.667 11:31:00 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:30.667 11:31:00 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:30.667 11:31:00 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:30.667 11:31:00 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:30.667 11:31:00 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:30.667 11:31:00 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:30.667 11:31:00 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:30.667 11:31:00 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:30.667 11:31:00 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:30.667 11:31:00 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:30.667 11:31:00 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:30.667 11:31:00 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:30.667 11:31:00 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:30.667 11:31:00 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:30.668 11:31:00 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:07:30.668 11:31:00 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:30.668 11:31:00 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:07:30.668 11:31:00 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:07:30.668 11:31:00 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:07:30.668 11:31:00 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:07:30.668 11:31:00 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:07:30.668 11:31:00 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:07:30.668 11:31:00 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:07:30.668 11:31:00 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:07:30.668 11:31:00 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:07:30.668 11:31:00 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:07:30.668 11:31:00 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:07:30.668 11:31:00 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:07:30.668 11:31:00 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:07:30.668 11:31:00 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:07:30.668 11:31:00 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:07:30.668 11:31:00 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:30.668 11:31:00 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:07:30.668 11:31:00 -- 
common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:07:30.668 11:31:00 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:07:30.668 11:31:00 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:07:30.668 11:31:00 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:07:30.668 11:31:00 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:07:30.668 11:31:00 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:07:30.668 11:31:00 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:07:30.668 11:31:00 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:07:30.668 11:31:00 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:07:30.668 11:31:00 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:07:30.668 11:31:00 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:30.668 11:31:00 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:07:30.668 11:31:00 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:07:30.668 11:31:00 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:07:30.668 11:31:00 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:07:30.668 11:31:00 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:07:30.668 11:31:00 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:07:30.668 11:31:00 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:30.668 11:31:00 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:07:30.668 11:31:00 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:30.668 11:31:00 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:07:30.668 11:31:00 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:30.668 11:31:00 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:30.668 11:31:00 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:30.668 11:31:00 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:30.668 11:31:00 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:30.668 11:31:00 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:30.668 11:31:00 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:07:30.668 11:31:00 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:30.668 #define SPDK_CONFIG_H 00:07:30.668 #define SPDK_CONFIG_APPS 1 00:07:30.668 #define SPDK_CONFIG_ARCH native 00:07:30.668 #undef SPDK_CONFIG_ASAN 00:07:30.668 #undef SPDK_CONFIG_AVAHI 00:07:30.668 #undef SPDK_CONFIG_CET 00:07:30.668 #define SPDK_CONFIG_COVERAGE 1 00:07:30.668 #define SPDK_CONFIG_CROSS_PREFIX 00:07:30.668 #undef SPDK_CONFIG_CRYPTO 00:07:30.668 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:30.668 #undef SPDK_CONFIG_CUSTOMOCF 00:07:30.668 #undef SPDK_CONFIG_DAOS 00:07:30.668 #define SPDK_CONFIG_DAOS_DIR 00:07:30.668 #define SPDK_CONFIG_DEBUG 1 00:07:30.668 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:30.668 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:07:30.668 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:30.668 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:30.668 #undef 
SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:30.668 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:07:30.668 #define SPDK_CONFIG_EXAMPLES 1 00:07:30.668 #undef SPDK_CONFIG_FC 00:07:30.668 #define SPDK_CONFIG_FC_PATH 00:07:30.668 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:30.668 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:30.668 #undef SPDK_CONFIG_FUSE 00:07:30.668 #undef SPDK_CONFIG_FUZZER 00:07:30.668 #define SPDK_CONFIG_FUZZER_LIB 00:07:30.668 #undef SPDK_CONFIG_GOLANG 00:07:30.668 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:30.668 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:30.668 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:30.668 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:07:30.668 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:30.668 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:30.668 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:30.668 #define SPDK_CONFIG_IDXD 1 00:07:30.668 #undef SPDK_CONFIG_IDXD_KERNEL 00:07:30.668 #undef SPDK_CONFIG_IPSEC_MB 00:07:30.668 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:30.668 #define SPDK_CONFIG_ISAL 1 00:07:30.668 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:30.668 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:30.668 #define SPDK_CONFIG_LIBDIR 00:07:30.668 #undef SPDK_CONFIG_LTO 00:07:30.668 #define SPDK_CONFIG_MAX_LCORES 00:07:30.668 #define SPDK_CONFIG_NVME_CUSE 1 00:07:30.668 #undef SPDK_CONFIG_OCF 00:07:30.668 #define SPDK_CONFIG_OCF_PATH 00:07:30.668 #define SPDK_CONFIG_OPENSSL_PATH 00:07:30.668 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:30.668 #define SPDK_CONFIG_PGO_DIR 00:07:30.668 #undef SPDK_CONFIG_PGO_USE 00:07:30.668 #define SPDK_CONFIG_PREFIX /usr/local 00:07:30.668 #undef SPDK_CONFIG_RAID5F 00:07:30.668 #undef SPDK_CONFIG_RBD 00:07:30.668 #define SPDK_CONFIG_RDMA 1 00:07:30.668 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:30.668 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:30.668 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:30.668 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:30.668 #define SPDK_CONFIG_SHARED 1 00:07:30.668 #undef SPDK_CONFIG_SMA 00:07:30.668 #define SPDK_CONFIG_TESTS 1 00:07:30.668 #undef SPDK_CONFIG_TSAN 00:07:30.668 #define SPDK_CONFIG_UBLK 1 00:07:30.668 #define SPDK_CONFIG_UBSAN 1 00:07:30.668 #undef SPDK_CONFIG_UNIT_TESTS 00:07:30.668 #undef SPDK_CONFIG_URING 00:07:30.668 #define SPDK_CONFIG_URING_PATH 00:07:30.668 #undef SPDK_CONFIG_URING_ZNS 00:07:30.668 #undef SPDK_CONFIG_USDT 00:07:30.668 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:30.668 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:30.668 #undef SPDK_CONFIG_VFIO_USER 00:07:30.668 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:30.668 #define SPDK_CONFIG_VHOST 1 00:07:30.668 #define SPDK_CONFIG_VIRTIO 1 00:07:30.668 #undef SPDK_CONFIG_VTUNE 00:07:30.668 #define SPDK_CONFIG_VTUNE_DIR 00:07:30.668 #define SPDK_CONFIG_WERROR 1 00:07:30.668 #define SPDK_CONFIG_WPDK_DIR 00:07:30.668 #undef SPDK_CONFIG_XNVME 00:07:30.668 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:30.668 11:31:00 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:30.668 11:31:00 -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:30.668 11:31:00 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.668 11:31:00 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.668 11:31:00 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.668 11:31:00 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.668 11:31:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.668 11:31:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.668 11:31:00 -- paths/export.sh@5 -- # export PATH 00:07:30.668 11:31:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.668 11:31:00 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:07:30.668 11:31:00 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:07:30.668 11:31:00 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:07:30.668 11:31:00 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:07:30.668 11:31:00 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:30.668 11:31:00 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:30.668 11:31:00 -- pm/common@64 -- # TEST_TAG=N/A 00:07:30.668 11:31:00 -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:07:30.668 11:31:00 -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:07:30.668 11:31:00 -- pm/common@68 -- # uname -s 00:07:30.668 11:31:00 -- pm/common@68 -- # PM_OS=Linux 00:07:30.668 11:31:00 -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:30.668 11:31:00 -- pm/common@70 -- # declare -A 
MONITOR_RESOURCES_SUDO 00:07:30.669 11:31:00 -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:30.669 11:31:00 -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:30.669 11:31:00 -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:30.669 11:31:00 -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:30.669 11:31:00 -- pm/common@76 -- # SUDO[0]= 00:07:30.669 11:31:00 -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:30.669 11:31:00 -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:30.669 11:31:00 -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:30.669 11:31:00 -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:30.669 11:31:00 -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:30.669 11:31:00 -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:30.669 11:31:00 -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:30.669 11:31:00 -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:30.669 11:31:00 -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:07:30.669 11:31:00 -- common/autotest_common.sh@57 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:30.669 11:31:00 -- common/autotest_common.sh@61 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:30.669 11:31:00 -- common/autotest_common.sh@63 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:30.669 11:31:00 -- common/autotest_common.sh@65 -- # : 1 00:07:30.669 11:31:00 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:30.669 11:31:00 -- common/autotest_common.sh@67 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:30.669 11:31:00 -- common/autotest_common.sh@69 -- # : 00:07:30.669 11:31:00 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:30.669 11:31:00 -- common/autotest_common.sh@71 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:30.669 11:31:00 -- common/autotest_common.sh@73 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:30.669 11:31:00 -- common/autotest_common.sh@75 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:30.669 11:31:00 -- common/autotest_common.sh@77 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:30.669 11:31:00 -- common/autotest_common.sh@79 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:30.669 11:31:00 -- common/autotest_common.sh@81 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:30.669 11:31:00 -- common/autotest_common.sh@83 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:30.669 11:31:00 -- common/autotest_common.sh@85 -- # : 1 00:07:30.669 11:31:00 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:07:30.669 11:31:00 -- common/autotest_common.sh@87 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:30.669 11:31:00 -- common/autotest_common.sh@89 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:30.669 11:31:00 -- 
common/autotest_common.sh@91 -- # : 1 00:07:30.669 11:31:00 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:30.669 11:31:00 -- common/autotest_common.sh@93 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:30.669 11:31:00 -- common/autotest_common.sh@95 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:30.669 11:31:00 -- common/autotest_common.sh@97 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:30.669 11:31:00 -- common/autotest_common.sh@99 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:30.669 11:31:00 -- common/autotest_common.sh@101 -- # : rdma 00:07:30.669 11:31:00 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:30.669 11:31:00 -- common/autotest_common.sh@103 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:30.669 11:31:00 -- common/autotest_common.sh@105 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:30.669 11:31:00 -- common/autotest_common.sh@107 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:30.669 11:31:00 -- common/autotest_common.sh@109 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:30.669 11:31:00 -- common/autotest_common.sh@111 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:30.669 11:31:00 -- common/autotest_common.sh@113 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:30.669 11:31:00 -- common/autotest_common.sh@115 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:30.669 11:31:00 -- common/autotest_common.sh@117 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:30.669 11:31:00 -- common/autotest_common.sh@119 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:30.669 11:31:00 -- common/autotest_common.sh@121 -- # : 1 00:07:30.669 11:31:00 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:30.669 11:31:00 -- common/autotest_common.sh@123 -- # : 00:07:30.669 11:31:00 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:30.669 11:31:00 -- common/autotest_common.sh@125 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:30.669 11:31:00 -- common/autotest_common.sh@127 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:30.669 11:31:00 -- common/autotest_common.sh@129 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:30.669 11:31:00 -- common/autotest_common.sh@131 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:07:30.669 11:31:00 -- common/autotest_common.sh@133 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:07:30.669 11:31:00 -- common/autotest_common.sh@135 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:30.669 11:31:00 -- common/autotest_common.sh@137 -- # : 00:07:30.669 11:31:00 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:30.669 
11:31:00 -- common/autotest_common.sh@139 -- # : true 00:07:30.669 11:31:00 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:30.669 11:31:00 -- common/autotest_common.sh@141 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:30.669 11:31:00 -- common/autotest_common.sh@143 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:30.669 11:31:00 -- common/autotest_common.sh@145 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:30.669 11:31:00 -- common/autotest_common.sh@147 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:30.669 11:31:00 -- common/autotest_common.sh@149 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:30.669 11:31:00 -- common/autotest_common.sh@151 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:30.669 11:31:00 -- common/autotest_common.sh@153 -- # : mlx5 00:07:30.669 11:31:00 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:07:30.669 11:31:00 -- common/autotest_common.sh@155 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:30.669 11:31:00 -- common/autotest_common.sh@157 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:30.669 11:31:00 -- common/autotest_common.sh@159 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:30.669 11:31:00 -- common/autotest_common.sh@161 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:30.669 11:31:00 -- common/autotest_common.sh@163 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:30.669 11:31:00 -- common/autotest_common.sh@166 -- # : 00:07:30.669 11:31:00 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:30.669 11:31:00 -- common/autotest_common.sh@168 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:30.669 11:31:00 -- common/autotest_common.sh@170 -- # : 0 00:07:30.669 11:31:00 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:30.669 11:31:00 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:07:30.669 11:31:00 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:07:30.669 11:31:00 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:07:30.669 11:31:00 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:07:30.669 11:31:00 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:30.669 11:31:00 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:30.669 11:31:00 -- common/autotest_common.sh@177 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:30.669 11:31:00 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:30.669 11:31:00 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:30.669 11:31:00 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:30.670 11:31:00 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:30.670 11:31:00 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:30.670 11:31:00 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:30.670 11:31:00 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:30.670 11:31:00 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:30.670 11:31:00 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:30.670 11:31:00 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:30.670 11:31:00 -- common/autotest_common.sh@193 -- # 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:30.670 11:31:00 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:30.670 11:31:00 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:07:30.670 11:31:00 -- common/autotest_common.sh@199 -- # cat 00:07:30.670 11:31:00 -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:07:30.670 11:31:00 -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:30.670 11:31:00 -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:30.670 11:31:00 -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:30.670 11:31:00 -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:30.670 11:31:00 -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:07:30.670 11:31:00 -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:07:30.670 11:31:00 -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:07:30.670 11:31:00 -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:07:30.670 11:31:00 -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:07:30.670 11:31:00 -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:07:30.670 11:31:00 -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:30.670 11:31:00 -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:30.670 11:31:00 -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:30.670 11:31:00 -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:30.670 11:31:00 -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:30.670 11:31:00 -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:30.670 11:31:00 -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:30.670 11:31:00 -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:30.670 11:31:00 -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:07:30.670 11:31:00 -- common/autotest_common.sh@262 -- # export valgrind= 00:07:30.670 11:31:00 -- common/autotest_common.sh@262 -- # valgrind= 00:07:30.670 11:31:00 -- common/autotest_common.sh@268 -- # uname -s 00:07:30.670 11:31:00 -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:07:30.670 11:31:00 -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:07:30.670 11:31:00 -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:07:30.670 11:31:00 -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:07:30.670 11:31:00 -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:30.670 11:31:00 -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:30.670 11:31:00 -- common/autotest_common.sh@278 -- # MAKE=make 00:07:30.670 11:31:00 -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j72 00:07:30.670 11:31:00 -- 
common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:07:30.670 11:31:00 -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:07:30.670 11:31:00 -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:07:30.670 11:31:00 -- common/autotest_common.sh@298 -- # TEST_MODE= 00:07:30.670 11:31:00 -- common/autotest_common.sh@299 -- # for i in "$@" 00:07:30.670 11:31:00 -- common/autotest_common.sh@300 -- # case "$i" in 00:07:30.670 11:31:00 -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=rdma 00:07:30.670 11:31:00 -- common/autotest_common.sh@317 -- # [[ -z 2912333 ]] 00:07:30.670 11:31:00 -- common/autotest_common.sh@317 -- # kill -0 2912333 00:07:30.670 11:31:00 -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:07:30.670 11:31:00 -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:07:30.670 11:31:00 -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:07:30.670 11:31:00 -- common/autotest_common.sh@330 -- # local mount target_dir 00:07:30.670 11:31:00 -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:07:30.670 11:31:00 -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:07:30.670 11:31:00 -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:07:30.670 11:31:00 -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:07:30.670 11:31:01 -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.pr7t47 00:07:30.670 11:31:01 -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:30.670 11:31:01 -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:07:30.670 11:31:01 -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:07:30.670 11:31:01 -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.pr7t47/tests/target /tmp/spdk.pr7t47 00:07:30.670 11:31:01 -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:07:30.670 11:31:01 -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:30.670 11:31:01 -- common/autotest_common.sh@326 -- # df -T 00:07:30.670 11:31:01 -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:07:30.670 11:31:01 -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:07:30.670 11:31:01 -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:07:30.670 11:31:01 -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:07:30.670 11:31:01 -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:07:30.670 11:31:01 -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:07:30.670 11:31:01 -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:30.670 11:31:01 -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:07:30.670 11:31:01 -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:07:30.670 11:31:01 -- common/autotest_common.sh@361 -- # avails["$mount"]=966955008 00:07:30.670 11:31:01 -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:07:30.670 11:31:01 -- common/autotest_common.sh@362 -- # uses["$mount"]=4317474816 00:07:30.670 11:31:01 -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:30.670 11:31:01 -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:07:30.670 11:31:01 -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:07:30.670 11:31:01 -- common/autotest_common.sh@361 -- # 
avails["$mount"]=84902006784 00:07:30.670 11:31:01 -- common/autotest_common.sh@361 -- # sizes["$mount"]=94508605440 00:07:30.670 11:31:01 -- common/autotest_common.sh@362 -- # uses["$mount"]=9606598656 00:07:30.670 11:31:01 -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:30.670 11:31:01 -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:30.670 11:31:01 -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:30.670 11:31:01 -- common/autotest_common.sh@361 -- # avails["$mount"]=47250927616 00:07:30.670 11:31:01 -- common/autotest_common.sh@361 -- # sizes["$mount"]=47254302720 00:07:30.670 11:31:01 -- common/autotest_common.sh@362 -- # uses["$mount"]=3375104 00:07:30.670 11:31:01 -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:30.670 11:31:01 -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:30.670 11:31:01 -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:30.670 11:31:01 -- common/autotest_common.sh@361 -- # avails["$mount"]=18892554240 00:07:30.670 11:31:01 -- common/autotest_common.sh@361 -- # sizes["$mount"]=18901721088 00:07:30.670 11:31:01 -- common/autotest_common.sh@362 -- # uses["$mount"]=9166848 00:07:30.670 11:31:01 -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:30.670 11:31:01 -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:30.670 11:31:01 -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:30.670 11:31:01 -- common/autotest_common.sh@361 -- # avails["$mount"]=47253884928 00:07:30.670 11:31:01 -- common/autotest_common.sh@361 -- # sizes["$mount"]=47254302720 00:07:30.670 11:31:01 -- common/autotest_common.sh@362 -- # uses["$mount"]=417792 00:07:30.670 11:31:01 -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:30.670 11:31:01 -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:30.670 11:31:01 -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:30.670 11:31:01 -- common/autotest_common.sh@361 -- # avails["$mount"]=9450856448 00:07:30.670 11:31:01 -- common/autotest_common.sh@361 -- # sizes["$mount"]=9450860544 00:07:30.670 11:31:01 -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:07:30.670 11:31:01 -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:30.670 11:31:01 -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:07:30.670 * Looking for test storage... 
00:07:30.670 11:31:01 -- common/autotest_common.sh@367 -- # local target_space new_size 00:07:30.670 11:31:01 -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:07:30.670 11:31:01 -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:30.670 11:31:01 -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:30.670 11:31:01 -- common/autotest_common.sh@371 -- # mount=/ 00:07:30.670 11:31:01 -- common/autotest_common.sh@373 -- # target_space=84902006784 00:07:30.670 11:31:01 -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:07:30.670 11:31:01 -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:07:30.670 11:31:01 -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:07:30.670 11:31:01 -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:07:30.670 11:31:01 -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:07:30.670 11:31:01 -- common/autotest_common.sh@380 -- # new_size=11821191168 00:07:30.670 11:31:01 -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:30.670 11:31:01 -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:30.670 11:31:01 -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:30.670 11:31:01 -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:30.671 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:30.671 11:31:01 -- common/autotest_common.sh@388 -- # return 0 00:07:30.671 11:31:01 -- common/autotest_common.sh@1678 -- # set -o errtrace 00:07:30.671 11:31:01 -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:07:30.671 11:31:01 -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:30.671 11:31:01 -- common/autotest_common.sh@1682 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:30.671 11:31:01 -- common/autotest_common.sh@1683 -- # true 00:07:30.671 11:31:01 -- common/autotest_common.sh@1685 -- # xtrace_fd 00:07:30.671 11:31:01 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:30.671 11:31:01 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:30.671 11:31:01 -- common/autotest_common.sh@27 -- # exec 00:07:30.671 11:31:01 -- common/autotest_common.sh@29 -- # exec 00:07:30.671 11:31:01 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:30.671 11:31:01 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:30.671 11:31:01 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:30.671 11:31:01 -- common/autotest_common.sh@18 -- # set -x 00:07:30.671 11:31:01 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:30.671 11:31:01 -- nvmf/common.sh@7 -- # uname -s 00:07:30.671 11:31:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.671 11:31:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.671 11:31:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.671 11:31:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.671 11:31:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.671 11:31:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.671 11:31:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.671 11:31:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.671 11:31:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.671 11:31:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.671 11:31:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:07:30.671 11:31:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:07:30.671 11:31:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.671 11:31:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.671 11:31:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:30.671 11:31:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.671 11:31:01 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:30.671 11:31:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.671 11:31:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.671 11:31:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.671 11:31:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.671 11:31:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.671 11:31:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.671 11:31:01 -- paths/export.sh@5 -- # export PATH 00:07:30.671 11:31:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.671 11:31:01 -- nvmf/common.sh@47 -- # : 0 00:07:30.671 11:31:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:30.671 11:31:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:30.671 11:31:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.671 11:31:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.671 11:31:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.671 11:31:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:30.671 11:31:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:30.671 11:31:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:30.671 11:31:01 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:30.671 11:31:01 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:30.671 11:31:01 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:30.671 11:31:01 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:07:30.671 11:31:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.671 11:31:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:30.671 11:31:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:30.671 11:31:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:30.671 11:31:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.671 11:31:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:30.671 11:31:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.671 11:31:01 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:30.671 11:31:01 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:30.671 11:31:01 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:30.671 11:31:01 -- common/autotest_common.sh@10 -- # set +x 00:07:35.950 11:31:06 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:35.950 11:31:06 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:35.950 11:31:06 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:35.950 11:31:06 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:35.950 11:31:06 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:35.950 11:31:06 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:35.950 11:31:06 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:35.950 11:31:06 -- 
nvmf/common.sh@295 -- # net_devs=() 00:07:35.950 11:31:06 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:35.950 11:31:06 -- nvmf/common.sh@296 -- # e810=() 00:07:35.950 11:31:06 -- nvmf/common.sh@296 -- # local -ga e810 00:07:35.950 11:31:06 -- nvmf/common.sh@297 -- # x722=() 00:07:35.950 11:31:06 -- nvmf/common.sh@297 -- # local -ga x722 00:07:35.950 11:31:06 -- nvmf/common.sh@298 -- # mlx=() 00:07:35.950 11:31:06 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:35.950 11:31:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:35.950 11:31:06 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:35.950 11:31:06 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:35.950 11:31:06 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:35.950 11:31:06 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:35.950 11:31:06 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:35.950 11:31:06 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:35.950 11:31:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:35.950 11:31:06 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:35.950 11:31:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:35.951 11:31:06 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:35.951 11:31:06 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:35.951 11:31:06 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:35.951 11:31:06 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:35.951 11:31:06 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:35.951 11:31:06 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:35.951 11:31:06 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:35.951 11:31:06 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:35.951 11:31:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:35.951 11:31:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:07:35.951 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:07:35.951 11:31:06 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:35.951 11:31:06 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:35.951 11:31:06 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:35.951 11:31:06 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:35.951 11:31:06 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:35.951 11:31:06 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:35.951 11:31:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:35.951 11:31:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:07:35.951 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:07:35.951 11:31:06 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:35.951 11:31:06 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:35.951 11:31:06 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:35.951 11:31:06 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:35.951 11:31:06 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:35.951 11:31:06 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:35.951 11:31:06 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:35.951 11:31:06 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:35.951 11:31:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:35.951 
11:31:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.951 11:31:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:35.951 11:31:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.951 11:31:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:07:35.951 Found net devices under 0000:18:00.0: mlx_0_0 00:07:35.951 11:31:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.951 11:31:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:35.951 11:31:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.951 11:31:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:35.951 11:31:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.951 11:31:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:07:35.951 Found net devices under 0000:18:00.1: mlx_0_1 00:07:35.951 11:31:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.951 11:31:06 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:35.951 11:31:06 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:35.951 11:31:06 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:35.951 11:31:06 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:07:35.951 11:31:06 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:07:35.951 11:31:06 -- nvmf/common.sh@409 -- # rdma_device_init 00:07:35.951 11:31:06 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:07:35.951 11:31:06 -- nvmf/common.sh@58 -- # uname 00:07:35.951 11:31:06 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:35.951 11:31:06 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:35.951 11:31:06 -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:35.951 11:31:06 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:35.951 11:31:06 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:35.951 11:31:06 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:35.951 11:31:06 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:35.951 11:31:06 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:35.951 11:31:06 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:07:35.951 11:31:06 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:35.951 11:31:06 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:35.951 11:31:06 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:35.951 11:31:06 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:35.951 11:31:06 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:35.951 11:31:06 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:35.951 11:31:06 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:35.951 11:31:06 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:35.951 11:31:06 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:35.951 11:31:06 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:35.951 11:31:06 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:35.951 11:31:06 -- nvmf/common.sh@105 -- # continue 2 00:07:35.951 11:31:06 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:35.951 11:31:06 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:35.951 11:31:06 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:35.951 11:31:06 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:35.951 11:31:06 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:35.951 11:31:06 -- nvmf/common.sh@104 -- # 
echo mlx_0_1 00:07:35.951 11:31:06 -- nvmf/common.sh@105 -- # continue 2 00:07:35.951 11:31:06 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:35.951 11:31:06 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:35.951 11:31:06 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:35.951 11:31:06 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:35.951 11:31:06 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:35.951 11:31:06 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:35.951 11:31:06 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:35.951 11:31:06 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:35.951 11:31:06 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:35.951 28: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:35.951 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:07:35.951 altname enp24s0f0np0 00:07:35.951 altname ens785f0np0 00:07:35.951 inet 192.168.100.8/24 scope global mlx_0_0 00:07:35.951 valid_lft forever preferred_lft forever 00:07:36.211 11:31:06 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:36.211 11:31:06 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:36.211 11:31:06 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:36.211 11:31:06 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:36.211 11:31:06 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:36.211 11:31:06 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:36.211 11:31:06 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:36.211 11:31:06 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:36.211 11:31:06 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:36.211 29: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:36.211 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:07:36.211 altname enp24s0f1np1 00:07:36.211 altname ens785f1np1 00:07:36.211 inet 192.168.100.9/24 scope global mlx_0_1 00:07:36.211 valid_lft forever preferred_lft forever 00:07:36.211 11:31:06 -- nvmf/common.sh@411 -- # return 0 00:07:36.211 11:31:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:36.211 11:31:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:36.211 11:31:06 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:07:36.211 11:31:06 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:07:36.211 11:31:06 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:36.211 11:31:06 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:36.211 11:31:06 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:36.211 11:31:06 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:36.211 11:31:06 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:36.211 11:31:06 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:36.211 11:31:06 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:36.211 11:31:06 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:36.211 11:31:06 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:36.211 11:31:06 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:36.211 11:31:06 -- nvmf/common.sh@105 -- # continue 2 00:07:36.211 11:31:06 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:36.211 11:31:06 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:36.211 11:31:06 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:36.211 11:31:06 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:36.211 11:31:06 -- nvmf/common.sh@103 -- 
# [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:36.211 11:31:06 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:36.211 11:31:06 -- nvmf/common.sh@105 -- # continue 2 00:07:36.211 11:31:06 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:36.211 11:31:06 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:36.211 11:31:06 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:36.211 11:31:06 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:36.211 11:31:06 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:36.211 11:31:06 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:36.211 11:31:06 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:36.211 11:31:06 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:36.211 11:31:06 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:36.211 11:31:06 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:36.211 11:31:06 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:36.211 11:31:06 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:36.211 11:31:06 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:07:36.211 192.168.100.9' 00:07:36.211 11:31:06 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:07:36.211 192.168.100.9' 00:07:36.211 11:31:06 -- nvmf/common.sh@446 -- # head -n 1 00:07:36.211 11:31:06 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:36.211 11:31:06 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:07:36.211 192.168.100.9' 00:07:36.211 11:31:06 -- nvmf/common.sh@447 -- # head -n 1 00:07:36.211 11:31:06 -- nvmf/common.sh@447 -- # tail -n +2 00:07:36.211 11:31:06 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:36.211 11:31:06 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:07:36.211 11:31:06 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:36.211 11:31:06 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:07:36.211 11:31:06 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:07:36.211 11:31:06 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:07:36.211 11:31:06 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:36.211 11:31:06 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:36.211 11:31:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:36.211 11:31:06 -- common/autotest_common.sh@10 -- # set +x 00:07:36.211 ************************************ 00:07:36.211 START TEST nvmf_filesystem_no_in_capsule 00:07:36.211 ************************************ 00:07:36.211 11:31:06 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:07:36.211 11:31:06 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:36.211 11:31:06 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:36.211 11:31:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:36.211 11:31:06 -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:36.211 11:31:06 -- common/autotest_common.sh@10 -- # set +x 00:07:36.211 11:31:06 -- nvmf/common.sh@470 -- # nvmfpid=2915079 00:07:36.211 11:31:06 -- nvmf/common.sh@471 -- # waitforlisten 2915079 00:07:36.211 11:31:06 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:36.211 11:31:06 -- common/autotest_common.sh@827 -- # '[' -z 2915079 ']' 00:07:36.211 11:31:06 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.211 11:31:06 -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:36.211 11:31:06 -- common/autotest_common.sh@834 
-- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.211 11:31:06 -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:36.211 11:31:06 -- common/autotest_common.sh@10 -- # set +x 00:07:36.211 [2024-05-15 11:31:06.952081] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:07:36.211 [2024-05-15 11:31:06.952134] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:36.470 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.470 [2024-05-15 11:31:07.027211] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:36.470 [2024-05-15 11:31:07.119266] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:36.470 [2024-05-15 11:31:07.119315] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:36.470 [2024-05-15 11:31:07.119324] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:36.470 [2024-05-15 11:31:07.119333] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:36.470 [2024-05-15 11:31:07.119340] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:36.470 [2024-05-15 11:31:07.119429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.470 [2024-05-15 11:31:07.119516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.470 [2024-05-15 11:31:07.119593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:36.470 [2024-05-15 11:31:07.119594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.038 11:31:07 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:37.038 11:31:07 -- common/autotest_common.sh@860 -- # return 0 00:07:37.038 11:31:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:37.038 11:31:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:37.038 11:31:07 -- common/autotest_common.sh@10 -- # set +x 00:07:37.297 11:31:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:37.297 11:31:07 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:37.297 11:31:07 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:07:37.297 11:31:07 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.297 11:31:07 -- common/autotest_common.sh@10 -- # set +x 00:07:37.297 [2024-05-15 11:31:07.818173] rdma.c:2712:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:07:37.297 [2024-05-15 11:31:07.840164] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd2bf00/0xd303f0) succeed. 00:07:37.297 [2024-05-15 11:31:07.850790] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd2d540/0xd71a80) succeed. 
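Both mlx5 IB devices are up, so the app is ready for configuration. The bring-up traced above reduces to starting nvmf_tgt with the 0xF core mask and creating the RDMA transport with in-capsule data disabled (-c 0), which is what this test's no_in_capsule name refers to. A hedged sketch of the equivalent standalone commands, run from an SPDK checkout; using scripts/rpc.py as the rpc_cmd backend is an assumption, the flags are taken verbatim from the trace:

build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# poll the RPC socket until the target answers, as waitforlisten does
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0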
00:07:37.297 11:31:07 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.297 11:31:07 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:37.297 11:31:07 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.297 11:31:07 -- common/autotest_common.sh@10 -- # set +x 00:07:37.556 Malloc1 00:07:37.556 11:31:08 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.556 11:31:08 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:37.556 11:31:08 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.556 11:31:08 -- common/autotest_common.sh@10 -- # set +x 00:07:37.556 11:31:08 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.556 11:31:08 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:37.556 11:31:08 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.556 11:31:08 -- common/autotest_common.sh@10 -- # set +x 00:07:37.556 11:31:08 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.556 11:31:08 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:37.556 11:31:08 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.556 11:31:08 -- common/autotest_common.sh@10 -- # set +x 00:07:37.556 [2024-05-15 11:31:08.119156] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:37.556 [2024-05-15 11:31:08.119547] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:37.556 11:31:08 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.556 11:31:08 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:37.556 11:31:08 -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:37.556 11:31:08 -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:37.556 11:31:08 -- common/autotest_common.sh@1376 -- # local bs 00:07:37.556 11:31:08 -- common/autotest_common.sh@1377 -- # local nb 00:07:37.556 11:31:08 -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:37.556 11:31:08 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.556 11:31:08 -- common/autotest_common.sh@10 -- # set +x 00:07:37.556 11:31:08 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.556 11:31:08 -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:37.556 { 00:07:37.556 "name": "Malloc1", 00:07:37.556 "aliases": [ 00:07:37.556 "ec1ca259-baa6-4d36-9294-cf6608b89cf1" 00:07:37.556 ], 00:07:37.556 "product_name": "Malloc disk", 00:07:37.556 "block_size": 512, 00:07:37.556 "num_blocks": 1048576, 00:07:37.557 "uuid": "ec1ca259-baa6-4d36-9294-cf6608b89cf1", 00:07:37.557 "assigned_rate_limits": { 00:07:37.557 "rw_ios_per_sec": 0, 00:07:37.557 "rw_mbytes_per_sec": 0, 00:07:37.557 "r_mbytes_per_sec": 0, 00:07:37.557 "w_mbytes_per_sec": 0 00:07:37.557 }, 00:07:37.557 "claimed": true, 00:07:37.557 "claim_type": "exclusive_write", 00:07:37.557 "zoned": false, 00:07:37.557 "supported_io_types": { 00:07:37.557 "read": true, 00:07:37.557 "write": true, 00:07:37.557 "unmap": true, 00:07:37.557 "write_zeroes": true, 00:07:37.557 "flush": true, 00:07:37.557 "reset": true, 00:07:37.557 "compare": false, 00:07:37.557 "compare_and_write": false, 00:07:37.557 "abort": true, 00:07:37.557 "nvme_admin": false, 
00:07:37.557 "nvme_io": false 00:07:37.557 }, 00:07:37.557 "memory_domains": [ 00:07:37.557 { 00:07:37.557 "dma_device_id": "system", 00:07:37.557 "dma_device_type": 1 00:07:37.557 }, 00:07:37.557 { 00:07:37.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.557 "dma_device_type": 2 00:07:37.557 } 00:07:37.557 ], 00:07:37.557 "driver_specific": {} 00:07:37.557 } 00:07:37.557 ]' 00:07:37.557 11:31:08 -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:07:37.557 11:31:08 -- common/autotest_common.sh@1379 -- # bs=512 00:07:37.557 11:31:08 -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:37.557 11:31:08 -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:37.557 11:31:08 -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:37.557 11:31:08 -- common/autotest_common.sh@1384 -- # echo 512 00:07:37.557 11:31:08 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:37.557 11:31:08 -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:07:38.494 11:31:09 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:38.494 11:31:09 -- common/autotest_common.sh@1194 -- # local i=0 00:07:38.494 11:31:09 -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:38.494 11:31:09 -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:38.494 11:31:09 -- common/autotest_common.sh@1201 -- # sleep 2 00:07:41.026 11:31:11 -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:41.026 11:31:11 -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:41.026 11:31:11 -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:41.026 11:31:11 -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:41.026 11:31:11 -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:41.026 11:31:11 -- common/autotest_common.sh@1204 -- # return 0 00:07:41.026 11:31:11 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:41.026 11:31:11 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:41.026 11:31:11 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:41.026 11:31:11 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:41.026 11:31:11 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:41.026 11:31:11 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:41.026 11:31:11 -- setup/common.sh@80 -- # echo 536870912 00:07:41.026 11:31:11 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:41.026 11:31:11 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:41.026 11:31:11 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:41.026 11:31:11 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:41.026 11:31:11 -- target/filesystem.sh@69 -- # partprobe 00:07:41.026 11:31:11 -- target/filesystem.sh@70 -- # sleep 1 00:07:41.961 11:31:12 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:41.961 11:31:12 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:41.961 11:31:12 -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:41.961 11:31:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:41.961 11:31:12 -- common/autotest_common.sh@10 -- # set +x 00:07:41.961 ************************************ 
00:07:41.961 START TEST filesystem_ext4 00:07:41.961 ************************************ 00:07:41.961 11:31:12 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:41.961 11:31:12 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:41.961 11:31:12 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:41.961 11:31:12 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:41.961 11:31:12 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:41.961 11:31:12 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:41.961 11:31:12 -- common/autotest_common.sh@924 -- # local i=0 00:07:41.961 11:31:12 -- common/autotest_common.sh@925 -- # local force 00:07:41.961 11:31:12 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:41.961 11:31:12 -- common/autotest_common.sh@928 -- # force=-F 00:07:41.961 11:31:12 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:41.961 mke2fs 1.46.5 (30-Dec-2021) 00:07:41.961 Discarding device blocks: 0/522240 done 00:07:41.961 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:41.961 Filesystem UUID: bdf950fb-0e28-4297-ad0c-77a1b4206cdb 00:07:41.961 Superblock backups stored on blocks: 00:07:41.961 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:41.961 00:07:41.961 Allocating group tables: 0/64 done 00:07:41.961 Writing inode tables: 0/64 done 00:07:41.961 Creating journal (8192 blocks): done 00:07:41.961 Writing superblocks and filesystem accounting information: 0/64 done 00:07:41.961 00:07:41.961 11:31:12 -- common/autotest_common.sh@941 -- # return 0 00:07:41.961 11:31:12 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:41.961 11:31:12 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:41.961 11:31:12 -- target/filesystem.sh@25 -- # sync 00:07:41.961 11:31:12 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:41.961 11:31:12 -- target/filesystem.sh@27 -- # sync 00:07:41.961 11:31:12 -- target/filesystem.sh@29 -- # i=0 00:07:41.961 11:31:12 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:41.961 11:31:12 -- target/filesystem.sh@37 -- # kill -0 2915079 00:07:41.961 11:31:12 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:41.961 11:31:12 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:41.961 11:31:12 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:41.961 11:31:12 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:41.961 00:07:41.961 real 0m0.198s 00:07:41.961 user 0m0.038s 00:07:41.961 sys 0m0.066s 00:07:41.961 11:31:12 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:41.961 11:31:12 -- common/autotest_common.sh@10 -- # set +x 00:07:41.961 ************************************ 00:07:41.961 END TEST filesystem_ext4 00:07:41.961 ************************************ 00:07:42.220 11:31:12 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:42.220 11:31:12 -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:42.220 11:31:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:42.220 11:31:12 -- common/autotest_common.sh@10 -- # set +x 00:07:42.220 ************************************ 00:07:42.220 START TEST filesystem_btrfs 00:07:42.220 ************************************ 00:07:42.220 11:31:12 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:42.220 11:31:12 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:42.220 11:31:12 -- target/filesystem.sh@19 -- # 
nvme_name=nvme0n1 00:07:42.220 11:31:12 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:42.220 11:31:12 -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:42.220 11:31:12 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:42.220 11:31:12 -- common/autotest_common.sh@924 -- # local i=0 00:07:42.220 11:31:12 -- common/autotest_common.sh@925 -- # local force 00:07:42.220 11:31:12 -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:42.220 11:31:12 -- common/autotest_common.sh@930 -- # force=-f 00:07:42.220 11:31:12 -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:42.220 btrfs-progs v6.6.2 00:07:42.220 See https://btrfs.readthedocs.io for more information. 00:07:42.220 00:07:42.220 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:42.220 NOTE: several default settings have changed in version 5.15, please make sure 00:07:42.220 this does not affect your deployments: 00:07:42.220 - DUP for metadata (-m dup) 00:07:42.220 - enabled no-holes (-O no-holes) 00:07:42.220 - enabled free-space-tree (-R free-space-tree) 00:07:42.220 00:07:42.220 Label: (null) 00:07:42.220 UUID: 75f00e71-2df5-4b7c-a586-6924ad2d89c9 00:07:42.220 Node size: 16384 00:07:42.220 Sector size: 4096 00:07:42.220 Filesystem size: 510.00MiB 00:07:42.220 Block group profiles: 00:07:42.220 Data: single 8.00MiB 00:07:42.220 Metadata: DUP 32.00MiB 00:07:42.220 System: DUP 8.00MiB 00:07:42.220 SSD detected: yes 00:07:42.220 Zoned device: no 00:07:42.220 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:42.220 Runtime features: free-space-tree 00:07:42.220 Checksum: crc32c 00:07:42.220 Number of devices: 1 00:07:42.220 Devices: 00:07:42.220 ID SIZE PATH 00:07:42.220 1 510.00MiB /dev/nvme0n1p1 00:07:42.220 00:07:42.220 11:31:12 -- common/autotest_common.sh@941 -- # return 0 00:07:42.220 11:31:12 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:42.487 11:31:12 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:42.487 11:31:12 -- target/filesystem.sh@25 -- # sync 00:07:42.487 11:31:13 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:42.487 11:31:13 -- target/filesystem.sh@27 -- # sync 00:07:42.487 11:31:13 -- target/filesystem.sh@29 -- # i=0 00:07:42.487 11:31:13 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:42.487 11:31:13 -- target/filesystem.sh@37 -- # kill -0 2915079 00:07:42.487 11:31:13 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:42.487 11:31:13 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:42.487 11:31:13 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:42.487 11:31:13 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:42.487 00:07:42.487 real 0m0.264s 00:07:42.487 user 0m0.033s 00:07:42.487 sys 0m0.138s 00:07:42.487 11:31:13 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:42.487 11:31:13 -- common/autotest_common.sh@10 -- # set +x 00:07:42.487 ************************************ 00:07:42.487 END TEST filesystem_btrfs 00:07:42.487 ************************************ 00:07:42.487 11:31:13 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:42.487 11:31:13 -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:42.487 11:31:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:42.487 11:31:13 -- common/autotest_common.sh@10 -- # set +x 00:07:42.487 ************************************ 00:07:42.487 START TEST filesystem_xfs 00:07:42.487 
************************************ 00:07:42.487 11:31:13 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:42.487 11:31:13 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:42.487 11:31:13 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:42.487 11:31:13 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:42.487 11:31:13 -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:42.487 11:31:13 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:42.487 11:31:13 -- common/autotest_common.sh@924 -- # local i=0 00:07:42.487 11:31:13 -- common/autotest_common.sh@925 -- # local force 00:07:42.487 11:31:13 -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:42.487 11:31:13 -- common/autotest_common.sh@930 -- # force=-f 00:07:42.487 11:31:13 -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:42.750 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:42.750 = sectsz=512 attr=2, projid32bit=1 00:07:42.750 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:42.750 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:42.750 data = bsize=4096 blocks=130560, imaxpct=25 00:07:42.750 = sunit=0 swidth=0 blks 00:07:42.750 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:42.750 log =internal log bsize=4096 blocks=16384, version=2 00:07:42.750 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:42.750 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:42.750 Discarding blocks...Done. 00:07:42.750 11:31:13 -- common/autotest_common.sh@941 -- # return 0 00:07:42.750 11:31:13 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:42.750 11:31:13 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:42.750 11:31:13 -- target/filesystem.sh@25 -- # sync 00:07:42.750 11:31:13 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:42.750 11:31:13 -- target/filesystem.sh@27 -- # sync 00:07:42.750 11:31:13 -- target/filesystem.sh@29 -- # i=0 00:07:42.750 11:31:13 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:42.750 11:31:13 -- target/filesystem.sh@37 -- # kill -0 2915079 00:07:42.750 11:31:13 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:42.750 11:31:13 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:42.750 11:31:13 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:42.750 11:31:13 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:42.750 00:07:42.750 real 0m0.223s 00:07:42.750 user 0m0.031s 00:07:42.750 sys 0m0.074s 00:07:42.750 11:31:13 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:42.750 11:31:13 -- common/autotest_common.sh@10 -- # set +x 00:07:42.750 ************************************ 00:07:42.750 END TEST filesystem_xfs 00:07:42.750 ************************************ 00:07:42.750 11:31:13 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:42.750 11:31:13 -- target/filesystem.sh@93 -- # sync 00:07:42.750 11:31:13 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:43.687 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:43.687 11:31:14 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:43.687 11:31:14 -- common/autotest_common.sh@1215 -- # local i=0 00:07:43.687 11:31:14 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:43.687 11:31:14 -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:43.687 11:31:14 -- common/autotest_common.sh@1223 -- # lsblk -l -o 
NAME,SERIAL 00:07:43.687 11:31:14 -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:43.946 11:31:14 -- common/autotest_common.sh@1227 -- # return 0 00:07:43.946 11:31:14 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:43.946 11:31:14 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.946 11:31:14 -- common/autotest_common.sh@10 -- # set +x 00:07:43.946 11:31:14 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.946 11:31:14 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:43.946 11:31:14 -- target/filesystem.sh@101 -- # killprocess 2915079 00:07:43.946 11:31:14 -- common/autotest_common.sh@946 -- # '[' -z 2915079 ']' 00:07:43.946 11:31:14 -- common/autotest_common.sh@950 -- # kill -0 2915079 00:07:43.946 11:31:14 -- common/autotest_common.sh@951 -- # uname 00:07:43.946 11:31:14 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:43.946 11:31:14 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2915079 00:07:43.946 11:31:14 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:43.946 11:31:14 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:43.946 11:31:14 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2915079' 00:07:43.946 killing process with pid 2915079 00:07:43.946 11:31:14 -- common/autotest_common.sh@965 -- # kill 2915079 00:07:43.946 [2024-05-15 11:31:14.512162] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:43.946 11:31:14 -- common/autotest_common.sh@970 -- # wait 2915079 00:07:43.946 [2024-05-15 11:31:14.565953] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:07:44.515 11:31:14 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:44.515 00:07:44.515 real 0m8.081s 00:07:44.515 user 0m31.349s 00:07:44.515 sys 0m1.248s 00:07:44.515 11:31:14 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:44.515 11:31:14 -- common/autotest_common.sh@10 -- # set +x 00:07:44.515 ************************************ 00:07:44.515 END TEST nvmf_filesystem_no_in_capsule 00:07:44.515 ************************************ 00:07:44.515 11:31:15 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:44.515 11:31:15 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:44.515 11:31:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:44.515 11:31:15 -- common/autotest_common.sh@10 -- # set +x 00:07:44.515 ************************************ 00:07:44.515 START TEST nvmf_filesystem_in_capsule 00:07:44.515 ************************************ 00:07:44.515 11:31:15 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:07:44.515 11:31:15 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:44.515 11:31:15 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:44.515 11:31:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:44.515 11:31:15 -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:44.515 11:31:15 -- common/autotest_common.sh@10 -- # set +x 00:07:44.515 11:31:15 -- nvmf/common.sh@470 -- # nvmfpid=2916387 00:07:44.515 11:31:15 -- nvmf/common.sh@471 -- # waitforlisten 2916387 00:07:44.515 11:31:15 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:44.515 
11:31:15 -- common/autotest_common.sh@827 -- # '[' -z 2916387 ']' 00:07:44.515 11:31:15 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.515 11:31:15 -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:44.515 11:31:15 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.515 11:31:15 -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:44.515 11:31:15 -- common/autotest_common.sh@10 -- # set +x 00:07:44.515 [2024-05-15 11:31:15.131881] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:07:44.515 [2024-05-15 11:31:15.131937] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.515 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.515 [2024-05-15 11:31:15.203473] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:44.775 [2024-05-15 11:31:15.290548] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:44.775 [2024-05-15 11:31:15.290593] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:44.775 [2024-05-15 11:31:15.290602] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:44.775 [2024-05-15 11:31:15.290611] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:44.775 [2024-05-15 11:31:15.290618] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:44.775 [2024-05-15 11:31:15.290677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.775 [2024-05-15 11:31:15.290764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.775 [2024-05-15 11:31:15.290841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.775 [2024-05-15 11:31:15.290842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.344 11:31:15 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:45.344 11:31:15 -- common/autotest_common.sh@860 -- # return 0 00:07:45.344 11:31:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:45.344 11:31:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:45.344 11:31:15 -- common/autotest_common.sh@10 -- # set +x 00:07:45.344 11:31:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:45.344 11:31:16 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:45.344 11:31:16 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:07:45.344 11:31:16 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.344 11:31:16 -- common/autotest_common.sh@10 -- # set +x 00:07:45.344 [2024-05-15 11:31:16.031502] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ddbf00/0x1de03f0) succeed. 00:07:45.344 [2024-05-15 11:31:16.042284] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ddd540/0x1e21a80) succeed. 
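The second target comes up the same way; the only setup difference is that nvmf_create_transport is issued with -c 4096, raising the in-capsule data size to 4096 bytes. The subsystem wiring and the per-filesystem exercise that follow repeat the first run, and the filesystem.sh cycle traced earlier condenses to the sketch below; fs_cycle is an illustrative name rather than a helper from the scripts, and the mkfs force-flag selection is simplified from make_filesystem:

scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096

fs_cycle() {                          # mkfs, mount, touch/sync/rm, umount
  local fstype=$1 dev=/dev/nvme0n1p1
  local force=-f
  [ "$fstype" = ext4 ] && force=-F    # ext4 takes -F, btrfs/xfs take -f
  "mkfs.$fstype" "$force" "$dev"
  mount "$dev" /mnt/device
  touch /mnt/device/aaa && sync
  rm /mnt/device/aaa && sync
  umount /mnt/device
  kill -0 "$nvmfpid"                  # the target must still be running
}
fs_cycle ext4; fs_cycle btrfs; fs_cycle xfs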
00:07:45.605 11:31:16 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.605 11:31:16 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:45.605 11:31:16 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.605 11:31:16 -- common/autotest_common.sh@10 -- # set +x 00:07:45.605 Malloc1 00:07:45.605 11:31:16 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.605 11:31:16 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:45.605 11:31:16 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.605 11:31:16 -- common/autotest_common.sh@10 -- # set +x 00:07:45.605 11:31:16 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.605 11:31:16 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:45.605 11:31:16 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.605 11:31:16 -- common/autotest_common.sh@10 -- # set +x 00:07:45.605 11:31:16 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.605 11:31:16 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:45.605 11:31:16 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.605 11:31:16 -- common/autotest_common.sh@10 -- # set +x 00:07:45.605 [2024-05-15 11:31:16.329415] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:45.605 [2024-05-15 11:31:16.329852] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:45.605 11:31:16 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.605 11:31:16 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:45.605 11:31:16 -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:45.605 11:31:16 -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:45.605 11:31:16 -- common/autotest_common.sh@1376 -- # local bs 00:07:45.605 11:31:16 -- common/autotest_common.sh@1377 -- # local nb 00:07:45.605 11:31:16 -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:45.605 11:31:16 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.605 11:31:16 -- common/autotest_common.sh@10 -- # set +x 00:07:45.605 11:31:16 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.605 11:31:16 -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:45.605 { 00:07:45.605 "name": "Malloc1", 00:07:45.605 "aliases": [ 00:07:45.605 "6c7335f1-bcfa-4a2c-be93-f1cc051bb22b" 00:07:45.605 ], 00:07:45.605 "product_name": "Malloc disk", 00:07:45.605 "block_size": 512, 00:07:45.605 "num_blocks": 1048576, 00:07:45.605 "uuid": "6c7335f1-bcfa-4a2c-be93-f1cc051bb22b", 00:07:45.605 "assigned_rate_limits": { 00:07:45.605 "rw_ios_per_sec": 0, 00:07:45.605 "rw_mbytes_per_sec": 0, 00:07:45.605 "r_mbytes_per_sec": 0, 00:07:45.605 "w_mbytes_per_sec": 0 00:07:45.605 }, 00:07:45.605 "claimed": true, 00:07:45.605 "claim_type": "exclusive_write", 00:07:45.605 "zoned": false, 00:07:45.605 "supported_io_types": { 00:07:45.605 "read": true, 00:07:45.605 "write": true, 00:07:45.605 "unmap": true, 00:07:45.605 "write_zeroes": true, 00:07:45.605 "flush": true, 00:07:45.605 "reset": true, 00:07:45.605 "compare": false, 00:07:45.605 "compare_and_write": false, 00:07:45.605 "abort": true, 00:07:45.605 "nvme_admin": false, 
00:07:45.605 "nvme_io": false 00:07:45.605 }, 00:07:45.605 "memory_domains": [ 00:07:45.605 { 00:07:45.605 "dma_device_id": "system", 00:07:45.605 "dma_device_type": 1 00:07:45.605 }, 00:07:45.605 { 00:07:45.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.605 "dma_device_type": 2 00:07:45.605 } 00:07:45.605 ], 00:07:45.605 "driver_specific": {} 00:07:45.605 } 00:07:45.605 ]' 00:07:45.605 11:31:16 -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:07:45.866 11:31:16 -- common/autotest_common.sh@1379 -- # bs=512 00:07:45.866 11:31:16 -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:45.866 11:31:16 -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:45.866 11:31:16 -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:45.866 11:31:16 -- common/autotest_common.sh@1384 -- # echo 512 00:07:45.866 11:31:16 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:45.866 11:31:16 -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:07:46.805 11:31:17 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:46.805 11:31:17 -- common/autotest_common.sh@1194 -- # local i=0 00:07:46.805 11:31:17 -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:46.805 11:31:17 -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:46.805 11:31:17 -- common/autotest_common.sh@1201 -- # sleep 2 00:07:48.711 11:31:19 -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:48.711 11:31:19 -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:48.711 11:31:19 -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:48.711 11:31:19 -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:48.711 11:31:19 -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:48.711 11:31:19 -- common/autotest_common.sh@1204 -- # return 0 00:07:48.711 11:31:19 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:48.711 11:31:19 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:48.711 11:31:19 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:48.711 11:31:19 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:48.711 11:31:19 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:48.711 11:31:19 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:48.711 11:31:19 -- setup/common.sh@80 -- # echo 536870912 00:07:48.711 11:31:19 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:48.711 11:31:19 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:48.971 11:31:19 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:48.971 11:31:19 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:48.971 11:31:19 -- target/filesystem.sh@69 -- # partprobe 00:07:48.971 11:31:19 -- target/filesystem.sh@70 -- # sleep 1 00:07:49.908 11:31:20 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:49.908 11:31:20 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:49.908 11:31:20 -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:49.908 11:31:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:49.908 11:31:20 -- common/autotest_common.sh@10 -- # set +x 00:07:50.167 
************************************ 00:07:50.167 START TEST filesystem_in_capsule_ext4 00:07:50.167 ************************************ 00:07:50.167 11:31:20 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:50.167 11:31:20 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:50.167 11:31:20 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:50.167 11:31:20 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:50.167 11:31:20 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:50.167 11:31:20 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:50.167 11:31:20 -- common/autotest_common.sh@924 -- # local i=0 00:07:50.167 11:31:20 -- common/autotest_common.sh@925 -- # local force 00:07:50.167 11:31:20 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:50.167 11:31:20 -- common/autotest_common.sh@928 -- # force=-F 00:07:50.167 11:31:20 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:50.167 mke2fs 1.46.5 (30-Dec-2021) 00:07:50.167 Discarding device blocks: 0/522240 done 00:07:50.167 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:50.167 Filesystem UUID: 4cdc6175-2a33-41ac-8d4c-c4222409a347 00:07:50.167 Superblock backups stored on blocks: 00:07:50.167 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:50.167 00:07:50.167 Allocating group tables: 0/64 done 00:07:50.167 Writing inode tables: 0/64 done 00:07:50.167 Creating journal (8192 blocks): done 00:07:50.167 Writing superblocks and filesystem accounting information: 0/64 done 00:07:50.167 00:07:50.167 11:31:20 -- common/autotest_common.sh@941 -- # return 0 00:07:50.167 11:31:20 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:50.167 11:31:20 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:50.167 11:31:20 -- target/filesystem.sh@25 -- # sync 00:07:50.167 11:31:20 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:50.167 11:31:20 -- target/filesystem.sh@27 -- # sync 00:07:50.167 11:31:20 -- target/filesystem.sh@29 -- # i=0 00:07:50.167 11:31:20 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:50.167 11:31:20 -- target/filesystem.sh@37 -- # kill -0 2916387 00:07:50.167 11:31:20 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:50.167 11:31:20 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:50.167 11:31:20 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:50.167 11:31:20 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:50.167 00:07:50.167 real 0m0.185s 00:07:50.167 user 0m0.025s 00:07:50.167 sys 0m0.069s 00:07:50.167 11:31:20 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:50.167 11:31:20 -- common/autotest_common.sh@10 -- # set +x 00:07:50.167 ************************************ 00:07:50.167 END TEST filesystem_in_capsule_ext4 00:07:50.167 ************************************ 00:07:50.167 11:31:20 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:50.167 11:31:20 -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:50.167 11:31:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:50.167 11:31:20 -- common/autotest_common.sh@10 -- # set +x 00:07:50.425 ************************************ 00:07:50.425 START TEST filesystem_in_capsule_btrfs 00:07:50.425 ************************************ 00:07:50.425 11:31:20 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:50.425 11:31:20 -- 
target/filesystem.sh@18 -- # fstype=btrfs 00:07:50.425 11:31:20 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:50.425 11:31:20 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:50.425 11:31:20 -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:50.425 11:31:20 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:50.425 11:31:20 -- common/autotest_common.sh@924 -- # local i=0 00:07:50.425 11:31:20 -- common/autotest_common.sh@925 -- # local force 00:07:50.425 11:31:20 -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:50.425 11:31:20 -- common/autotest_common.sh@930 -- # force=-f 00:07:50.425 11:31:20 -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:50.425 btrfs-progs v6.6.2 00:07:50.425 See https://btrfs.readthedocs.io for more information. 00:07:50.425 00:07:50.425 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:50.425 NOTE: several default settings have changed in version 5.15, please make sure 00:07:50.425 this does not affect your deployments: 00:07:50.425 - DUP for metadata (-m dup) 00:07:50.425 - enabled no-holes (-O no-holes) 00:07:50.425 - enabled free-space-tree (-R free-space-tree) 00:07:50.425 00:07:50.425 Label: (null) 00:07:50.425 UUID: 50ab79dc-1105-4aa1-b938-ac120b0b0c54 00:07:50.425 Node size: 16384 00:07:50.425 Sector size: 4096 00:07:50.425 Filesystem size: 510.00MiB 00:07:50.425 Block group profiles: 00:07:50.425 Data: single 8.00MiB 00:07:50.425 Metadata: DUP 32.00MiB 00:07:50.425 System: DUP 8.00MiB 00:07:50.425 SSD detected: yes 00:07:50.425 Zoned device: no 00:07:50.425 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:50.425 Runtime features: free-space-tree 00:07:50.425 Checksum: crc32c 00:07:50.425 Number of devices: 1 00:07:50.425 Devices: 00:07:50.425 ID SIZE PATH 00:07:50.425 1 510.00MiB /dev/nvme0n1p1 00:07:50.425 00:07:50.425 11:31:21 -- common/autotest_common.sh@941 -- # return 0 00:07:50.425 11:31:21 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:50.425 11:31:21 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:50.425 11:31:21 -- target/filesystem.sh@25 -- # sync 00:07:50.425 11:31:21 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:50.425 11:31:21 -- target/filesystem.sh@27 -- # sync 00:07:50.425 11:31:21 -- target/filesystem.sh@29 -- # i=0 00:07:50.425 11:31:21 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:50.683 11:31:21 -- target/filesystem.sh@37 -- # kill -0 2916387 00:07:50.683 11:31:21 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:50.683 11:31:21 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:50.683 11:31:21 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:50.683 11:31:21 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:50.683 00:07:50.683 real 0m0.273s 00:07:50.683 user 0m0.030s 00:07:50.683 sys 0m0.135s 00:07:50.683 11:31:21 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:50.683 11:31:21 -- common/autotest_common.sh@10 -- # set +x 00:07:50.683 ************************************ 00:07:50.683 END TEST filesystem_in_capsule_btrfs 00:07:50.683 ************************************ 00:07:50.683 11:31:21 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:50.683 11:31:21 -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:50.683 11:31:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:50.683 11:31:21 -- common/autotest_common.sh@10 -- 
# set +x 00:07:50.683 ************************************ 00:07:50.683 START TEST filesystem_in_capsule_xfs 00:07:50.683 ************************************ 00:07:50.683 11:31:21 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:50.683 11:31:21 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:50.683 11:31:21 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:50.683 11:31:21 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:50.683 11:31:21 -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:50.683 11:31:21 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:50.683 11:31:21 -- common/autotest_common.sh@924 -- # local i=0 00:07:50.683 11:31:21 -- common/autotest_common.sh@925 -- # local force 00:07:50.683 11:31:21 -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:50.683 11:31:21 -- common/autotest_common.sh@930 -- # force=-f 00:07:50.683 11:31:21 -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:50.683 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:50.683 = sectsz=512 attr=2, projid32bit=1 00:07:50.683 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:50.683 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:50.683 data = bsize=4096 blocks=130560, imaxpct=25 00:07:50.683 = sunit=0 swidth=0 blks 00:07:50.683 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:50.683 log =internal log bsize=4096 blocks=16384, version=2 00:07:50.683 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:50.683 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:50.684 Discarding blocks...Done. 00:07:50.684 11:31:21 -- common/autotest_common.sh@941 -- # return 0 00:07:50.684 11:31:21 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:50.942 11:31:21 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:50.942 11:31:21 -- target/filesystem.sh@25 -- # sync 00:07:50.942 11:31:21 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:50.942 11:31:21 -- target/filesystem.sh@27 -- # sync 00:07:50.942 11:31:21 -- target/filesystem.sh@29 -- # i=0 00:07:50.942 11:31:21 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:50.942 11:31:21 -- target/filesystem.sh@37 -- # kill -0 2916387 00:07:50.942 11:31:21 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:50.942 11:31:21 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:50.942 11:31:21 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:50.942 11:31:21 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:50.942 00:07:50.942 real 0m0.209s 00:07:50.942 user 0m0.025s 00:07:50.942 sys 0m0.085s 00:07:50.942 11:31:21 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:50.942 11:31:21 -- common/autotest_common.sh@10 -- # set +x 00:07:50.942 ************************************ 00:07:50.942 END TEST filesystem_in_capsule_xfs 00:07:50.942 ************************************ 00:07:50.942 11:31:21 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:50.942 11:31:21 -- target/filesystem.sh@93 -- # sync 00:07:50.942 11:31:21 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:51.879 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:51.879 11:31:22 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:51.879 11:31:22 -- common/autotest_common.sh@1215 -- # local i=0 00:07:51.879 11:31:22 -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:51.879 11:31:22 -- 
common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:51.879 11:31:22 -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:51.879 11:31:22 -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:51.879 11:31:22 -- common/autotest_common.sh@1227 -- # return 0 00:07:51.879 11:31:22 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:51.879 11:31:22 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.879 11:31:22 -- common/autotest_common.sh@10 -- # set +x 00:07:51.879 11:31:22 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.879 11:31:22 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:51.879 11:31:22 -- target/filesystem.sh@101 -- # killprocess 2916387 00:07:51.879 11:31:22 -- common/autotest_common.sh@946 -- # '[' -z 2916387 ']' 00:07:51.879 11:31:22 -- common/autotest_common.sh@950 -- # kill -0 2916387 00:07:51.879 11:31:22 -- common/autotest_common.sh@951 -- # uname 00:07:51.879 11:31:22 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:51.879 11:31:22 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2916387 00:07:52.138 11:31:22 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:52.138 11:31:22 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:52.138 11:31:22 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2916387' 00:07:52.138 killing process with pid 2916387 00:07:52.138 11:31:22 -- common/autotest_common.sh@965 -- # kill 2916387 00:07:52.138 [2024-05-15 11:31:22.653855] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:52.138 11:31:22 -- common/autotest_common.sh@970 -- # wait 2916387 00:07:52.138 [2024-05-15 11:31:22.746529] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:07:52.398 11:31:23 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:52.398 00:07:52.398 real 0m8.064s 00:07:52.398 user 0m31.225s 00:07:52.398 sys 0m1.275s 00:07:52.398 11:31:23 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:52.398 11:31:23 -- common/autotest_common.sh@10 -- # set +x 00:07:52.398 ************************************ 00:07:52.398 END TEST nvmf_filesystem_in_capsule 00:07:52.398 ************************************ 00:07:52.656 11:31:23 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:52.657 11:31:23 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:52.657 11:31:23 -- nvmf/common.sh@117 -- # sync 00:07:52.657 11:31:23 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:52.657 11:31:23 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:52.657 11:31:23 -- nvmf/common.sh@120 -- # set +e 00:07:52.657 11:31:23 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:52.657 11:31:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:52.657 rmmod nvme_rdma 00:07:52.657 rmmod nvme_fabrics 00:07:52.657 11:31:23 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:52.657 11:31:23 -- nvmf/common.sh@124 -- # set -e 00:07:52.657 11:31:23 -- nvmf/common.sh@125 -- # return 0 00:07:52.657 11:31:23 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:07:52.657 11:31:23 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:52.657 11:31:23 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:07:52.657 00:07:52.657 real 0m22.412s 00:07:52.657 user 1m4.339s 00:07:52.657 sys 0m7.218s 00:07:52.657 11:31:23 -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:07:52.657 11:31:23 -- common/autotest_common.sh@10 -- # set +x 00:07:52.657 ************************************ 00:07:52.657 END TEST nvmf_filesystem 00:07:52.657 ************************************ 00:07:52.657 11:31:23 -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:07:52.657 11:31:23 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:52.657 11:31:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:52.657 11:31:23 -- common/autotest_common.sh@10 -- # set +x 00:07:52.657 ************************************ 00:07:52.657 START TEST nvmf_target_discovery 00:07:52.657 ************************************ 00:07:52.657 11:31:23 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:07:52.657 * Looking for test storage... 00:07:52.916 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:52.916 11:31:23 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:52.916 11:31:23 -- nvmf/common.sh@7 -- # uname -s 00:07:52.916 11:31:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:52.916 11:31:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:52.916 11:31:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:52.916 11:31:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:52.916 11:31:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:52.916 11:31:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:52.916 11:31:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:52.916 11:31:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:52.916 11:31:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:52.916 11:31:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:52.916 11:31:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:07:52.916 11:31:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:07:52.916 11:31:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:52.916 11:31:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:52.916 11:31:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:52.916 11:31:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:52.916 11:31:23 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:52.916 11:31:23 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:52.916 11:31:23 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:52.916 11:31:23 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:52.916 11:31:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.916 11:31:23 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.916 11:31:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.916 11:31:23 -- paths/export.sh@5 -- # export PATH 00:07:52.916 11:31:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.916 11:31:23 -- nvmf/common.sh@47 -- # : 0 00:07:52.916 11:31:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:52.916 11:31:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:52.916 11:31:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:52.916 11:31:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:52.916 11:31:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:52.916 11:31:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:52.916 11:31:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:52.916 11:31:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:52.916 11:31:23 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:52.916 11:31:23 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:52.916 11:31:23 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:52.916 11:31:23 -- target/discovery.sh@15 -- # hash nvme 00:07:52.916 11:31:23 -- target/discovery.sh@20 -- # nvmftestinit 00:07:52.916 11:31:23 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:07:52.916 11:31:23 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:52.916 11:31:23 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:52.916 11:31:23 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:52.916 11:31:23 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:52.916 11:31:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.916 11:31:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:52.916 11:31:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.916 11:31:23 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:52.916 11:31:23 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:52.916 11:31:23 -- nvmf/common.sh@285 -- # xtrace_disable 
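Before the PCI scan that follows, nvmf/common.sh has already fixed the host identity: nvme gen-hostnqn emits nqn.2014-08.org.nvmexpress:uuid:<uuid>, and the same UUID is reused as the host ID in every later nvme discover / nvme connect call. A minimal bash sketch of that derivation — the ${...##*:} extraction is an assumption about how common.sh strips the prefix; only nvme gen-hostnqn and the NVME_HOST array appear verbatim in the trace:

  #!/usr/bin/env bash
  # Derive the host NQN once; reuse its UUID suffix as the host ID.
  NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumed: keep everything after the last ':', i.e. the UUID
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  # Later discovery/connect calls pass the pair, as in the trace:
  #   nvme discover "${NVME_HOST[@]}" -t rdma -a 192.168.100.8 -s 4420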
00:07:52.916 11:31:23 -- common/autotest_common.sh@10 -- # set +x 00:07:59.488 11:31:29 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:59.488 11:31:29 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:59.488 11:31:29 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:59.488 11:31:29 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:59.488 11:31:29 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:59.488 11:31:29 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:59.488 11:31:29 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:59.488 11:31:29 -- nvmf/common.sh@295 -- # net_devs=() 00:07:59.488 11:31:29 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:59.488 11:31:29 -- nvmf/common.sh@296 -- # e810=() 00:07:59.488 11:31:29 -- nvmf/common.sh@296 -- # local -ga e810 00:07:59.488 11:31:29 -- nvmf/common.sh@297 -- # x722=() 00:07:59.488 11:31:29 -- nvmf/common.sh@297 -- # local -ga x722 00:07:59.488 11:31:29 -- nvmf/common.sh@298 -- # mlx=() 00:07:59.488 11:31:29 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:59.488 11:31:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:59.488 11:31:29 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:59.488 11:31:29 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:59.488 11:31:29 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:59.488 11:31:29 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:59.488 11:31:29 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:59.488 11:31:29 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:59.488 11:31:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:59.488 11:31:29 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:59.488 11:31:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:59.488 11:31:29 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:59.488 11:31:29 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:59.488 11:31:29 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:59.488 11:31:29 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:59.488 11:31:29 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:59.488 11:31:29 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:59.488 11:31:29 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:59.488 11:31:29 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:59.488 11:31:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:59.488 11:31:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:07:59.488 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:07:59.488 11:31:29 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:59.488 11:31:29 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:59.488 11:31:29 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:59.488 11:31:29 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:59.488 11:31:29 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:59.488 11:31:29 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:59.488 11:31:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:59.488 11:31:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:07:59.488 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:07:59.488 11:31:29 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:59.488 11:31:29 -- 
nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:59.488 11:31:29 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:59.488 11:31:29 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:59.488 11:31:29 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:59.488 11:31:29 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:59.488 11:31:29 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:59.488 11:31:29 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:59.488 11:31:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:59.488 11:31:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.488 11:31:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:59.488 11:31:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.488 11:31:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:07:59.488 Found net devices under 0000:18:00.0: mlx_0_0 00:07:59.488 11:31:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.488 11:31:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:59.488 11:31:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.488 11:31:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:59.488 11:31:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.488 11:31:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:07:59.488 Found net devices under 0000:18:00.1: mlx_0_1 00:07:59.488 11:31:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.488 11:31:29 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:59.488 11:31:29 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:59.488 11:31:29 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:59.488 11:31:29 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:07:59.488 11:31:29 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:07:59.488 11:31:29 -- nvmf/common.sh@409 -- # rdma_device_init 00:07:59.488 11:31:29 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:07:59.488 11:31:29 -- nvmf/common.sh@58 -- # uname 00:07:59.488 11:31:29 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:59.488 11:31:29 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:59.488 11:31:29 -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:59.488 11:31:29 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:59.488 11:31:29 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:59.488 11:31:29 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:59.488 11:31:29 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:59.488 11:31:29 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:59.488 11:31:29 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:07:59.488 11:31:29 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:59.488 11:31:29 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:59.488 11:31:29 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:59.488 11:31:29 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:59.488 11:31:29 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:59.488 11:31:29 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:59.488 11:31:29 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:59.488 11:31:29 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:59.488 11:31:29 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:59.488 11:31:29 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:59.488 
11:31:29 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:59.488 11:31:29 -- nvmf/common.sh@105 -- # continue 2 00:07:59.488 11:31:29 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:59.488 11:31:29 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:59.488 11:31:29 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:59.488 11:31:29 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:59.488 11:31:29 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:59.488 11:31:29 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:59.488 11:31:29 -- nvmf/common.sh@105 -- # continue 2 00:07:59.488 11:31:29 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:59.488 11:31:29 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:59.488 11:31:29 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:59.488 11:31:29 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:59.488 11:31:29 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:59.488 11:31:29 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:59.488 11:31:29 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:59.488 11:31:29 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:59.488 11:31:29 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:59.489 28: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:59.489 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:07:59.489 altname enp24s0f0np0 00:07:59.489 altname ens785f0np0 00:07:59.489 inet 192.168.100.8/24 scope global mlx_0_0 00:07:59.489 valid_lft forever preferred_lft forever 00:07:59.489 11:31:29 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:59.489 11:31:29 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:59.489 11:31:29 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:59.489 11:31:29 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:59.489 11:31:29 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:59.489 11:31:29 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:59.489 11:31:29 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:59.489 11:31:29 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:59.489 11:31:29 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:59.489 29: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:59.489 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:07:59.489 altname enp24s0f1np1 00:07:59.489 altname ens785f1np1 00:07:59.489 inet 192.168.100.9/24 scope global mlx_0_1 00:07:59.489 valid_lft forever preferred_lft forever 00:07:59.489 11:31:29 -- nvmf/common.sh@411 -- # return 0 00:07:59.489 11:31:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:59.489 11:31:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:59.489 11:31:29 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:07:59.489 11:31:29 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:07:59.489 11:31:29 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:59.489 11:31:29 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:59.489 11:31:29 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:59.489 11:31:29 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:59.489 11:31:29 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:59.489 11:31:29 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:59.489 11:31:29 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:59.489 11:31:29 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
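The per-interface address lookup traced above is a plain ip/awk/cut pipeline. Pulled out as a standalone helper — the commands match the get_ip_address trace exactly; only the function wrapper and the example calls are added:

  # Return the IPv4 address bound to an (RDMA) interface, as the harness does.
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # -> 192.168.100.8 on this node
  get_ip_address mlx_0_1   # -> 192.168.100.9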
00:07:59.489 11:31:29 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:59.489 11:31:29 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:59.489 11:31:29 -- nvmf/common.sh@105 -- # continue 2 00:07:59.489 11:31:29 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:59.489 11:31:29 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:59.489 11:31:29 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:59.489 11:31:29 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:59.489 11:31:29 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:59.489 11:31:29 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:59.489 11:31:29 -- nvmf/common.sh@105 -- # continue 2 00:07:59.489 11:31:29 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:59.489 11:31:29 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:59.489 11:31:29 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:59.489 11:31:29 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:59.489 11:31:29 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:59.489 11:31:29 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:59.489 11:31:29 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:59.489 11:31:29 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:59.489 11:31:29 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:59.489 11:31:29 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:59.489 11:31:29 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:59.489 11:31:29 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:59.489 11:31:29 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:07:59.489 192.168.100.9' 00:07:59.489 11:31:29 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:07:59.489 192.168.100.9' 00:07:59.489 11:31:29 -- nvmf/common.sh@446 -- # head -n 1 00:07:59.489 11:31:29 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:59.489 11:31:29 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:07:59.489 192.168.100.9' 00:07:59.489 11:31:29 -- nvmf/common.sh@447 -- # tail -n +2 00:07:59.489 11:31:29 -- nvmf/common.sh@447 -- # head -n 1 00:07:59.489 11:31:29 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:59.489 11:31:29 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:07:59.489 11:31:29 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:59.489 11:31:29 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:07:59.489 11:31:29 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:07:59.489 11:31:29 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:07:59.489 11:31:29 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:59.489 11:31:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:59.489 11:31:29 -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:59.489 11:31:29 -- common/autotest_common.sh@10 -- # set +x 00:07:59.489 11:31:29 -- nvmf/common.sh@470 -- # nvmfpid=2920400 00:07:59.489 11:31:29 -- nvmf/common.sh@471 -- # waitforlisten 2920400 00:07:59.489 11:31:29 -- common/autotest_common.sh@827 -- # '[' -z 2920400 ']' 00:07:59.489 11:31:29 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.489 11:31:29 -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:59.489 11:31:29 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:59.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.489 11:31:29 -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:59.489 11:31:29 -- common/autotest_common.sh@10 -- # set +x 00:07:59.489 11:31:29 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:59.489 [2024-05-15 11:31:29.485616] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:07:59.489 [2024-05-15 11:31:29.485674] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:59.489 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.489 [2024-05-15 11:31:29.557211] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:59.489 [2024-05-15 11:31:29.645478] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:59.489 [2024-05-15 11:31:29.645519] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:59.489 [2024-05-15 11:31:29.645529] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:59.489 [2024-05-15 11:31:29.645538] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:59.489 [2024-05-15 11:31:29.645546] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:59.489 [2024-05-15 11:31:29.645600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.489 [2024-05-15 11:31:29.645685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:59.489 [2024-05-15 11:31:29.645770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:59.489 [2024-05-15 11:31:29.645772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.749 11:31:30 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:59.749 11:31:30 -- common/autotest_common.sh@860 -- # return 0 00:07:59.749 11:31:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:59.749 11:31:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:59.749 11:31:30 -- common/autotest_common.sh@10 -- # set +x 00:07:59.749 11:31:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:59.749 11:31:30 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:59.749 11:31:30 -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.749 11:31:30 -- common/autotest_common.sh@10 -- # set +x 00:07:59.749 [2024-05-15 11:31:30.368175] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b30f00/0x1b353f0) succeed. 00:07:59.749 [2024-05-15 11:31:30.378588] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b32540/0x1b76a80) succeed. 
00:07:59.749 11:31:30 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.749 11:31:30 -- target/discovery.sh@26 -- # seq 1 4 00:08:00.008 11:31:30 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:00.008 11:31:30 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:00.008 11:31:30 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.008 11:31:30 -- common/autotest_common.sh@10 -- # set +x 00:08:00.008 Null1 00:08:00.008 11:31:30 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.008 11:31:30 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:00.008 11:31:30 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.008 11:31:30 -- common/autotest_common.sh@10 -- # set +x 00:08:00.008 11:31:30 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.008 11:31:30 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:00.008 11:31:30 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.008 11:31:30 -- common/autotest_common.sh@10 -- # set +x 00:08:00.008 11:31:30 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.008 11:31:30 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:00.008 11:31:30 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.008 11:31:30 -- common/autotest_common.sh@10 -- # set +x 00:08:00.008 [2024-05-15 11:31:30.554919] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:00.008 [2024-05-15 11:31:30.555297] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:00.008 11:31:30 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.008 11:31:30 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:00.008 11:31:30 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:00.008 11:31:30 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.008 11:31:30 -- common/autotest_common.sh@10 -- # set +x 00:08:00.008 Null2 00:08:00.008 11:31:30 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.008 11:31:30 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:00.008 11:31:30 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.008 11:31:30 -- common/autotest_common.sh@10 -- # set +x 00:08:00.008 11:31:30 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.008 11:31:30 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:00.008 11:31:30 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.008 11:31:30 -- common/autotest_common.sh@10 -- # set +x 00:08:00.008 11:31:30 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.008 11:31:30 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:08:00.008 11:31:30 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.008 11:31:30 -- common/autotest_common.sh@10 -- # set +x 00:08:00.008 11:31:30 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.008 11:31:30 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:00.008 11:31:30 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 
512 00:08:00.008 11:31:30 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.008 11:31:30 -- common/autotest_common.sh@10 -- # set +x 00:08:00.008 Null3 00:08:00.008 11:31:30 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.008 11:31:30 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:00.008 11:31:30 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.008 11:31:30 -- common/autotest_common.sh@10 -- # set +x 00:08:00.008 11:31:30 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.008 11:31:30 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:00.008 11:31:30 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.008 11:31:30 -- common/autotest_common.sh@10 -- # set +x 00:08:00.008 11:31:30 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.008 11:31:30 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:08:00.008 11:31:30 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.008 11:31:30 -- common/autotest_common.sh@10 -- # set +x 00:08:00.008 11:31:30 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.008 11:31:30 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:00.008 11:31:30 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:00.008 11:31:30 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.008 11:31:30 -- common/autotest_common.sh@10 -- # set +x 00:08:00.008 Null4 00:08:00.008 11:31:30 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.008 11:31:30 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:00.008 11:31:30 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.008 11:31:30 -- common/autotest_common.sh@10 -- # set +x 00:08:00.008 11:31:30 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.008 11:31:30 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:00.008 11:31:30 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.008 11:31:30 -- common/autotest_common.sh@10 -- # set +x 00:08:00.008 11:31:30 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.008 11:31:30 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:08:00.008 11:31:30 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.008 11:31:30 -- common/autotest_common.sh@10 -- # set +x 00:08:00.008 11:31:30 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.008 11:31:30 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:00.008 11:31:30 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.008 11:31:30 -- common/autotest_common.sh@10 -- # set +x 00:08:00.008 11:31:30 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.008 11:31:30 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:08:00.008 11:31:30 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.008 11:31:30 -- common/autotest_common.sh@10 -- # set +x 00:08:00.008 11:31:30 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.008 11:31:30 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 
--hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:08:00.267 00:08:00.267 Discovery Log Number of Records 6, Generation counter 6 00:08:00.267 =====Discovery Log Entry 0====== 00:08:00.267 trtype: rdma 00:08:00.267 adrfam: ipv4 00:08:00.267 subtype: current discovery subsystem 00:08:00.267 treq: not required 00:08:00.267 portid: 0 00:08:00.267 trsvcid: 4420 00:08:00.267 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:00.267 traddr: 192.168.100.8 00:08:00.267 eflags: explicit discovery connections, duplicate discovery information 00:08:00.267 rdma_prtype: not specified 00:08:00.267 rdma_qptype: connected 00:08:00.267 rdma_cms: rdma-cm 00:08:00.267 rdma_pkey: 0x0000 00:08:00.267 =====Discovery Log Entry 1====== 00:08:00.267 trtype: rdma 00:08:00.267 adrfam: ipv4 00:08:00.267 subtype: nvme subsystem 00:08:00.267 treq: not required 00:08:00.267 portid: 0 00:08:00.267 trsvcid: 4420 00:08:00.267 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:00.267 traddr: 192.168.100.8 00:08:00.267 eflags: none 00:08:00.267 rdma_prtype: not specified 00:08:00.267 rdma_qptype: connected 00:08:00.267 rdma_cms: rdma-cm 00:08:00.267 rdma_pkey: 0x0000 00:08:00.267 =====Discovery Log Entry 2====== 00:08:00.267 trtype: rdma 00:08:00.267 adrfam: ipv4 00:08:00.267 subtype: nvme subsystem 00:08:00.267 treq: not required 00:08:00.267 portid: 0 00:08:00.267 trsvcid: 4420 00:08:00.267 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:00.267 traddr: 192.168.100.8 00:08:00.267 eflags: none 00:08:00.267 rdma_prtype: not specified 00:08:00.267 rdma_qptype: connected 00:08:00.268 rdma_cms: rdma-cm 00:08:00.268 rdma_pkey: 0x0000 00:08:00.268 =====Discovery Log Entry 3====== 00:08:00.268 trtype: rdma 00:08:00.268 adrfam: ipv4 00:08:00.268 subtype: nvme subsystem 00:08:00.268 treq: not required 00:08:00.268 portid: 0 00:08:00.268 trsvcid: 4420 00:08:00.268 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:00.268 traddr: 192.168.100.8 00:08:00.268 eflags: none 00:08:00.268 rdma_prtype: not specified 00:08:00.268 rdma_qptype: connected 00:08:00.268 rdma_cms: rdma-cm 00:08:00.268 rdma_pkey: 0x0000 00:08:00.268 =====Discovery Log Entry 4====== 00:08:00.268 trtype: rdma 00:08:00.268 adrfam: ipv4 00:08:00.268 subtype: nvme subsystem 00:08:00.268 treq: not required 00:08:00.268 portid: 0 00:08:00.268 trsvcid: 4420 00:08:00.268 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:00.268 traddr: 192.168.100.8 00:08:00.268 eflags: none 00:08:00.268 rdma_prtype: not specified 00:08:00.268 rdma_qptype: connected 00:08:00.268 rdma_cms: rdma-cm 00:08:00.268 rdma_pkey: 0x0000 00:08:00.268 =====Discovery Log Entry 5====== 00:08:00.268 trtype: rdma 00:08:00.268 adrfam: ipv4 00:08:00.268 subtype: discovery subsystem referral 00:08:00.268 treq: not required 00:08:00.268 portid: 0 00:08:00.268 trsvcid: 4430 00:08:00.268 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:00.268 traddr: 192.168.100.8 00:08:00.268 eflags: none 00:08:00.268 rdma_prtype: unrecognized 00:08:00.268 rdma_qptype: unrecognized 00:08:00.268 rdma_cms: unrecognized 00:08:00.268 rdma_pkey: 0x0000 00:08:00.268 11:31:30 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:00.268 Perform nvmf subsystem discovery via RPC 00:08:00.268 11:31:30 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:00.268 11:31:30 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.268 11:31:30 -- common/autotest_common.sh@10 -- # set +x 00:08:00.268 [ 00:08:00.268 { 00:08:00.268 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:00.268 "subtype": 
"Discovery", 00:08:00.268 "listen_addresses": [ 00:08:00.268 { 00:08:00.268 "trtype": "RDMA", 00:08:00.268 "adrfam": "IPv4", 00:08:00.268 "traddr": "192.168.100.8", 00:08:00.268 "trsvcid": "4420" 00:08:00.268 } 00:08:00.268 ], 00:08:00.268 "allow_any_host": true, 00:08:00.268 "hosts": [] 00:08:00.268 }, 00:08:00.268 { 00:08:00.268 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:00.268 "subtype": "NVMe", 00:08:00.268 "listen_addresses": [ 00:08:00.268 { 00:08:00.268 "trtype": "RDMA", 00:08:00.268 "adrfam": "IPv4", 00:08:00.268 "traddr": "192.168.100.8", 00:08:00.268 "trsvcid": "4420" 00:08:00.268 } 00:08:00.268 ], 00:08:00.268 "allow_any_host": true, 00:08:00.268 "hosts": [], 00:08:00.268 "serial_number": "SPDK00000000000001", 00:08:00.268 "model_number": "SPDK bdev Controller", 00:08:00.268 "max_namespaces": 32, 00:08:00.268 "min_cntlid": 1, 00:08:00.268 "max_cntlid": 65519, 00:08:00.268 "namespaces": [ 00:08:00.268 { 00:08:00.268 "nsid": 1, 00:08:00.268 "bdev_name": "Null1", 00:08:00.268 "name": "Null1", 00:08:00.268 "nguid": "A78A286D86334DB28EAEFC5A600F337F", 00:08:00.268 "uuid": "a78a286d-8633-4db2-8eae-fc5a600f337f" 00:08:00.268 } 00:08:00.268 ] 00:08:00.268 }, 00:08:00.268 { 00:08:00.268 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:00.268 "subtype": "NVMe", 00:08:00.268 "listen_addresses": [ 00:08:00.268 { 00:08:00.268 "trtype": "RDMA", 00:08:00.268 "adrfam": "IPv4", 00:08:00.268 "traddr": "192.168.100.8", 00:08:00.268 "trsvcid": "4420" 00:08:00.268 } 00:08:00.268 ], 00:08:00.268 "allow_any_host": true, 00:08:00.268 "hosts": [], 00:08:00.268 "serial_number": "SPDK00000000000002", 00:08:00.268 "model_number": "SPDK bdev Controller", 00:08:00.268 "max_namespaces": 32, 00:08:00.268 "min_cntlid": 1, 00:08:00.268 "max_cntlid": 65519, 00:08:00.268 "namespaces": [ 00:08:00.268 { 00:08:00.268 "nsid": 1, 00:08:00.268 "bdev_name": "Null2", 00:08:00.268 "name": "Null2", 00:08:00.268 "nguid": "3316CE9CAFDB4DDF82F49B9BDAD71C27", 00:08:00.268 "uuid": "3316ce9c-afdb-4ddf-82f4-9b9bdad71c27" 00:08:00.268 } 00:08:00.268 ] 00:08:00.268 }, 00:08:00.268 { 00:08:00.268 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:00.268 "subtype": "NVMe", 00:08:00.268 "listen_addresses": [ 00:08:00.268 { 00:08:00.268 "trtype": "RDMA", 00:08:00.268 "adrfam": "IPv4", 00:08:00.268 "traddr": "192.168.100.8", 00:08:00.268 "trsvcid": "4420" 00:08:00.268 } 00:08:00.268 ], 00:08:00.268 "allow_any_host": true, 00:08:00.268 "hosts": [], 00:08:00.268 "serial_number": "SPDK00000000000003", 00:08:00.268 "model_number": "SPDK bdev Controller", 00:08:00.268 "max_namespaces": 32, 00:08:00.268 "min_cntlid": 1, 00:08:00.268 "max_cntlid": 65519, 00:08:00.268 "namespaces": [ 00:08:00.268 { 00:08:00.268 "nsid": 1, 00:08:00.268 "bdev_name": "Null3", 00:08:00.268 "name": "Null3", 00:08:00.268 "nguid": "F55FEA21A2A244D9A61EF2F8818F758A", 00:08:00.268 "uuid": "f55fea21-a2a2-44d9-a61e-f2f8818f758a" 00:08:00.268 } 00:08:00.268 ] 00:08:00.268 }, 00:08:00.268 { 00:08:00.268 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:00.268 "subtype": "NVMe", 00:08:00.268 "listen_addresses": [ 00:08:00.268 { 00:08:00.268 "trtype": "RDMA", 00:08:00.268 "adrfam": "IPv4", 00:08:00.268 "traddr": "192.168.100.8", 00:08:00.268 "trsvcid": "4420" 00:08:00.268 } 00:08:00.268 ], 00:08:00.268 "allow_any_host": true, 00:08:00.268 "hosts": [], 00:08:00.268 "serial_number": "SPDK00000000000004", 00:08:00.268 "model_number": "SPDK bdev Controller", 00:08:00.268 "max_namespaces": 32, 00:08:00.268 "min_cntlid": 1, 00:08:00.268 "max_cntlid": 65519, 00:08:00.268 "namespaces": [ 00:08:00.268 { 
00:08:00.268 "nsid": 1, 00:08:00.268 "bdev_name": "Null4", 00:08:00.268 "name": "Null4", 00:08:00.268 "nguid": "3CD932211BBD42BD817F17E8986EB547", 00:08:00.268 "uuid": "3cd93221-1bbd-42bd-817f-17e8986eb547" 00:08:00.268 } 00:08:00.268 ] 00:08:00.268 } 00:08:00.268 ] 00:08:00.268 11:31:30 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.268 11:31:30 -- target/discovery.sh@42 -- # seq 1 4 00:08:00.268 11:31:30 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:00.268 11:31:30 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:00.268 11:31:30 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.268 11:31:30 -- common/autotest_common.sh@10 -- # set +x 00:08:00.268 11:31:30 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.268 11:31:30 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:00.268 11:31:30 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.268 11:31:30 -- common/autotest_common.sh@10 -- # set +x 00:08:00.268 11:31:30 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.268 11:31:30 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:00.268 11:31:30 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:00.268 11:31:30 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.268 11:31:30 -- common/autotest_common.sh@10 -- # set +x 00:08:00.268 11:31:30 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.268 11:31:30 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:00.268 11:31:30 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.268 11:31:30 -- common/autotest_common.sh@10 -- # set +x 00:08:00.269 11:31:30 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.269 11:31:30 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:00.269 11:31:30 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:00.269 11:31:30 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.269 11:31:30 -- common/autotest_common.sh@10 -- # set +x 00:08:00.269 11:31:30 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.269 11:31:30 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:00.269 11:31:30 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.269 11:31:30 -- common/autotest_common.sh@10 -- # set +x 00:08:00.269 11:31:30 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.269 11:31:30 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:00.269 11:31:30 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:00.269 11:31:30 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.269 11:31:30 -- common/autotest_common.sh@10 -- # set +x 00:08:00.269 11:31:30 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.269 11:31:30 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:00.269 11:31:30 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.269 11:31:30 -- common/autotest_common.sh@10 -- # set +x 00:08:00.269 11:31:30 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.269 11:31:30 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:08:00.269 11:31:30 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.269 11:31:30 -- common/autotest_common.sh@10 -- # set +x 00:08:00.269 11:31:30 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.269 11:31:30 -- 
target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:00.269 11:31:30 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:00.269 11:31:30 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.269 11:31:30 -- common/autotest_common.sh@10 -- # set +x 00:08:00.269 11:31:30 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.269 11:31:30 -- target/discovery.sh@49 -- # check_bdevs= 00:08:00.269 11:31:30 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:00.269 11:31:30 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:00.269 11:31:30 -- target/discovery.sh@57 -- # nvmftestfini 00:08:00.269 11:31:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:00.269 11:31:30 -- nvmf/common.sh@117 -- # sync 00:08:00.269 11:31:30 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:00.269 11:31:30 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:00.269 11:31:30 -- nvmf/common.sh@120 -- # set +e 00:08:00.269 11:31:30 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:00.269 11:31:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:00.269 rmmod nvme_rdma 00:08:00.269 rmmod nvme_fabrics 00:08:00.269 11:31:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:00.269 11:31:30 -- nvmf/common.sh@124 -- # set -e 00:08:00.269 11:31:30 -- nvmf/common.sh@125 -- # return 0 00:08:00.269 11:31:30 -- nvmf/common.sh@478 -- # '[' -n 2920400 ']' 00:08:00.269 11:31:30 -- nvmf/common.sh@479 -- # killprocess 2920400 00:08:00.269 11:31:30 -- common/autotest_common.sh@946 -- # '[' -z 2920400 ']' 00:08:00.269 11:31:30 -- common/autotest_common.sh@950 -- # kill -0 2920400 00:08:00.269 11:31:30 -- common/autotest_common.sh@951 -- # uname 00:08:00.269 11:31:30 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:00.269 11:31:30 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2920400 00:08:00.531 11:31:31 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:00.531 11:31:31 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:00.531 11:31:31 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2920400' 00:08:00.531 killing process with pid 2920400 00:08:00.531 11:31:31 -- common/autotest_common.sh@965 -- # kill 2920400 00:08:00.531 [2024-05-15 11:31:31.040047] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:00.531 11:31:31 -- common/autotest_common.sh@970 -- # wait 2920400 00:08:00.531 [2024-05-15 11:31:31.126553] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:08:00.850 11:31:31 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:00.850 11:31:31 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:08:00.850 00:08:00.850 real 0m8.035s 00:08:00.850 user 0m8.396s 00:08:00.850 sys 0m5.087s 00:08:00.850 11:31:31 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:00.850 11:31:31 -- common/autotest_common.sh@10 -- # set +x 00:08:00.850 ************************************ 00:08:00.850 END TEST nvmf_target_discovery 00:08:00.850 ************************************ 00:08:00.850 11:31:31 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:08:00.850 11:31:31 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:00.850 11:31:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:00.850 11:31:31 -- 
common/autotest_common.sh@10 -- # set +x 00:08:00.850 ************************************ 00:08:00.850 START TEST nvmf_referrals 00:08:00.850 ************************************ 00:08:00.850 11:31:31 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:08:00.850 * Looking for test storage... 00:08:00.850 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:00.850 11:31:31 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:00.850 11:31:31 -- nvmf/common.sh@7 -- # uname -s 00:08:00.850 11:31:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.850 11:31:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.850 11:31:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.850 11:31:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.850 11:31:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.850 11:31:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.850 11:31:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.851 11:31:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.851 11:31:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.851 11:31:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.851 11:31:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:08:00.851 11:31:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:08:00.851 11:31:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.851 11:31:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.851 11:31:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:00.851 11:31:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:00.851 11:31:31 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:00.851 11:31:31 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.851 11:31:31 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.851 11:31:31 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.851 11:31:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.851 11:31:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.851 11:31:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.851 11:31:31 -- paths/export.sh@5 -- # export PATH 00:08:00.851 11:31:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.851 11:31:31 -- nvmf/common.sh@47 -- # : 0 00:08:00.851 11:31:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:00.851 11:31:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:00.851 11:31:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:00.851 11:31:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.851 11:31:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.851 11:31:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:00.851 11:31:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:00.851 11:31:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:00.851 11:31:31 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:00.851 11:31:31 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:00.851 11:31:31 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:00.851 11:31:31 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:00.851 11:31:31 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:00.851 11:31:31 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:00.851 11:31:31 -- target/referrals.sh@37 -- # nvmftestinit 00:08:00.851 11:31:31 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:08:00.851 11:31:31 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:00.851 11:31:31 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:00.851 11:31:31 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:00.851 11:31:31 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:00.851 11:31:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.851 11:31:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:00.851 11:31:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.851 11:31:31 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:00.851 11:31:31 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:00.851 11:31:31 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:00.851 11:31:31 -- common/autotest_common.sh@10 -- # set +x 00:08:06.150 11:31:36 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:06.150 11:31:36 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:06.150 11:31:36 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:06.150 11:31:36 -- nvmf/common.sh@292 -- # pci_net_devs=() 
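The harness now enumerates RDMA-capable NICs by PCI vendor:device ID, bucketing Intel E810 (0x1592/0x159b) and X722 (0x37d2) parts separately from the Mellanox (0x15b3) devices this mlx5 run ends up selecting. A rough standalone sketch of that classification, using lspci instead of the prebuilt pci_bus_cache the real gather_supported_nvmf_pci_devs helper reads, so treat it as an approximation:

    #!/usr/bin/env bash
    # Approximate sketch: bucket PCI functions by vendor:device ID the way the
    # trace below does. The trace matches a specific list of ConnectX device
    # IDs; all 15b3 devices are accepted here for brevity.
    declare -a e810=() x722=() mlx=()
    while read -r addr _ id _; do
      case "$id" in
        8086:1592|8086:159b) e810+=("$addr") ;;   # Intel E810
        8086:37d2)           x722+=("$addr") ;;   # Intel X722
        15b3:*)              mlx+=("$addr")  ;;   # Mellanox ConnectX family
      esac
    done < <(lspci -Dn)
    printf 'e810=%d x722=%d mlx=%d\n' "${#e810[@]}" "${#x722[@]}" "${#mlx[@]}"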
00:08:06.150 11:31:36 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:06.150 11:31:36 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:06.150 11:31:36 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:06.150 11:31:36 -- nvmf/common.sh@295 -- # net_devs=() 00:08:06.150 11:31:36 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:06.150 11:31:36 -- nvmf/common.sh@296 -- # e810=() 00:08:06.150 11:31:36 -- nvmf/common.sh@296 -- # local -ga e810 00:08:06.150 11:31:36 -- nvmf/common.sh@297 -- # x722=() 00:08:06.150 11:31:36 -- nvmf/common.sh@297 -- # local -ga x722 00:08:06.150 11:31:36 -- nvmf/common.sh@298 -- # mlx=() 00:08:06.150 11:31:36 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:06.150 11:31:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:06.150 11:31:36 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:06.150 11:31:36 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:06.150 11:31:36 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:06.150 11:31:36 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:06.150 11:31:36 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:06.150 11:31:36 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:06.150 11:31:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:06.150 11:31:36 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:06.150 11:31:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:06.150 11:31:36 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:06.150 11:31:36 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:06.150 11:31:36 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:06.150 11:31:36 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:06.150 11:31:36 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:06.150 11:31:36 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:06.150 11:31:36 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:06.150 11:31:36 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:06.150 11:31:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:06.150 11:31:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:06.150 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:06.150 11:31:36 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:06.150 11:31:36 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:06.150 11:31:36 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:06.150 11:31:36 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:06.150 11:31:36 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:06.150 11:31:36 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:06.150 11:31:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:06.150 11:31:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:06.150 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:08:06.150 11:31:36 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:06.150 11:31:36 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:06.150 11:31:36 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:06.150 11:31:36 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:06.150 11:31:36 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:06.150 11:31:36 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect 
-i 15' 00:08:06.150 11:31:36 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:06.150 11:31:36 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:06.150 11:31:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:06.150 11:31:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.150 11:31:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:06.150 11:31:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.150 11:31:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:06.150 Found net devices under 0000:18:00.0: mlx_0_0 00:08:06.150 11:31:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.150 11:31:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:06.150 11:31:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.150 11:31:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:06.150 11:31:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.150 11:31:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:06.150 Found net devices under 0000:18:00.1: mlx_0_1 00:08:06.150 11:31:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.150 11:31:36 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:06.150 11:31:36 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:06.150 11:31:36 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:06.150 11:31:36 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:08:06.150 11:31:36 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:08:06.150 11:31:36 -- nvmf/common.sh@409 -- # rdma_device_init 00:08:06.150 11:31:36 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:08:06.150 11:31:36 -- nvmf/common.sh@58 -- # uname 00:08:06.150 11:31:36 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:06.150 11:31:36 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:06.150 11:31:36 -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:06.150 11:31:36 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:06.150 11:31:36 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:06.150 11:31:36 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:06.150 11:31:36 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:06.150 11:31:36 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:06.150 11:31:36 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:08:06.150 11:31:36 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:06.150 11:31:36 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:06.150 11:31:36 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:06.150 11:31:36 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:06.150 11:31:36 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:06.150 11:31:36 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:06.150 11:31:36 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:06.150 11:31:36 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:06.150 11:31:36 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:06.150 11:31:36 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:06.150 11:31:36 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:06.150 11:31:36 -- nvmf/common.sh@105 -- # continue 2 00:08:06.150 11:31:36 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:06.150 11:31:36 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:06.150 11:31:36 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\0 ]] 00:08:06.150 11:31:36 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:06.150 11:31:36 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:06.151 11:31:36 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:06.151 11:31:36 -- nvmf/common.sh@105 -- # continue 2 00:08:06.151 11:31:36 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:06.151 11:31:36 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:06.151 11:31:36 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:06.151 11:31:36 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:06.151 11:31:36 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:06.151 11:31:36 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:06.151 11:31:36 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:06.151 11:31:36 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:06.151 11:31:36 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:06.151 28: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:06.151 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:08:06.151 altname enp24s0f0np0 00:08:06.151 altname ens785f0np0 00:08:06.151 inet 192.168.100.8/24 scope global mlx_0_0 00:08:06.151 valid_lft forever preferred_lft forever 00:08:06.151 11:31:36 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:06.151 11:31:36 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:06.151 11:31:36 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:06.151 11:31:36 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:06.151 11:31:36 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:06.151 11:31:36 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:06.151 11:31:36 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:06.151 11:31:36 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:06.151 11:31:36 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:06.151 29: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:06.151 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:08:06.151 altname enp24s0f1np1 00:08:06.151 altname ens785f1np1 00:08:06.151 inet 192.168.100.9/24 scope global mlx_0_1 00:08:06.151 valid_lft forever preferred_lft forever 00:08:06.151 11:31:36 -- nvmf/common.sh@411 -- # return 0 00:08:06.151 11:31:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:06.151 11:31:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:06.151 11:31:36 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:08:06.151 11:31:36 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:08:06.151 11:31:36 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:06.151 11:31:36 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:06.151 11:31:36 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:06.151 11:31:36 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:06.151 11:31:36 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:06.410 11:31:36 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:06.410 11:31:36 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:06.410 11:31:36 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:06.410 11:31:36 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:06.410 11:31:36 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:06.410 11:31:36 -- nvmf/common.sh@105 -- # continue 2 00:08:06.410 11:31:36 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:06.410 11:31:36 -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:08:06.410 11:31:36 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:06.410 11:31:36 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:06.410 11:31:36 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:06.410 11:31:36 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:06.410 11:31:36 -- nvmf/common.sh@105 -- # continue 2 00:08:06.410 11:31:36 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:06.410 11:31:36 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:06.410 11:31:36 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:06.410 11:31:36 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:06.410 11:31:36 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:06.410 11:31:36 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:06.410 11:31:36 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:06.410 11:31:36 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:06.410 11:31:36 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:06.410 11:31:36 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:06.410 11:31:36 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:06.410 11:31:36 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:06.410 11:31:36 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:08:06.410 192.168.100.9' 00:08:06.410 11:31:36 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:06.410 192.168.100.9' 00:08:06.410 11:31:36 -- nvmf/common.sh@446 -- # head -n 1 00:08:06.410 11:31:36 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:06.410 11:31:36 -- nvmf/common.sh@447 -- # tail -n +2 00:08:06.410 11:31:36 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:08:06.410 192.168.100.9' 00:08:06.410 11:31:36 -- nvmf/common.sh@447 -- # head -n 1 00:08:06.410 11:31:36 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:06.410 11:31:36 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:08:06.410 11:31:36 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:06.410 11:31:36 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:08:06.410 11:31:36 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:08:06.410 11:31:36 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:08:06.410 11:31:36 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:06.410 11:31:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:06.410 11:31:36 -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:06.410 11:31:36 -- common/autotest_common.sh@10 -- # set +x 00:08:06.410 11:31:36 -- nvmf/common.sh@470 -- # nvmfpid=2923499 00:08:06.410 11:31:36 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:06.410 11:31:36 -- nvmf/common.sh@471 -- # waitforlisten 2923499 00:08:06.410 11:31:36 -- common/autotest_common.sh@827 -- # '[' -z 2923499 ']' 00:08:06.410 11:31:36 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.410 11:31:36 -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:06.410 11:31:37 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:06.410 11:31:37 -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:06.410 11:31:37 -- common/autotest_common.sh@10 -- # set +x 00:08:06.410 [2024-05-15 11:31:37.049737] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:08:06.410 [2024-05-15 11:31:37.049796] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.410 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.410 [2024-05-15 11:31:37.121758] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:06.669 [2024-05-15 11:31:37.212594] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:06.669 [2024-05-15 11:31:37.212640] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:06.669 [2024-05-15 11:31:37.212649] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:06.669 [2024-05-15 11:31:37.212658] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:06.669 [2024-05-15 11:31:37.212669] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:06.669 [2024-05-15 11:31:37.212720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.669 [2024-05-15 11:31:37.212810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:06.669 [2024-05-15 11:31:37.212888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:06.669 [2024-05-15 11:31:37.212889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.238 11:31:37 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:07.238 11:31:37 -- common/autotest_common.sh@860 -- # return 0 00:08:07.238 11:31:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:07.238 11:31:37 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:07.238 11:31:37 -- common/autotest_common.sh@10 -- # set +x 00:08:07.238 11:31:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:07.238 11:31:37 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:07.238 11:31:37 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.238 11:31:37 -- common/autotest_common.sh@10 -- # set +x 00:08:07.238 [2024-05-15 11:31:37.930032] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6adf00/0x6b23f0) succeed. 00:08:07.238 [2024-05-15 11:31:37.940644] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6af540/0x6f3a80) succeed. 
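With both mlx5 IB devices created, the referrals test drives a short RPC sequence: create the RDMA transport, expose a discovery listener on port 8009, register three referrals on port 4430, then verify the count both over RPC and through a host-side discovery log page. A hedged sketch of that flow, calling scripts/rpc.py directly where the trace goes through the rpc_cmd wrapper (paths assume an SPDK checkout):

    #!/usr/bin/env bash
    # RPC flow exercised next in the trace.
    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      $rpc nvmf_discovery_add_referral -t rdma -a "$ip" -s 4430
    done
    $rpc nvmf_discovery_get_referrals | jq length   # the test asserts 3
    # Cross-check from the host side; the jq filter is the one the test uses
    # to drop the current discovery subsystem from the returned records.
    nvme discover -t rdma -a 192.168.100.8 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
      | sort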
00:08:07.498 11:31:38 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.498 11:31:38 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:08:07.498 11:31:38 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.498 11:31:38 -- common/autotest_common.sh@10 -- # set +x 00:08:07.498 [2024-05-15 11:31:38.073747] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:07.498 [2024-05-15 11:31:38.074083] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:08:07.498 11:31:38 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.498 11:31:38 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:08:07.498 11:31:38 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.498 11:31:38 -- common/autotest_common.sh@10 -- # set +x 00:08:07.498 11:31:38 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.498 11:31:38 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:08:07.498 11:31:38 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.498 11:31:38 -- common/autotest_common.sh@10 -- # set +x 00:08:07.498 11:31:38 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.498 11:31:38 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:08:07.498 11:31:38 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.498 11:31:38 -- common/autotest_common.sh@10 -- # set +x 00:08:07.498 11:31:38 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.498 11:31:38 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:07.498 11:31:38 -- target/referrals.sh@48 -- # jq length 00:08:07.498 11:31:38 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.498 11:31:38 -- common/autotest_common.sh@10 -- # set +x 00:08:07.498 11:31:38 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.498 11:31:38 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:07.498 11:31:38 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:07.498 11:31:38 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:07.498 11:31:38 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:07.498 11:31:38 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:07.498 11:31:38 -- target/referrals.sh@21 -- # sort 00:08:07.498 11:31:38 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.498 11:31:38 -- common/autotest_common.sh@10 -- # set +x 00:08:07.498 11:31:38 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.498 11:31:38 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:07.498 11:31:38 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:07.498 11:31:38 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:07.498 11:31:38 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:07.498 11:31:38 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:07.498 11:31:38 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:07.498 11:31:38 -- target/referrals.sh@26 -- 
# jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:07.498 11:31:38 -- target/referrals.sh@26 -- # sort 00:08:07.757 11:31:38 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:07.757 11:31:38 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:07.757 11:31:38 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:08:07.757 11:31:38 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.757 11:31:38 -- common/autotest_common.sh@10 -- # set +x 00:08:07.757 11:31:38 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.757 11:31:38 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:08:07.757 11:31:38 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.757 11:31:38 -- common/autotest_common.sh@10 -- # set +x 00:08:07.757 11:31:38 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.757 11:31:38 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:08:07.757 11:31:38 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.757 11:31:38 -- common/autotest_common.sh@10 -- # set +x 00:08:07.757 11:31:38 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.757 11:31:38 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:07.757 11:31:38 -- target/referrals.sh@56 -- # jq length 00:08:07.757 11:31:38 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.757 11:31:38 -- common/autotest_common.sh@10 -- # set +x 00:08:07.757 11:31:38 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.757 11:31:38 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:07.757 11:31:38 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:07.757 11:31:38 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:07.757 11:31:38 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:07.757 11:31:38 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:07.757 11:31:38 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:07.757 11:31:38 -- target/referrals.sh@26 -- # sort 00:08:07.757 11:31:38 -- target/referrals.sh@26 -- # echo 00:08:07.757 11:31:38 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:07.757 11:31:38 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:08:07.757 11:31:38 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.757 11:31:38 -- common/autotest_common.sh@10 -- # set +x 00:08:07.757 11:31:38 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.757 11:31:38 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:07.757 11:31:38 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.757 11:31:38 -- common/autotest_common.sh@10 -- # set +x 00:08:07.757 11:31:38 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.757 11:31:38 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:07.757 11:31:38 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:07.757 11:31:38 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:07.757 11:31:38 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.757 11:31:38 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:07.757 11:31:38 -- common/autotest_common.sh@10 -- # set +x 00:08:07.757 11:31:38 -- target/referrals.sh@21 -- # sort 00:08:07.757 11:31:38 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.015 11:31:38 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:08.015 11:31:38 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:08.015 11:31:38 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:08.015 11:31:38 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:08.015 11:31:38 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:08.015 11:31:38 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:08.015 11:31:38 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:08.015 11:31:38 -- target/referrals.sh@26 -- # sort 00:08:08.015 11:31:38 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:08.015 11:31:38 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:08.015 11:31:38 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:08.015 11:31:38 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:08.015 11:31:38 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:08.015 11:31:38 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:08.015 11:31:38 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:08.015 11:31:38 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:08.015 11:31:38 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:08.015 11:31:38 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:08.015 11:31:38 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:08.015 11:31:38 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:08.015 11:31:38 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:08.274 11:31:38 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:08.274 11:31:38 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:08.274 11:31:38 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.274 11:31:38 -- common/autotest_common.sh@10 -- # set +x 00:08:08.274 11:31:38 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.274 11:31:38 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:08.274 11:31:38 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:08.274 11:31:38 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:08.274 11:31:38 -- target/referrals.sh@21 -- # 
jq -r '.[].address.traddr' 00:08:08.274 11:31:38 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.274 11:31:38 -- common/autotest_common.sh@10 -- # set +x 00:08:08.274 11:31:38 -- target/referrals.sh@21 -- # sort 00:08:08.274 11:31:38 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.274 11:31:38 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:08.274 11:31:38 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:08.274 11:31:38 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:08.274 11:31:38 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:08.274 11:31:38 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:08.274 11:31:38 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:08.274 11:31:38 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:08.274 11:31:38 -- target/referrals.sh@26 -- # sort 00:08:08.274 11:31:38 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:08.274 11:31:38 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:08.274 11:31:38 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:08.274 11:31:38 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:08.274 11:31:38 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:08.274 11:31:38 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:08.274 11:31:38 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:08.533 11:31:39 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:08.533 11:31:39 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:08.533 11:31:39 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:08.533 11:31:39 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:08.533 11:31:39 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:08.533 11:31:39 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:08.533 11:31:39 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:08.533 11:31:39 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:08.533 11:31:39 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.533 11:31:39 -- common/autotest_common.sh@10 -- # set +x 00:08:08.533 11:31:39 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.533 11:31:39 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:08.533 11:31:39 -- target/referrals.sh@82 -- # jq length 00:08:08.533 11:31:39 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.533 11:31:39 -- common/autotest_common.sh@10 -- # set +x 00:08:08.533 11:31:39 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.533 11:31:39 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:08.533 11:31:39 -- 
target/referrals.sh@83 -- # get_referral_ips nvme 00:08:08.533 11:31:39 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:08.533 11:31:39 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:08.533 11:31:39 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:08.533 11:31:39 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:08.533 11:31:39 -- target/referrals.sh@26 -- # sort 00:08:08.792 11:31:39 -- target/referrals.sh@26 -- # echo 00:08:08.792 11:31:39 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:08.792 11:31:39 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:08.792 11:31:39 -- target/referrals.sh@86 -- # nvmftestfini 00:08:08.792 11:31:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:08.792 11:31:39 -- nvmf/common.sh@117 -- # sync 00:08:08.792 11:31:39 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:08.792 11:31:39 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:08.792 11:31:39 -- nvmf/common.sh@120 -- # set +e 00:08:08.792 11:31:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:08.792 11:31:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:08.792 rmmod nvme_rdma 00:08:08.792 rmmod nvme_fabrics 00:08:08.792 11:31:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:08.792 11:31:39 -- nvmf/common.sh@124 -- # set -e 00:08:08.792 11:31:39 -- nvmf/common.sh@125 -- # return 0 00:08:08.792 11:31:39 -- nvmf/common.sh@478 -- # '[' -n 2923499 ']' 00:08:08.792 11:31:39 -- nvmf/common.sh@479 -- # killprocess 2923499 00:08:08.792 11:31:39 -- common/autotest_common.sh@946 -- # '[' -z 2923499 ']' 00:08:08.792 11:31:39 -- common/autotest_common.sh@950 -- # kill -0 2923499 00:08:08.792 11:31:39 -- common/autotest_common.sh@951 -- # uname 00:08:08.792 11:31:39 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:08.792 11:31:39 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2923499 00:08:08.792 11:31:39 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:08.792 11:31:39 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:08.792 11:31:39 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2923499' 00:08:08.792 killing process with pid 2923499 00:08:08.792 11:31:39 -- common/autotest_common.sh@965 -- # kill 2923499 00:08:08.792 [2024-05-15 11:31:39.405237] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:08.792 11:31:39 -- common/autotest_common.sh@970 -- # wait 2923499 00:08:08.792 [2024-05-15 11:31:39.488068] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:08:09.052 11:31:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:09.052 11:31:39 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:08:09.052 00:08:09.052 real 0m8.278s 00:08:09.052 user 0m12.065s 00:08:09.052 sys 0m5.012s 00:08:09.052 11:31:39 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:09.052 11:31:39 -- common/autotest_common.sh@10 -- # set +x 00:08:09.052 ************************************ 00:08:09.052 END TEST nvmf_referrals 00:08:09.052 ************************************ 00:08:09.052 11:31:39 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:08:09.052 11:31:39 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:09.052 11:31:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:09.052 11:31:39 -- common/autotest_common.sh@10 -- # set +x 00:08:09.052 ************************************ 00:08:09.052 START TEST nvmf_connect_disconnect 00:08:09.052 ************************************ 00:08:09.052 11:31:39 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:08:09.312 * Looking for test storage... 00:08:09.312 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:09.312 11:31:39 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:09.312 11:31:39 -- nvmf/common.sh@7 -- # uname -s 00:08:09.312 11:31:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:09.312 11:31:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:09.312 11:31:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:09.312 11:31:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:09.312 11:31:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:09.312 11:31:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:09.312 11:31:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:09.312 11:31:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:09.312 11:31:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:09.312 11:31:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:09.312 11:31:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:08:09.312 11:31:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:08:09.312 11:31:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:09.312 11:31:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:09.312 11:31:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:09.312 11:31:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:09.312 11:31:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:09.312 11:31:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.312 11:31:39 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.312 11:31:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.312 11:31:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.312 11:31:39 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.312 11:31:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.312 11:31:39 -- paths/export.sh@5 -- # export PATH 00:08:09.312 11:31:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.312 11:31:39 -- nvmf/common.sh@47 -- # : 0 00:08:09.312 11:31:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:09.312 11:31:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:09.312 11:31:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:09.312 11:31:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:09.312 11:31:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:09.312 11:31:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:09.312 11:31:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:09.312 11:31:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:09.312 11:31:39 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:09.312 11:31:39 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:09.312 11:31:39 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:09.312 11:31:39 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:08:09.312 11:31:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:09.312 11:31:39 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:09.312 11:31:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:09.312 11:31:39 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:09.312 11:31:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.312 11:31:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:09.312 11:31:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.312 11:31:39 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:09.312 11:31:39 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:09.312 11:31:39 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:09.312 11:31:39 -- common/autotest_common.sh@10 -- # set +x 00:08:15.882 11:31:45 -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci 00:08:15.882 11:31:45 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:15.882 11:31:45 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:15.882 11:31:45 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:15.882 11:31:45 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:15.882 11:31:45 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:15.882 11:31:45 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:15.882 11:31:45 -- nvmf/common.sh@295 -- # net_devs=() 00:08:15.882 11:31:45 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:15.882 11:31:45 -- nvmf/common.sh@296 -- # e810=() 00:08:15.882 11:31:45 -- nvmf/common.sh@296 -- # local -ga e810 00:08:15.882 11:31:45 -- nvmf/common.sh@297 -- # x722=() 00:08:15.882 11:31:45 -- nvmf/common.sh@297 -- # local -ga x722 00:08:15.882 11:31:45 -- nvmf/common.sh@298 -- # mlx=() 00:08:15.882 11:31:45 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:15.882 11:31:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:15.882 11:31:45 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:15.882 11:31:45 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:15.882 11:31:45 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:15.882 11:31:45 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:15.882 11:31:45 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:15.882 11:31:45 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:15.882 11:31:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:15.882 11:31:45 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:15.882 11:31:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:15.882 11:31:45 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:15.882 11:31:45 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:15.882 11:31:45 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:15.882 11:31:45 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:15.882 11:31:45 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:15.882 11:31:45 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:15.882 11:31:45 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:15.882 11:31:45 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:15.882 11:31:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:15.882 11:31:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:15.882 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:15.882 11:31:45 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:15.882 11:31:45 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:15.882 11:31:45 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:15.882 11:31:45 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:15.882 11:31:45 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:15.882 11:31:45 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:15.882 11:31:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:15.882 11:31:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:15.882 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:08:15.882 11:31:45 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:15.882 11:31:45 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:15.882 11:31:45 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 
]] 00:08:15.882 11:31:45 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:15.882 11:31:45 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:15.882 11:31:45 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:15.882 11:31:45 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:15.882 11:31:45 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:15.882 11:31:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:15.882 11:31:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.882 11:31:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:15.882 11:31:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.882 11:31:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:15.882 Found net devices under 0000:18:00.0: mlx_0_0 00:08:15.882 11:31:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.882 11:31:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:15.882 11:31:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.882 11:31:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:15.882 11:31:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.882 11:31:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:15.882 Found net devices under 0000:18:00.1: mlx_0_1 00:08:15.882 11:31:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.882 11:31:45 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:15.882 11:31:45 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:15.882 11:31:45 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:15.882 11:31:45 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:08:15.882 11:31:45 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:08:15.882 11:31:45 -- nvmf/common.sh@409 -- # rdma_device_init 00:08:15.882 11:31:45 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:08:15.882 11:31:45 -- nvmf/common.sh@58 -- # uname 00:08:15.882 11:31:45 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:15.882 11:31:45 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:15.882 11:31:46 -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:15.882 11:31:46 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:15.882 11:31:46 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:15.882 11:31:46 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:15.882 11:31:46 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:15.882 11:31:46 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:15.882 11:31:46 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:08:15.882 11:31:46 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:15.882 11:31:46 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:15.882 11:31:46 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:15.882 11:31:46 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:15.882 11:31:46 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:15.882 11:31:46 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:15.882 11:31:46 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:15.882 11:31:46 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:15.882 11:31:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:15.882 11:31:46 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:15.882 11:31:46 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:15.882 11:31:46 -- nvmf/common.sh@105 -- # continue 2 00:08:15.882 
11:31:46 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:15.882 11:31:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:15.882 11:31:46 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:15.882 11:31:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:15.883 11:31:46 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:15.883 11:31:46 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:15.883 11:31:46 -- nvmf/common.sh@105 -- # continue 2 00:08:15.883 11:31:46 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:15.883 11:31:46 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:15.883 11:31:46 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:15.883 11:31:46 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:15.883 11:31:46 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:15.883 11:31:46 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:15.883 11:31:46 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:15.883 11:31:46 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:15.883 11:31:46 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:15.883 28: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:15.883 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:08:15.883 altname enp24s0f0np0 00:08:15.883 altname ens785f0np0 00:08:15.883 inet 192.168.100.8/24 scope global mlx_0_0 00:08:15.883 valid_lft forever preferred_lft forever 00:08:15.883 11:31:46 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:15.883 11:31:46 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:15.883 11:31:46 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:15.883 11:31:46 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:15.883 11:31:46 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:15.883 11:31:46 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:15.883 11:31:46 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:15.883 11:31:46 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:15.883 11:31:46 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:15.883 29: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:15.883 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:08:15.883 altname enp24s0f1np1 00:08:15.883 altname ens785f1np1 00:08:15.883 inet 192.168.100.9/24 scope global mlx_0_1 00:08:15.883 valid_lft forever preferred_lft forever 00:08:15.883 11:31:46 -- nvmf/common.sh@411 -- # return 0 00:08:15.883 11:31:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:15.883 11:31:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:15.883 11:31:46 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:08:15.883 11:31:46 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:08:15.883 11:31:46 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:15.883 11:31:46 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:15.883 11:31:46 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:15.883 11:31:46 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:15.883 11:31:46 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:15.883 11:31:46 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:15.883 11:31:46 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:15.883 11:31:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:15.883 11:31:46 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:15.883 11:31:46 -- nvmf/common.sh@104 -- # 
echo mlx_0_0 00:08:15.883 11:31:46 -- nvmf/common.sh@105 -- # continue 2 00:08:15.883 11:31:46 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:15.883 11:31:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:15.883 11:31:46 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:15.883 11:31:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:15.883 11:31:46 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:15.883 11:31:46 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:15.883 11:31:46 -- nvmf/common.sh@105 -- # continue 2 00:08:15.883 11:31:46 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:15.883 11:31:46 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:15.883 11:31:46 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:15.883 11:31:46 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:15.883 11:31:46 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:15.883 11:31:46 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:15.883 11:31:46 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:15.883 11:31:46 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:15.883 11:31:46 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:15.883 11:31:46 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:15.883 11:31:46 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:15.883 11:31:46 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:15.883 11:31:46 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:08:15.883 192.168.100.9' 00:08:15.883 11:31:46 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:15.883 192.168.100.9' 00:08:15.883 11:31:46 -- nvmf/common.sh@446 -- # head -n 1 00:08:15.883 11:31:46 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:15.883 11:31:46 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:08:15.883 192.168.100.9' 00:08:15.883 11:31:46 -- nvmf/common.sh@447 -- # tail -n +2 00:08:15.883 11:31:46 -- nvmf/common.sh@447 -- # head -n 1 00:08:15.883 11:31:46 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:15.883 11:31:46 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:08:15.883 11:31:46 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:15.883 11:31:46 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:08:15.883 11:31:46 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:08:15.883 11:31:46 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:08:15.883 11:31:46 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:15.883 11:31:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:15.883 11:31:46 -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:15.883 11:31:46 -- common/autotest_common.sh@10 -- # set +x 00:08:15.883 11:31:46 -- nvmf/common.sh@470 -- # nvmfpid=2926775 00:08:15.883 11:31:46 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:15.883 11:31:46 -- nvmf/common.sh@471 -- # waitforlisten 2926775 00:08:15.883 11:31:46 -- common/autotest_common.sh@827 -- # '[' -z 2926775 ']' 00:08:15.883 11:31:46 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.883 11:31:46 -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:15.883 11:31:46 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
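The next records show the start-up handshake: nvmf_tgt is launched with the core mask from autorun-spdk.conf and the harness blocks in waitforlisten until the RPC socket answers. A minimal stand-in for that wait, assuming the default /var/tmp/spdk.sock socket (the real waitforlisten helper also bounds the number of retries):

    #!/usr/bin/env bash
    # Launch the target and poll the RPC socket; rpc_get_methods is a cheap
    # call that succeeds as soon as the app is listening.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
    done
    echo "nvmf_tgt ($nvmfpid) is listening"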
00:08:15.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.883 11:31:46 -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:15.883 11:31:46 -- common/autotest_common.sh@10 -- # set +x 00:08:15.883 [2024-05-15 11:31:46.255584] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:08:15.883 [2024-05-15 11:31:46.255639] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.883 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.883 [2024-05-15 11:31:46.328916] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:15.883 [2024-05-15 11:31:46.420752] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.883 [2024-05-15 11:31:46.420794] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.883 [2024-05-15 11:31:46.420804] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:15.883 [2024-05-15 11:31:46.420812] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:15.883 [2024-05-15 11:31:46.420820] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:15.883 [2024-05-15 11:31:46.420912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.883 [2024-05-15 11:31:46.420998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:15.883 [2024-05-15 11:31:46.421092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:15.883 [2024-05-15 11:31:46.421099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.451 11:31:47 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:16.451 11:31:47 -- common/autotest_common.sh@860 -- # return 0 00:08:16.451 11:31:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:16.451 11:31:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:16.451 11:31:47 -- common/autotest_common.sh@10 -- # set +x 00:08:16.451 11:31:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.451 11:31:47 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:08:16.451 11:31:47 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.451 11:31:47 -- common/autotest_common.sh@10 -- # set +x 00:08:16.451 [2024-05-15 11:31:47.125970] rdma.c:2712:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:08:16.451 [2024-05-15 11:31:47.147866] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1035f00/0x103a3f0) succeed. 00:08:16.451 [2024-05-15 11:31:47.158461] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1037540/0x107ba80) succeed. 
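The rpc_cmd bring-up that follows (malloc bdev, subsystem, namespace, listener) mirrors SPDK's standard target setup; a minimal by-hand sketch using scripts/rpc.py, assuming the default /var/tmp/spdk.sock and the NQN/serial used by this run:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # convenience variable, not from the harness
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
    $rpc bdev_malloc_create 64 512                                     # prints the bdev name, e.g. Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420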
00:08:16.711 11:31:47 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.711 11:31:47 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:16.711 11:31:47 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.711 11:31:47 -- common/autotest_common.sh@10 -- # set +x 00:08:16.711 11:31:47 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.711 11:31:47 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:16.711 11:31:47 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:16.711 11:31:47 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.711 11:31:47 -- common/autotest_common.sh@10 -- # set +x 00:08:16.711 11:31:47 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.711 11:31:47 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:16.711 11:31:47 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.711 11:31:47 -- common/autotest_common.sh@10 -- # set +x 00:08:16.711 11:31:47 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.711 11:31:47 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:16.711 11:31:47 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.711 11:31:47 -- common/autotest_common.sh@10 -- # set +x 00:08:16.711 [2024-05-15 11:31:47.305644] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:16.711 [2024-05-15 11:31:47.306033] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:16.711 11:31:47 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.711 11:31:47 -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:16.711 11:31:47 -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:16.711 11:31:47 -- target/connect_disconnect.sh@34 -- # set +x 00:08:20.902 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:25.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:29.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:32.579 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:36.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:36.771 11:32:07 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:36.771 11:32:07 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:36.771 11:32:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:36.771 11:32:07 -- nvmf/common.sh@117 -- # sync 00:08:36.771 11:32:07 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:36.771 11:32:07 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:36.771 11:32:07 -- nvmf/common.sh@120 -- # set +e 00:08:36.771 11:32:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:36.771 11:32:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:36.771 rmmod nvme_rdma 00:08:36.771 rmmod nvme_fabrics 00:08:36.771 11:32:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:36.771 11:32:07 -- nvmf/common.sh@124 -- # set -e 00:08:36.771 11:32:07 -- nvmf/common.sh@125 -- # return 0 00:08:36.771 11:32:07 -- nvmf/common.sh@478 -- # '[' -n 2926775 ']' 00:08:36.771 11:32:07 -- nvmf/common.sh@479 -- # killprocess 2926775 00:08:36.771 11:32:07 -- common/autotest_common.sh@946 -- # 
'[' -z 2926775 ']' 00:08:36.771 11:32:07 -- common/autotest_common.sh@950 -- # kill -0 2926775 00:08:36.771 11:32:07 -- common/autotest_common.sh@951 -- # uname 00:08:36.771 11:32:07 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:36.771 11:32:07 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2926775 00:08:36.771 11:32:07 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:36.771 11:32:07 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:36.771 11:32:07 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2926775' 00:08:36.771 killing process with pid 2926775 00:08:36.771 11:32:07 -- common/autotest_common.sh@965 -- # kill 2926775 00:08:36.771 [2024-05-15 11:32:07.297148] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:36.771 11:32:07 -- common/autotest_common.sh@970 -- # wait 2926775 00:08:36.772 [2024-05-15 11:32:07.349892] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:08:37.030 11:32:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:37.030 11:32:07 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:08:37.030 00:08:37.030 real 0m27.814s 00:08:37.030 user 1m25.838s 00:08:37.030 sys 0m5.923s 00:08:37.030 11:32:07 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:37.030 11:32:07 -- common/autotest_common.sh@10 -- # set +x 00:08:37.030 ************************************ 00:08:37.030 END TEST nvmf_connect_disconnect 00:08:37.030 ************************************ 00:08:37.030 11:32:07 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:08:37.030 11:32:07 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:37.030 11:32:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:37.030 11:32:07 -- common/autotest_common.sh@10 -- # set +x 00:08:37.030 ************************************ 00:08:37.030 START TEST nvmf_multitarget 00:08:37.030 ************************************ 00:08:37.030 11:32:07 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:08:37.030 * Looking for test storage... 
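Each of the five iterations above amounts to one connect/disconnect round against the RDMA listener; a sketch of a single round with nvme-cli, using the address and NQN from this run (-i 15 matches the NVME_CONNECT override set earlier for the mlx5 NICs):

    nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # logs "NQN:... disconnected 1 controller(s)"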
00:08:37.030 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:37.030 11:32:07 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:37.030 11:32:07 -- nvmf/common.sh@7 -- # uname -s 00:08:37.030 11:32:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:37.030 11:32:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.030 11:32:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:37.030 11:32:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.030 11:32:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.030 11:32:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.030 11:32:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.030 11:32:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.030 11:32:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.030 11:32:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.030 11:32:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:08:37.030 11:32:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:08:37.030 11:32:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:37.030 11:32:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.030 11:32:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:37.030 11:32:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:37.030 11:32:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:37.030 11:32:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.030 11:32:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.030 11:32:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.030 11:32:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.030 11:32:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.030 11:32:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.030 11:32:07 -- paths/export.sh@5 -- # export PATH 00:08:37.030 11:32:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.030 11:32:07 -- nvmf/common.sh@47 -- # : 0 00:08:37.030 11:32:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:37.327 11:32:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:37.327 11:32:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:37.327 11:32:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:37.327 11:32:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.327 11:32:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:37.327 11:32:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:37.327 11:32:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:37.327 11:32:07 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:37.327 11:32:07 -- target/multitarget.sh@15 -- # nvmftestinit 00:08:37.327 11:32:07 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:08:37.327 11:32:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:37.327 11:32:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:37.327 11:32:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:37.327 11:32:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:37.327 11:32:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.327 11:32:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:37.327 11:32:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.327 11:32:07 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:37.327 11:32:07 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:37.327 11:32:07 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:37.327 11:32:07 -- common/autotest_common.sh@10 -- # set +x 00:08:43.900 11:32:13 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:43.900 11:32:13 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:43.900 11:32:13 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:43.900 11:32:13 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:43.900 11:32:13 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:43.900 11:32:13 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:43.900 11:32:13 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:43.900 11:32:13 -- nvmf/common.sh@295 -- # net_devs=() 00:08:43.900 11:32:13 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:43.900 11:32:13 -- 
nvmf/common.sh@296 -- # e810=() 00:08:43.900 11:32:13 -- nvmf/common.sh@296 -- # local -ga e810 00:08:43.900 11:32:13 -- nvmf/common.sh@297 -- # x722=() 00:08:43.900 11:32:13 -- nvmf/common.sh@297 -- # local -ga x722 00:08:43.900 11:32:13 -- nvmf/common.sh@298 -- # mlx=() 00:08:43.900 11:32:13 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:43.900 11:32:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:43.900 11:32:13 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:43.900 11:32:13 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:43.900 11:32:13 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:43.900 11:32:13 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:43.900 11:32:13 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:43.900 11:32:13 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:43.900 11:32:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:43.900 11:32:13 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:43.900 11:32:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:43.900 11:32:13 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:43.900 11:32:13 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:43.900 11:32:13 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:43.900 11:32:13 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:43.900 11:32:13 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:43.900 11:32:13 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:43.900 11:32:13 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:43.900 11:32:13 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:43.900 11:32:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:43.900 11:32:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:43.900 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:43.900 11:32:13 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:43.900 11:32:13 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:43.900 11:32:13 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:43.900 11:32:13 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:43.900 11:32:13 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:43.900 11:32:13 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:43.900 11:32:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:43.900 11:32:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:43.900 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:08:43.900 11:32:13 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:43.900 11:32:13 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:43.900 11:32:13 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:43.900 11:32:13 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:43.900 11:32:13 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:43.900 11:32:13 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:43.900 11:32:13 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:43.900 11:32:13 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:43.900 11:32:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:43.900 11:32:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.900 11:32:13 -- nvmf/common.sh@384 -- # 
(( 1 == 0 )) 00:08:43.900 11:32:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.900 11:32:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:43.900 Found net devices under 0000:18:00.0: mlx_0_0 00:08:43.900 11:32:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.900 11:32:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:43.900 11:32:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.900 11:32:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:43.900 11:32:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.900 11:32:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:43.900 Found net devices under 0000:18:00.1: mlx_0_1 00:08:43.900 11:32:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.900 11:32:13 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:43.900 11:32:13 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:43.900 11:32:13 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:43.900 11:32:13 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:08:43.900 11:32:13 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:08:43.900 11:32:13 -- nvmf/common.sh@409 -- # rdma_device_init 00:08:43.900 11:32:13 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:08:43.900 11:32:13 -- nvmf/common.sh@58 -- # uname 00:08:43.900 11:32:13 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:43.900 11:32:13 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:43.900 11:32:13 -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:43.900 11:32:13 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:43.900 11:32:13 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:43.900 11:32:13 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:43.900 11:32:13 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:43.900 11:32:13 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:43.900 11:32:13 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:08:43.900 11:32:13 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:43.900 11:32:13 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:43.900 11:32:13 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:43.900 11:32:13 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:43.900 11:32:13 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:43.900 11:32:13 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:43.900 11:32:13 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:43.900 11:32:13 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:43.900 11:32:13 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:43.900 11:32:13 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:43.900 11:32:13 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:43.900 11:32:13 -- nvmf/common.sh@105 -- # continue 2 00:08:43.900 11:32:13 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:43.900 11:32:13 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:43.900 11:32:13 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:43.900 11:32:13 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:43.900 11:32:13 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:43.900 11:32:13 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:43.900 11:32:13 -- nvmf/common.sh@105 -- # continue 2 00:08:43.900 11:32:13 -- nvmf/common.sh@73 -- # for nic_name 
in $(get_rdma_if_list) 00:08:43.900 11:32:13 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:43.900 11:32:13 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:43.900 11:32:13 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:43.900 11:32:13 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:43.900 11:32:13 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:43.900 11:32:13 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:43.900 11:32:13 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:43.900 11:32:13 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:43.900 28: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:43.900 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:08:43.900 altname enp24s0f0np0 00:08:43.900 altname ens785f0np0 00:08:43.900 inet 192.168.100.8/24 scope global mlx_0_0 00:08:43.900 valid_lft forever preferred_lft forever 00:08:43.900 11:32:13 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:43.900 11:32:13 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:43.900 11:32:13 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:43.900 11:32:13 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:43.900 11:32:13 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:43.900 11:32:13 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:43.900 11:32:13 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:43.900 11:32:13 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:43.900 11:32:13 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:43.900 29: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:43.900 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:08:43.901 altname enp24s0f1np1 00:08:43.901 altname ens785f1np1 00:08:43.901 inet 192.168.100.9/24 scope global mlx_0_1 00:08:43.901 valid_lft forever preferred_lft forever 00:08:43.901 11:32:13 -- nvmf/common.sh@411 -- # return 0 00:08:43.901 11:32:13 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:43.901 11:32:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:43.901 11:32:13 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:08:43.901 11:32:13 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:08:43.901 11:32:13 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:43.901 11:32:13 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:43.901 11:32:13 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:43.901 11:32:13 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:43.901 11:32:13 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:43.901 11:32:13 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:43.901 11:32:13 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:43.901 11:32:13 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:43.901 11:32:13 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:43.901 11:32:13 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:43.901 11:32:13 -- nvmf/common.sh@105 -- # continue 2 00:08:43.901 11:32:13 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:43.901 11:32:13 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:43.901 11:32:13 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:43.901 11:32:13 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:43.901 11:32:13 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:43.901 11:32:13 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:43.901 11:32:13 -- 
nvmf/common.sh@105 -- # continue 2 00:08:43.901 11:32:13 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:43.901 11:32:13 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:43.901 11:32:13 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:43.901 11:32:13 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:43.901 11:32:13 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:43.901 11:32:13 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:43.901 11:32:13 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:43.901 11:32:13 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:43.901 11:32:13 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:43.901 11:32:13 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:43.901 11:32:13 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:43.901 11:32:13 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:43.901 11:32:13 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:08:43.901 192.168.100.9' 00:08:43.901 11:32:13 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:43.901 192.168.100.9' 00:08:43.901 11:32:13 -- nvmf/common.sh@446 -- # head -n 1 00:08:43.901 11:32:13 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:43.901 11:32:13 -- nvmf/common.sh@447 -- # tail -n +2 00:08:43.901 11:32:13 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:08:43.901 192.168.100.9' 00:08:43.901 11:32:13 -- nvmf/common.sh@447 -- # head -n 1 00:08:43.901 11:32:13 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:43.901 11:32:13 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:08:43.901 11:32:13 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:43.901 11:32:13 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:08:43.901 11:32:13 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:08:43.901 11:32:13 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:08:43.901 11:32:13 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:43.901 11:32:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:43.901 11:32:13 -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:43.901 11:32:13 -- common/autotest_common.sh@10 -- # set +x 00:08:43.901 11:32:13 -- nvmf/common.sh@470 -- # nvmfpid=2932456 00:08:43.901 11:32:13 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:43.901 11:32:13 -- nvmf/common.sh@471 -- # waitforlisten 2932456 00:08:43.901 11:32:13 -- common/autotest_common.sh@827 -- # '[' -z 2932456 ']' 00:08:43.901 11:32:13 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.901 11:32:13 -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:43.901 11:32:13 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.901 11:32:13 -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:43.901 11:32:13 -- common/autotest_common.sh@10 -- # set +x 00:08:43.901 [2024-05-15 11:32:13.670582] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
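The RDMA_IP_LIST handling traced above (common.sh@445-447) is a head/tail split over a newline-separated address list; the same parse, sketched standalone with the addresses from this run:

    # RDMA_IP_LIST holds one address per line: "192.168.100.8" then "192.168.100.9"
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)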
00:08:43.901 [2024-05-15 11:32:13.670640] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.901 EAL: No free 2048 kB hugepages reported on node 1 00:08:43.901 [2024-05-15 11:32:13.743938] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:43.901 [2024-05-15 11:32:13.839525] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:43.901 [2024-05-15 11:32:13.839569] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:43.901 [2024-05-15 11:32:13.839580] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:43.901 [2024-05-15 11:32:13.839590] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:43.901 [2024-05-15 11:32:13.839602] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:43.901 [2024-05-15 11:32:13.839649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.901 [2024-05-15 11:32:13.839733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:43.901 [2024-05-15 11:32:13.839819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:43.901 [2024-05-15 11:32:13.839821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.901 11:32:14 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:43.901 11:32:14 -- common/autotest_common.sh@860 -- # return 0 00:08:43.901 11:32:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:43.901 11:32:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:43.901 11:32:14 -- common/autotest_common.sh@10 -- # set +x 00:08:43.901 11:32:14 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:43.901 11:32:14 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:43.901 11:32:14 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:43.901 11:32:14 -- target/multitarget.sh@21 -- # jq length 00:08:43.901 11:32:14 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:43.901 11:32:14 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:44.160 "nvmf_tgt_1" 00:08:44.161 11:32:14 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:44.161 "nvmf_tgt_2" 00:08:44.161 11:32:14 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:44.161 11:32:14 -- target/multitarget.sh@28 -- # jq length 00:08:44.419 11:32:14 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:08:44.419 11:32:14 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:44.419 true 00:08:44.419 11:32:15 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:44.419 true 00:08:44.419 11:32:15 -- 
target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:44.419 11:32:15 -- target/multitarget.sh@35 -- # jq length 00:08:44.727 11:32:15 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:44.727 11:32:15 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:44.727 11:32:15 -- target/multitarget.sh@41 -- # nvmftestfini 00:08:44.727 11:32:15 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:44.727 11:32:15 -- nvmf/common.sh@117 -- # sync 00:08:44.727 11:32:15 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:44.727 11:32:15 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:44.727 11:32:15 -- nvmf/common.sh@120 -- # set +e 00:08:44.727 11:32:15 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:44.727 11:32:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:44.727 rmmod nvme_rdma 00:08:44.727 rmmod nvme_fabrics 00:08:44.727 11:32:15 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:44.727 11:32:15 -- nvmf/common.sh@124 -- # set -e 00:08:44.727 11:32:15 -- nvmf/common.sh@125 -- # return 0 00:08:44.727 11:32:15 -- nvmf/common.sh@478 -- # '[' -n 2932456 ']' 00:08:44.727 11:32:15 -- nvmf/common.sh@479 -- # killprocess 2932456 00:08:44.727 11:32:15 -- common/autotest_common.sh@946 -- # '[' -z 2932456 ']' 00:08:44.727 11:32:15 -- common/autotest_common.sh@950 -- # kill -0 2932456 00:08:44.727 11:32:15 -- common/autotest_common.sh@951 -- # uname 00:08:44.727 11:32:15 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:44.727 11:32:15 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2932456 00:08:44.727 11:32:15 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:44.727 11:32:15 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:44.727 11:32:15 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2932456' 00:08:44.727 killing process with pid 2932456 00:08:44.727 11:32:15 -- common/autotest_common.sh@965 -- # kill 2932456 00:08:44.727 11:32:15 -- common/autotest_common.sh@970 -- # wait 2932456 00:08:44.986 11:32:15 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:44.986 11:32:15 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:08:44.986 00:08:44.986 real 0m7.884s 00:08:44.986 user 0m9.175s 00:08:44.986 sys 0m4.960s 00:08:44.986 11:32:15 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:44.986 11:32:15 -- common/autotest_common.sh@10 -- # set +x 00:08:44.986 ************************************ 00:08:44.986 END TEST nvmf_multitarget 00:08:44.986 ************************************ 00:08:44.986 11:32:15 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:08:44.986 11:32:15 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:44.986 11:32:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:44.986 11:32:15 -- common/autotest_common.sh@10 -- # set +x 00:08:44.986 ************************************ 00:08:44.986 START TEST nvmf_rpc 00:08:44.986 ************************************ 00:08:44.986 11:32:15 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:08:44.986 * Looking for test storage... 
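The multitarget pass above drives three RPCs end to end (create, count, delete); the same sequence sketched directly against multitarget_rpc.py, with the target names and the counts this run asserted:

    mrpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    $mrpc nvmf_create_target -n nvmf_tgt_1 -s 32   # prints "nvmf_tgt_1"
    $mrpc nvmf_create_target -n nvmf_tgt_2 -s 32   # prints "nvmf_tgt_2"
    $mrpc nvmf_get_targets | jq length             # 3: the default target plus the two new ones
    $mrpc nvmf_delete_target -n nvmf_tgt_1         # prints "true"
    $mrpc nvmf_delete_target -n nvmf_tgt_2         # prints "true"
    $mrpc nvmf_get_targets | jq length             # back to 1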
00:08:44.986 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:44.986 11:32:15 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:44.986 11:32:15 -- nvmf/common.sh@7 -- # uname -s 00:08:44.986 11:32:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.986 11:32:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.246 11:32:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.246 11:32:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.246 11:32:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.246 11:32:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.246 11:32:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.246 11:32:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.246 11:32:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.246 11:32:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.246 11:32:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:08:45.246 11:32:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:08:45.246 11:32:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.246 11:32:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.246 11:32:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:45.246 11:32:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:45.246 11:32:15 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:45.246 11:32:15 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.246 11:32:15 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.246 11:32:15 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.246 11:32:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.246 11:32:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.246 11:32:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.246 11:32:15 -- paths/export.sh@5 -- # export PATH 00:08:45.246 11:32:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.246 11:32:15 -- nvmf/common.sh@47 -- # : 0 00:08:45.246 11:32:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:45.246 11:32:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:45.246 11:32:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:45.246 11:32:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:45.246 11:32:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.246 11:32:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:45.246 11:32:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:45.246 11:32:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:45.246 11:32:15 -- target/rpc.sh@11 -- # loops=5 00:08:45.246 11:32:15 -- target/rpc.sh@23 -- # nvmftestinit 00:08:45.246 11:32:15 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:08:45.246 11:32:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:45.246 11:32:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:45.246 11:32:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:45.246 11:32:15 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:45.246 11:32:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.246 11:32:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:45.246 11:32:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.246 11:32:15 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:45.246 11:32:15 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:45.246 11:32:15 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:45.246 11:32:15 -- common/autotest_common.sh@10 -- # set +x 00:08:50.522 11:32:21 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:50.522 11:32:21 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:50.522 11:32:21 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:50.522 11:32:21 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:50.522 11:32:21 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:50.522 11:32:21 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:50.522 11:32:21 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:50.522 11:32:21 -- nvmf/common.sh@295 -- # net_devs=() 00:08:50.522 11:32:21 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:50.522 11:32:21 -- nvmf/common.sh@296 -- # e810=() 00:08:50.522 11:32:21 -- nvmf/common.sh@296 -- # local -ga e810 00:08:50.522 
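The device scan traced below matches NICs by vendor:device ID (0x15b3:0x1015 on this rig) and then resolves the net device name through sysfs; outside the harness the same inventory can be sketched as:

    lspci -d 15b3:1015                            # PCI functions for the Mellanox ID this run detected
    ls /sys/bus/pci/devices/0000:18:00.0/net/     # net device behind a function, e.g. mlx_0_0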
11:32:21 -- nvmf/common.sh@297 -- # x722=() 00:08:50.522 11:32:21 -- nvmf/common.sh@297 -- # local -ga x722 00:08:50.522 11:32:21 -- nvmf/common.sh@298 -- # mlx=() 00:08:50.522 11:32:21 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:50.523 11:32:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:50.523 11:32:21 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:50.523 11:32:21 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:50.523 11:32:21 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:50.523 11:32:21 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:50.523 11:32:21 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:50.523 11:32:21 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:50.523 11:32:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:50.523 11:32:21 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:50.523 11:32:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:50.523 11:32:21 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:50.523 11:32:21 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:50.523 11:32:21 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:50.523 11:32:21 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:50.523 11:32:21 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:50.523 11:32:21 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:50.523 11:32:21 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:50.523 11:32:21 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:50.523 11:32:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:50.523 11:32:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:50.523 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:50.523 11:32:21 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:50.523 11:32:21 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:50.523 11:32:21 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:50.523 11:32:21 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:50.523 11:32:21 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:50.523 11:32:21 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:50.523 11:32:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:50.523 11:32:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:50.523 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:08:50.523 11:32:21 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:50.523 11:32:21 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:50.523 11:32:21 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:50.523 11:32:21 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:50.523 11:32:21 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:50.523 11:32:21 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:50.523 11:32:21 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:50.523 11:32:21 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:50.523 11:32:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:50.523 11:32:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.523 11:32:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:50.523 11:32:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:08:50.523 11:32:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:50.523 Found net devices under 0000:18:00.0: mlx_0_0 00:08:50.523 11:32:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.523 11:32:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:50.523 11:32:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.523 11:32:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:50.523 11:32:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.523 11:32:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:50.523 Found net devices under 0000:18:00.1: mlx_0_1 00:08:50.523 11:32:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.523 11:32:21 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:50.523 11:32:21 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:50.523 11:32:21 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:50.523 11:32:21 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:08:50.523 11:32:21 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:08:50.523 11:32:21 -- nvmf/common.sh@409 -- # rdma_device_init 00:08:50.523 11:32:21 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:08:50.523 11:32:21 -- nvmf/common.sh@58 -- # uname 00:08:50.523 11:32:21 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:50.523 11:32:21 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:50.523 11:32:21 -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:50.523 11:32:21 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:50.523 11:32:21 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:50.523 11:32:21 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:50.523 11:32:21 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:50.523 11:32:21 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:50.523 11:32:21 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:08:50.523 11:32:21 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:50.523 11:32:21 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:50.523 11:32:21 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:50.523 11:32:21 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:50.523 11:32:21 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:50.523 11:32:21 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:50.523 11:32:21 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:50.523 11:32:21 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:50.523 11:32:21 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:50.523 11:32:21 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:50.523 11:32:21 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:50.523 11:32:21 -- nvmf/common.sh@105 -- # continue 2 00:08:50.523 11:32:21 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:50.523 11:32:21 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:50.523 11:32:21 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:50.523 11:32:21 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:50.523 11:32:21 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:50.523 11:32:21 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:50.523 11:32:21 -- nvmf/common.sh@105 -- # continue 2 00:08:50.523 11:32:21 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:50.523 11:32:21 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 
00:08:50.523 11:32:21 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:50.523 11:32:21 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:50.523 11:32:21 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:50.523 11:32:21 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:50.523 11:32:21 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:50.523 11:32:21 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:50.523 11:32:21 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:50.523 28: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:50.523 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:08:50.523 altname enp24s0f0np0 00:08:50.523 altname ens785f0np0 00:08:50.523 inet 192.168.100.8/24 scope global mlx_0_0 00:08:50.523 valid_lft forever preferred_lft forever 00:08:50.523 11:32:21 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:50.523 11:32:21 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:50.523 11:32:21 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:50.523 11:32:21 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:50.523 11:32:21 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:50.523 11:32:21 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:50.523 11:32:21 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:50.523 11:32:21 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:50.523 11:32:21 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:50.523 29: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:50.523 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:08:50.523 altname enp24s0f1np1 00:08:50.523 altname ens785f1np1 00:08:50.523 inet 192.168.100.9/24 scope global mlx_0_1 00:08:50.523 valid_lft forever preferred_lft forever 00:08:50.523 11:32:21 -- nvmf/common.sh@411 -- # return 0 00:08:50.523 11:32:21 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:50.523 11:32:21 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:50.523 11:32:21 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:08:50.523 11:32:21 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:08:50.523 11:32:21 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:50.523 11:32:21 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:50.523 11:32:21 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:50.523 11:32:21 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:50.523 11:32:21 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:50.523 11:32:21 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:50.523 11:32:21 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:50.523 11:32:21 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:50.523 11:32:21 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:50.523 11:32:21 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:50.523 11:32:21 -- nvmf/common.sh@105 -- # continue 2 00:08:50.523 11:32:21 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:50.523 11:32:21 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:50.523 11:32:21 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:50.523 11:32:21 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:50.523 11:32:21 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:50.523 11:32:21 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:50.523 11:32:21 -- nvmf/common.sh@105 -- # continue 2 00:08:50.523 11:32:21 -- nvmf/common.sh@86 -- # for nic_name in 
$(get_rdma_if_list) 00:08:50.523 11:32:21 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:50.523 11:32:21 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:50.523 11:32:21 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:50.523 11:32:21 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:50.524 11:32:21 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:50.524 11:32:21 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:50.524 11:32:21 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:50.524 11:32:21 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:50.524 11:32:21 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:50.524 11:32:21 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:50.524 11:32:21 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:50.524 11:32:21 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:08:50.524 192.168.100.9' 00:08:50.524 11:32:21 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:50.524 192.168.100.9' 00:08:50.524 11:32:21 -- nvmf/common.sh@446 -- # head -n 1 00:08:50.524 11:32:21 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:50.524 11:32:21 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:08:50.524 192.168.100.9' 00:08:50.524 11:32:21 -- nvmf/common.sh@447 -- # tail -n +2 00:08:50.524 11:32:21 -- nvmf/common.sh@447 -- # head -n 1 00:08:50.524 11:32:21 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:50.524 11:32:21 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:08:50.524 11:32:21 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:50.524 11:32:21 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:08:50.524 11:32:21 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:08:50.524 11:32:21 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:08:50.524 11:32:21 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:50.524 11:32:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:50.524 11:32:21 -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:50.524 11:32:21 -- common/autotest_common.sh@10 -- # set +x 00:08:50.524 11:32:21 -- nvmf/common.sh@470 -- # nvmfpid=2935565 00:08:50.524 11:32:21 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:50.524 11:32:21 -- nvmf/common.sh@471 -- # waitforlisten 2935565 00:08:50.524 11:32:21 -- common/autotest_common.sh@827 -- # '[' -z 2935565 ']' 00:08:50.524 11:32:21 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.524 11:32:21 -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:50.524 11:32:21 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.524 11:32:21 -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:50.524 11:32:21 -- common/autotest_common.sh@10 -- # set +x 00:08:50.524 [2024-05-15 11:32:21.284487] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
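The rpc.sh test that starts below checks poll-group accounting from nvmf_get_stats; its jcount helper (target/rpc.sh@28 further on) reduces to a jq-plus-wc pipeline, sketched here with the count this run expects for core mask 0xF:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # convenience variable, not from the harness
    $rpc nvmf_get_stats | jq '.poll_groups[].name' | wc -l             # 4 poll groups, one per core in 0xF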
00:08:50.524 [2024-05-15 11:32:21.284550] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:50.783 EAL: No free 2048 kB hugepages reported on node 1 00:08:50.783 [2024-05-15 11:32:21.358342] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:50.783 [2024-05-15 11:32:21.446515] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:50.783 [2024-05-15 11:32:21.446562] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:50.783 [2024-05-15 11:32:21.446571] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:50.783 [2024-05-15 11:32:21.446580] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:50.783 [2024-05-15 11:32:21.446587] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:50.783 [2024-05-15 11:32:21.446643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.783 [2024-05-15 11:32:21.446731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:50.783 [2024-05-15 11:32:21.446797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:50.783 [2024-05-15 11:32:21.446798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.350 11:32:22 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:51.350 11:32:22 -- common/autotest_common.sh@860 -- # return 0 00:08:51.350 11:32:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:51.350 11:32:22 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:51.350 11:32:22 -- common/autotest_common.sh@10 -- # set +x 00:08:51.610 11:32:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:51.610 11:32:22 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:51.610 11:32:22 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.610 11:32:22 -- common/autotest_common.sh@10 -- # set +x 00:08:51.610 11:32:22 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.610 11:32:22 -- target/rpc.sh@26 -- # stats='{ 00:08:51.610 "tick_rate": 2300000000, 00:08:51.610 "poll_groups": [ 00:08:51.610 { 00:08:51.610 "name": "nvmf_tgt_poll_group_000", 00:08:51.610 "admin_qpairs": 0, 00:08:51.610 "io_qpairs": 0, 00:08:51.610 "current_admin_qpairs": 0, 00:08:51.610 "current_io_qpairs": 0, 00:08:51.610 "pending_bdev_io": 0, 00:08:51.610 "completed_nvme_io": 0, 00:08:51.610 "transports": [] 00:08:51.610 }, 00:08:51.610 { 00:08:51.610 "name": "nvmf_tgt_poll_group_001", 00:08:51.610 "admin_qpairs": 0, 00:08:51.610 "io_qpairs": 0, 00:08:51.610 "current_admin_qpairs": 0, 00:08:51.610 "current_io_qpairs": 0, 00:08:51.610 "pending_bdev_io": 0, 00:08:51.610 "completed_nvme_io": 0, 00:08:51.610 "transports": [] 00:08:51.610 }, 00:08:51.610 { 00:08:51.610 "name": "nvmf_tgt_poll_group_002", 00:08:51.610 "admin_qpairs": 0, 00:08:51.610 "io_qpairs": 0, 00:08:51.610 "current_admin_qpairs": 0, 00:08:51.610 "current_io_qpairs": 0, 00:08:51.610 "pending_bdev_io": 0, 00:08:51.610 "completed_nvme_io": 0, 00:08:51.610 "transports": [] 00:08:51.610 }, 00:08:51.610 { 00:08:51.610 "name": "nvmf_tgt_poll_group_003", 00:08:51.610 "admin_qpairs": 0, 00:08:51.610 "io_qpairs": 0, 00:08:51.610 "current_admin_qpairs": 0, 00:08:51.610 
"current_io_qpairs": 0, 00:08:51.610 "pending_bdev_io": 0, 00:08:51.610 "completed_nvme_io": 0, 00:08:51.610 "transports": [] 00:08:51.610 } 00:08:51.610 ] 00:08:51.610 }' 00:08:51.610 11:32:22 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:51.610 11:32:22 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:51.610 11:32:22 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:51.610 11:32:22 -- target/rpc.sh@15 -- # wc -l 00:08:51.610 11:32:22 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:51.610 11:32:22 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:51.610 11:32:22 -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:51.610 11:32:22 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:51.610 11:32:22 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.610 11:32:22 -- common/autotest_common.sh@10 -- # set +x 00:08:51.610 [2024-05-15 11:32:22.295934] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x151df10/0x1522400) succeed. 00:08:51.610 [2024-05-15 11:32:22.306448] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x151f550/0x1563a90) succeed. 00:08:51.869 11:32:22 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.869 11:32:22 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:51.869 11:32:22 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.869 11:32:22 -- common/autotest_common.sh@10 -- # set +x 00:08:51.869 11:32:22 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.869 11:32:22 -- target/rpc.sh@33 -- # stats='{ 00:08:51.869 "tick_rate": 2300000000, 00:08:51.869 "poll_groups": [ 00:08:51.869 { 00:08:51.869 "name": "nvmf_tgt_poll_group_000", 00:08:51.869 "admin_qpairs": 0, 00:08:51.869 "io_qpairs": 0, 00:08:51.869 "current_admin_qpairs": 0, 00:08:51.869 "current_io_qpairs": 0, 00:08:51.869 "pending_bdev_io": 0, 00:08:51.869 "completed_nvme_io": 0, 00:08:51.869 "transports": [ 00:08:51.869 { 00:08:51.869 "trtype": "RDMA", 00:08:51.869 "pending_data_buffer": 0, 00:08:51.869 "devices": [ 00:08:51.869 { 00:08:51.869 "name": "mlx5_0", 00:08:51.869 "polls": 16341, 00:08:51.869 "idle_polls": 16341, 00:08:51.869 "completions": 0, 00:08:51.869 "requests": 0, 00:08:51.869 "request_latency": 0, 00:08:51.869 "pending_free_request": 0, 00:08:51.869 "pending_rdma_read": 0, 00:08:51.869 "pending_rdma_write": 0, 00:08:51.869 "pending_rdma_send": 0, 00:08:51.869 "total_send_wrs": 0, 00:08:51.869 "send_doorbell_updates": 0, 00:08:51.869 "total_recv_wrs": 4096, 00:08:51.869 "recv_doorbell_updates": 1 00:08:51.869 }, 00:08:51.869 { 00:08:51.869 "name": "mlx5_1", 00:08:51.869 "polls": 16341, 00:08:51.869 "idle_polls": 16341, 00:08:51.869 "completions": 0, 00:08:51.869 "requests": 0, 00:08:51.869 "request_latency": 0, 00:08:51.869 "pending_free_request": 0, 00:08:51.869 "pending_rdma_read": 0, 00:08:51.869 "pending_rdma_write": 0, 00:08:51.869 "pending_rdma_send": 0, 00:08:51.869 "total_send_wrs": 0, 00:08:51.869 "send_doorbell_updates": 0, 00:08:51.869 "total_recv_wrs": 4096, 00:08:51.869 "recv_doorbell_updates": 1 00:08:51.869 } 00:08:51.869 ] 00:08:51.869 } 00:08:51.869 ] 00:08:51.869 }, 00:08:51.869 { 00:08:51.869 "name": "nvmf_tgt_poll_group_001", 00:08:51.869 "admin_qpairs": 0, 00:08:51.869 "io_qpairs": 0, 00:08:51.869 "current_admin_qpairs": 0, 00:08:51.869 "current_io_qpairs": 0, 00:08:51.869 "pending_bdev_io": 0, 00:08:51.869 "completed_nvme_io": 0, 00:08:51.869 "transports": [ 00:08:51.869 { 00:08:51.869 "trtype": "RDMA", 
00:08:51.869 "pending_data_buffer": 0, 00:08:51.869 "devices": [ 00:08:51.869 { 00:08:51.869 "name": "mlx5_0", 00:08:51.869 "polls": 10400, 00:08:51.869 "idle_polls": 10400, 00:08:51.869 "completions": 0, 00:08:51.869 "requests": 0, 00:08:51.869 "request_latency": 0, 00:08:51.869 "pending_free_request": 0, 00:08:51.869 "pending_rdma_read": 0, 00:08:51.869 "pending_rdma_write": 0, 00:08:51.869 "pending_rdma_send": 0, 00:08:51.869 "total_send_wrs": 0, 00:08:51.869 "send_doorbell_updates": 0, 00:08:51.869 "total_recv_wrs": 4096, 00:08:51.869 "recv_doorbell_updates": 1 00:08:51.869 }, 00:08:51.869 { 00:08:51.869 "name": "mlx5_1", 00:08:51.869 "polls": 10400, 00:08:51.869 "idle_polls": 10400, 00:08:51.869 "completions": 0, 00:08:51.869 "requests": 0, 00:08:51.869 "request_latency": 0, 00:08:51.869 "pending_free_request": 0, 00:08:51.869 "pending_rdma_read": 0, 00:08:51.869 "pending_rdma_write": 0, 00:08:51.869 "pending_rdma_send": 0, 00:08:51.869 "total_send_wrs": 0, 00:08:51.869 "send_doorbell_updates": 0, 00:08:51.869 "total_recv_wrs": 4096, 00:08:51.869 "recv_doorbell_updates": 1 00:08:51.869 } 00:08:51.869 ] 00:08:51.869 } 00:08:51.869 ] 00:08:51.869 }, 00:08:51.869 { 00:08:51.869 "name": "nvmf_tgt_poll_group_002", 00:08:51.869 "admin_qpairs": 0, 00:08:51.869 "io_qpairs": 0, 00:08:51.869 "current_admin_qpairs": 0, 00:08:51.869 "current_io_qpairs": 0, 00:08:51.869 "pending_bdev_io": 0, 00:08:51.869 "completed_nvme_io": 0, 00:08:51.869 "transports": [ 00:08:51.869 { 00:08:51.869 "trtype": "RDMA", 00:08:51.869 "pending_data_buffer": 0, 00:08:51.869 "devices": [ 00:08:51.869 { 00:08:51.869 "name": "mlx5_0", 00:08:51.869 "polls": 5656, 00:08:51.869 "idle_polls": 5656, 00:08:51.869 "completions": 0, 00:08:51.869 "requests": 0, 00:08:51.869 "request_latency": 0, 00:08:51.869 "pending_free_request": 0, 00:08:51.869 "pending_rdma_read": 0, 00:08:51.869 "pending_rdma_write": 0, 00:08:51.869 "pending_rdma_send": 0, 00:08:51.869 "total_send_wrs": 0, 00:08:51.869 "send_doorbell_updates": 0, 00:08:51.869 "total_recv_wrs": 4096, 00:08:51.869 "recv_doorbell_updates": 1 00:08:51.869 }, 00:08:51.869 { 00:08:51.869 "name": "mlx5_1", 00:08:51.869 "polls": 5656, 00:08:51.869 "idle_polls": 5656, 00:08:51.869 "completions": 0, 00:08:51.869 "requests": 0, 00:08:51.869 "request_latency": 0, 00:08:51.869 "pending_free_request": 0, 00:08:51.869 "pending_rdma_read": 0, 00:08:51.869 "pending_rdma_write": 0, 00:08:51.869 "pending_rdma_send": 0, 00:08:51.869 "total_send_wrs": 0, 00:08:51.869 "send_doorbell_updates": 0, 00:08:51.869 "total_recv_wrs": 4096, 00:08:51.869 "recv_doorbell_updates": 1 00:08:51.869 } 00:08:51.869 ] 00:08:51.869 } 00:08:51.869 ] 00:08:51.869 }, 00:08:51.869 { 00:08:51.869 "name": "nvmf_tgt_poll_group_003", 00:08:51.869 "admin_qpairs": 0, 00:08:51.869 "io_qpairs": 0, 00:08:51.869 "current_admin_qpairs": 0, 00:08:51.869 "current_io_qpairs": 0, 00:08:51.869 "pending_bdev_io": 0, 00:08:51.869 "completed_nvme_io": 0, 00:08:51.869 "transports": [ 00:08:51.870 { 00:08:51.870 "trtype": "RDMA", 00:08:51.870 "pending_data_buffer": 0, 00:08:51.870 "devices": [ 00:08:51.870 { 00:08:51.870 "name": "mlx5_0", 00:08:51.870 "polls": 871, 00:08:51.870 "idle_polls": 871, 00:08:51.870 "completions": 0, 00:08:51.870 "requests": 0, 00:08:51.870 "request_latency": 0, 00:08:51.870 "pending_free_request": 0, 00:08:51.870 "pending_rdma_read": 0, 00:08:51.870 "pending_rdma_write": 0, 00:08:51.870 "pending_rdma_send": 0, 00:08:51.870 "total_send_wrs": 0, 00:08:51.870 "send_doorbell_updates": 0, 00:08:51.870 
"total_recv_wrs": 4096, 00:08:51.870 "recv_doorbell_updates": 1 00:08:51.870 }, 00:08:51.870 { 00:08:51.870 "name": "mlx5_1", 00:08:51.870 "polls": 871, 00:08:51.870 "idle_polls": 871, 00:08:51.870 "completions": 0, 00:08:51.870 "requests": 0, 00:08:51.870 "request_latency": 0, 00:08:51.870 "pending_free_request": 0, 00:08:51.870 "pending_rdma_read": 0, 00:08:51.870 "pending_rdma_write": 0, 00:08:51.870 "pending_rdma_send": 0, 00:08:51.870 "total_send_wrs": 0, 00:08:51.870 "send_doorbell_updates": 0, 00:08:51.870 "total_recv_wrs": 4096, 00:08:51.870 "recv_doorbell_updates": 1 00:08:51.870 } 00:08:51.870 ] 00:08:51.870 } 00:08:51.870 ] 00:08:51.870 } 00:08:51.870 ] 00:08:51.870 }' 00:08:51.870 11:32:22 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:51.870 11:32:22 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:51.870 11:32:22 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:51.870 11:32:22 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:51.870 11:32:22 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:51.870 11:32:22 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:51.870 11:32:22 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:51.870 11:32:22 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:51.870 11:32:22 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:51.870 11:32:22 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:51.870 11:32:22 -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:08:51.870 11:32:22 -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:08:51.870 11:32:22 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:08:51.870 11:32:22 -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:08:51.870 11:32:22 -- target/rpc.sh@15 -- # wc -l 00:08:51.870 11:32:22 -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:08:51.870 11:32:22 -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:08:52.129 11:32:22 -- target/rpc.sh@41 -- # transport_type=RDMA 00:08:52.129 11:32:22 -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:08:52.129 11:32:22 -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:08:52.129 11:32:22 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:08:52.129 11:32:22 -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:08:52.129 11:32:22 -- target/rpc.sh@15 -- # wc -l 00:08:52.129 11:32:22 -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:08:52.129 11:32:22 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:52.129 11:32:22 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:52.129 11:32:22 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:52.129 11:32:22 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.129 11:32:22 -- common/autotest_common.sh@10 -- # set +x 00:08:52.129 Malloc1 00:08:52.129 11:32:22 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.129 11:32:22 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:52.129 11:32:22 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.129 11:32:22 -- common/autotest_common.sh@10 -- # set +x 00:08:52.129 11:32:22 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.129 11:32:22 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:52.129 11:32:22 -- common/autotest_common.sh@559 -- # xtrace_disable 
00:08:52.129 11:32:22 -- common/autotest_common.sh@10 -- # set +x 00:08:52.129 11:32:22 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.129 11:32:22 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:52.129 11:32:22 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.129 11:32:22 -- common/autotest_common.sh@10 -- # set +x 00:08:52.129 11:32:22 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.129 11:32:22 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:52.129 11:32:22 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.129 11:32:22 -- common/autotest_common.sh@10 -- # set +x 00:08:52.129 [2024-05-15 11:32:22.731986] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:52.129 [2024-05-15 11:32:22.732374] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:52.129 11:32:22 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.129 11:32:22 -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:08:52.129 11:32:22 -- common/autotest_common.sh@648 -- # local es=0 00:08:52.129 11:32:22 -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:08:52.129 11:32:22 -- common/autotest_common.sh@636 -- # local arg=nvme 00:08:52.129 11:32:22 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:52.130 11:32:22 -- common/autotest_common.sh@640 -- # type -t nvme 00:08:52.130 11:32:22 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:52.130 11:32:22 -- common/autotest_common.sh@642 -- # type -P nvme 00:08:52.130 11:32:22 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:52.130 11:32:22 -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:52.130 11:32:22 -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:52.130 11:32:22 -- common/autotest_common.sh@651 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:08:52.130 [2024-05-15 11:32:22.778187] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562' 00:08:52.130 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:52.130 could not add new controller: failed to write to nvme-fabrics device 00:08:52.130 11:32:22 -- common/autotest_common.sh@651 -- # es=1 00:08:52.130 11:32:22 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:52.130 11:32:22 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:52.130 11:32:22 -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:52.130 11:32:22 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:08:52.130 11:32:22 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.130 11:32:22 -- common/autotest_common.sh@10 -- # set +x 00:08:52.130 11:32:22 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.130 11:32:22 -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:53.066 11:32:23 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:08:53.066 11:32:23 -- common/autotest_common.sh@1194 -- # local i=0 00:08:53.066 11:32:23 -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:53.066 11:32:23 -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:53.066 11:32:23 -- common/autotest_common.sh@1201 -- # sleep 2 00:08:55.648 11:32:25 -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:55.648 11:32:25 -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:55.648 11:32:25 -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:55.648 11:32:25 -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:55.648 11:32:25 -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:55.648 11:32:25 -- common/autotest_common.sh@1204 -- # return 0 00:08:55.648 11:32:25 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:56.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.216 11:32:26 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:56.216 11:32:26 -- common/autotest_common.sh@1215 -- # local i=0 00:08:56.216 11:32:26 -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:56.216 11:32:26 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:56.216 11:32:26 -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:56.217 11:32:26 -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:56.217 11:32:26 -- common/autotest_common.sh@1227 -- # return 0 00:08:56.217 11:32:26 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:08:56.217 11:32:26 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.217 11:32:26 -- common/autotest_common.sh@10 -- # set +x 00:08:56.217 11:32:26 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.217 11:32:26 -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:56.217 11:32:26 -- common/autotest_common.sh@648 -- # local es=0 00:08:56.217 11:32:26 -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:56.217 11:32:26 -- common/autotest_common.sh@636 -- # local arg=nvme 00:08:56.217 11:32:26 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:56.217 11:32:26 -- 
common/autotest_common.sh@640 -- # type -t nvme 00:08:56.217 11:32:26 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:56.217 11:32:26 -- common/autotest_common.sh@642 -- # type -P nvme 00:08:56.217 11:32:26 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:56.217 11:32:26 -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:56.217 11:32:26 -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:56.217 11:32:26 -- common/autotest_common.sh@651 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:56.217 [2024-05-15 11:32:26.850075] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562' 00:08:56.217 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:56.217 could not add new controller: failed to write to nvme-fabrics device 00:08:56.217 11:32:26 -- common/autotest_common.sh@651 -- # es=1 00:08:56.217 11:32:26 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:56.217 11:32:26 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:56.217 11:32:26 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:56.217 11:32:26 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:08:56.217 11:32:26 -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.217 11:32:26 -- common/autotest_common.sh@10 -- # set +x 00:08:56.217 11:32:26 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.217 11:32:26 -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:57.154 11:32:27 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:08:57.154 11:32:27 -- common/autotest_common.sh@1194 -- # local i=0 00:08:57.154 11:32:27 -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:57.154 11:32:27 -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:57.154 11:32:27 -- common/autotest_common.sh@1201 -- # sleep 2 00:08:59.691 11:32:29 -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:59.691 11:32:29 -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:59.691 11:32:29 -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:59.691 11:32:29 -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:59.691 11:32:29 -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:59.691 11:32:29 -- common/autotest_common.sh@1204 -- # return 0 00:08:59.691 11:32:29 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:00.259 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.259 11:32:30 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:00.259 11:32:30 -- common/autotest_common.sh@1215 -- # local i=0 00:09:00.259 11:32:30 -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:00.259 11:32:30 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:00.259 11:32:30 -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:00.259 11:32:30 -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 
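
[editor's note] What target/rpc.sh lines 58-76 exercise above is the subsystem host ACL: with allow_any_host disabled, a connect from an unlisted host NQN is rejected by the target ("does not allow host", surfacing as an I/O error on /dev/nvme-fabrics), and it succeeds only after nvmf_subsystem_add_host registers the host, or after allow_any_host is re-enabled. A condensed sketch of the same sequence using scripts/rpc.py and nvme-cli, assuming an SPDK nvmf_tgt is already listening on 192.168.100.8:4420 over RDMA; HOSTNQN stands for the uuid-based NQN used in this run:

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562
SUBNQN=nqn.2016-06.io.spdk:cnode1

./scripts/rpc.py nvmf_subsystem_allow_any_host -d "$SUBNQN"    # enforce the ACL
nvme connect -t rdma -n "$SUBNQN" -q "$HOSTNQN" \
    -a 192.168.100.8 -s 4420 && echo "unexpected success"      # rejected: host not in ACL
./scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"
nvme connect -t rdma -n "$SUBNQN" -q "$HOSTNQN" \
    -a 192.168.100.8 -s 4420                                   # now succeeds
nvme disconnect -n "$SUBNQN"

After a successful connect the harness's waitforserial polls "lsblk -l -o NAME,SERIAL" until a device with serial SPDKISFASTANDAWESOME appears, which is why each connect in the trace is followed by the lsblk/grep loop before the test proceeds.
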
00:09:00.259 11:32:30 -- common/autotest_common.sh@1227 -- # return 0 00:09:00.259 11:32:30 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:00.259 11:32:30 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.259 11:32:30 -- common/autotest_common.sh@10 -- # set +x 00:09:00.259 11:32:30 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.259 11:32:30 -- target/rpc.sh@81 -- # seq 1 5 00:09:00.259 11:32:30 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:00.259 11:32:30 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:00.259 11:32:30 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.259 11:32:30 -- common/autotest_common.sh@10 -- # set +x 00:09:00.259 11:32:30 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.259 11:32:30 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:00.259 11:32:30 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.259 11:32:30 -- common/autotest_common.sh@10 -- # set +x 00:09:00.259 [2024-05-15 11:32:30.900469] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:00.259 11:32:30 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.259 11:32:30 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:00.259 11:32:30 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.259 11:32:30 -- common/autotest_common.sh@10 -- # set +x 00:09:00.259 11:32:30 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.259 11:32:30 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:00.259 11:32:30 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.259 11:32:30 -- common/autotest_common.sh@10 -- # set +x 00:09:00.259 11:32:30 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.259 11:32:30 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:01.197 11:32:31 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:01.197 11:32:31 -- common/autotest_common.sh@1194 -- # local i=0 00:09:01.197 11:32:31 -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:01.197 11:32:31 -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:01.197 11:32:31 -- common/autotest_common.sh@1201 -- # sleep 2 00:09:03.730 11:32:33 -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:03.730 11:32:33 -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:03.730 11:32:33 -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:03.730 11:32:33 -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:03.730 11:32:33 -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:03.730 11:32:33 -- common/autotest_common.sh@1204 -- # return 0 00:09:03.730 11:32:33 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:04.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.297 11:32:34 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:04.297 11:32:34 -- common/autotest_common.sh@1215 -- # local i=0 00:09:04.297 11:32:34 -- common/autotest_common.sh@1216 
-- # lsblk -o NAME,SERIAL 00:09:04.297 11:32:34 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:04.297 11:32:34 -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:04.297 11:32:34 -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:04.297 11:32:34 -- common/autotest_common.sh@1227 -- # return 0 00:09:04.297 11:32:34 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:04.297 11:32:34 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.297 11:32:34 -- common/autotest_common.sh@10 -- # set +x 00:09:04.297 11:32:34 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.297 11:32:34 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:04.297 11:32:34 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.297 11:32:34 -- common/autotest_common.sh@10 -- # set +x 00:09:04.297 11:32:34 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.297 11:32:34 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:04.297 11:32:34 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:04.297 11:32:34 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.298 11:32:34 -- common/autotest_common.sh@10 -- # set +x 00:09:04.298 11:32:34 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.298 11:32:34 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:04.298 11:32:34 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.298 11:32:34 -- common/autotest_common.sh@10 -- # set +x 00:09:04.298 [2024-05-15 11:32:34.942140] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:04.298 11:32:34 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.298 11:32:34 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:04.298 11:32:34 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.298 11:32:34 -- common/autotest_common.sh@10 -- # set +x 00:09:04.298 11:32:34 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.298 11:32:34 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:04.298 11:32:34 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.298 11:32:34 -- common/autotest_common.sh@10 -- # set +x 00:09:04.298 11:32:34 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.298 11:32:34 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:05.234 11:32:35 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:05.234 11:32:35 -- common/autotest_common.sh@1194 -- # local i=0 00:09:05.234 11:32:35 -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:05.234 11:32:35 -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:05.234 11:32:35 -- common/autotest_common.sh@1201 -- # sleep 2 00:09:07.771 11:32:37 -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:07.771 11:32:37 -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:07.771 11:32:37 -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:07.771 11:32:37 -- common/autotest_common.sh@1203 -- # 
nvme_devices=1 00:09:07.771 11:32:37 -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:07.771 11:32:37 -- common/autotest_common.sh@1204 -- # return 0 00:09:07.771 11:32:37 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:08.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.341 11:32:38 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:08.341 11:32:38 -- common/autotest_common.sh@1215 -- # local i=0 00:09:08.341 11:32:38 -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:08.341 11:32:38 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:08.341 11:32:38 -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:08.341 11:32:38 -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:08.341 11:32:38 -- common/autotest_common.sh@1227 -- # return 0 00:09:08.341 11:32:38 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:08.341 11:32:38 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.341 11:32:38 -- common/autotest_common.sh@10 -- # set +x 00:09:08.341 11:32:38 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.341 11:32:38 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:08.341 11:32:38 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.341 11:32:38 -- common/autotest_common.sh@10 -- # set +x 00:09:08.341 11:32:38 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.341 11:32:38 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:08.341 11:32:38 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:08.341 11:32:38 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.341 11:32:38 -- common/autotest_common.sh@10 -- # set +x 00:09:08.341 11:32:38 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.341 11:32:38 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:08.341 11:32:38 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.341 11:32:38 -- common/autotest_common.sh@10 -- # set +x 00:09:08.341 [2024-05-15 11:32:38.976379] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:08.341 11:32:38 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.341 11:32:38 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:08.341 11:32:38 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.341 11:32:38 -- common/autotest_common.sh@10 -- # set +x 00:09:08.341 11:32:38 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.341 11:32:38 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:08.341 11:32:38 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.341 11:32:38 -- common/autotest_common.sh@10 -- # set +x 00:09:08.341 11:32:38 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.341 11:32:38 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:09.279 11:32:39 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:09.279 11:32:39 -- common/autotest_common.sh@1194 -- # 
local i=0 00:09:09.279 11:32:39 -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:09.279 11:32:39 -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:09.279 11:32:39 -- common/autotest_common.sh@1201 -- # sleep 2 00:09:11.814 11:32:41 -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:11.814 11:32:41 -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:11.814 11:32:41 -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:11.814 11:32:41 -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:11.814 11:32:41 -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:11.814 11:32:41 -- common/autotest_common.sh@1204 -- # return 0 00:09:11.814 11:32:41 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:12.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.383 11:32:42 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:12.383 11:32:42 -- common/autotest_common.sh@1215 -- # local i=0 00:09:12.383 11:32:42 -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:12.383 11:32:42 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:12.383 11:32:42 -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:12.383 11:32:42 -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:12.383 11:32:42 -- common/autotest_common.sh@1227 -- # return 0 00:09:12.383 11:32:42 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:12.383 11:32:42 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.383 11:32:42 -- common/autotest_common.sh@10 -- # set +x 00:09:12.383 11:32:42 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.383 11:32:42 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:12.383 11:32:42 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.383 11:32:42 -- common/autotest_common.sh@10 -- # set +x 00:09:12.383 11:32:43 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.383 11:32:43 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:12.383 11:32:43 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:12.383 11:32:43 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.383 11:32:43 -- common/autotest_common.sh@10 -- # set +x 00:09:12.383 11:32:43 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.383 11:32:43 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:12.383 11:32:43 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.383 11:32:43 -- common/autotest_common.sh@10 -- # set +x 00:09:12.383 [2024-05-15 11:32:43.015449] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:12.383 11:32:43 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.383 11:32:43 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:12.383 11:32:43 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.383 11:32:43 -- common/autotest_common.sh@10 -- # set +x 00:09:12.383 11:32:43 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.383 11:32:43 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:12.383 11:32:43 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.383 11:32:43 -- common/autotest_common.sh@10 -- # set +x 00:09:12.383 11:32:43 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.383 11:32:43 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:13.326 11:32:43 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:13.326 11:32:43 -- common/autotest_common.sh@1194 -- # local i=0 00:09:13.326 11:32:43 -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:13.326 11:32:43 -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:13.326 11:32:43 -- common/autotest_common.sh@1201 -- # sleep 2 00:09:15.863 11:32:45 -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:15.863 11:32:46 -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:15.863 11:32:46 -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:15.863 11:32:46 -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:15.863 11:32:46 -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:15.863 11:32:46 -- common/autotest_common.sh@1204 -- # return 0 00:09:15.863 11:32:46 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:16.432 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.432 11:32:46 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:16.432 11:32:46 -- common/autotest_common.sh@1215 -- # local i=0 00:09:16.432 11:32:46 -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:16.432 11:32:46 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:16.432 11:32:47 -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:16.432 11:32:47 -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:16.432 11:32:47 -- common/autotest_common.sh@1227 -- # return 0 00:09:16.432 11:32:47 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:16.432 11:32:47 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.432 11:32:47 -- common/autotest_common.sh@10 -- # set +x 00:09:16.432 11:32:47 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.432 11:32:47 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:16.432 11:32:47 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.432 11:32:47 -- common/autotest_common.sh@10 -- # set +x 00:09:16.432 11:32:47 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.432 11:32:47 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:16.432 11:32:47 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:16.432 11:32:47 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.432 11:32:47 -- common/autotest_common.sh@10 -- # set +x 00:09:16.432 11:32:47 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.432 11:32:47 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:16.432 11:32:47 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.432 11:32:47 -- common/autotest_common.sh@10 -- # set +x 00:09:16.432 [2024-05-15 11:32:47.055935] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 
192.168.100.8 port 4420 *** 00:09:16.432 11:32:47 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.432 11:32:47 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:16.432 11:32:47 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.432 11:32:47 -- common/autotest_common.sh@10 -- # set +x 00:09:16.432 11:32:47 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.432 11:32:47 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:16.432 11:32:47 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.432 11:32:47 -- common/autotest_common.sh@10 -- # set +x 00:09:16.432 11:32:47 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.432 11:32:47 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:17.370 11:32:48 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:17.370 11:32:48 -- common/autotest_common.sh@1194 -- # local i=0 00:09:17.370 11:32:48 -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:17.370 11:32:48 -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:17.370 11:32:48 -- common/autotest_common.sh@1201 -- # sleep 2 00:09:19.376 11:32:50 -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:19.376 11:32:50 -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:19.376 11:32:50 -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:19.376 11:32:50 -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:19.376 11:32:50 -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:19.376 11:32:50 -- common/autotest_common.sh@1204 -- # return 0 00:09:19.376 11:32:50 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:20.313 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.313 11:32:51 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:20.313 11:32:51 -- common/autotest_common.sh@1215 -- # local i=0 00:09:20.313 11:32:51 -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:20.313 11:32:51 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.313 11:32:51 -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:20.313 11:32:51 -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.313 11:32:51 -- common/autotest_common.sh@1227 -- # return 0 00:09:20.313 11:32:51 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:20.313 11:32:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.313 11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.313 11:32:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.313 11:32:51 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:20.313 11:32:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.313 11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.313 11:32:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.313 11:32:51 -- target/rpc.sh@99 -- # seq 1 5 00:09:20.573 11:32:51 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:20.573 11:32:51 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:09:20.573 11:32:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.573 11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.573 11:32:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.573 11:32:51 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:20.573 11:32:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.573 11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.573 [2024-05-15 11:32:51.099278] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:20.573 11:32:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.573 11:32:51 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:20.573 11:32:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.573 11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.573 11:32:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.573 11:32:51 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:20.573 11:32:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.573 11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.573 11:32:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.573 11:32:51 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.573 11:32:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.573 11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.573 11:32:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.573 11:32:51 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:20.573 11:32:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.573 11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.573 11:32:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.573 11:32:51 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:20.573 11:32:51 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:20.573 11:32:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.573 11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.573 11:32:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.573 11:32:51 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:20.573 11:32:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.573 11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.573 [2024-05-15 11:32:51.147662] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:20.573 11:32:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.573 11:32:51 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:20.573 11:32:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.573 11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.573 11:32:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.573 11:32:51 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:20.573 11:32:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.573 11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.574 11:32:51 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.574 11:32:51 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.574 11:32:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.574 11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.574 11:32:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.574 11:32:51 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:20.574 11:32:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.574 11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.574 11:32:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.574 11:32:51 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:20.574 11:32:51 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:20.574 11:32:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.574 11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.574 11:32:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.574 11:32:51 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:20.574 11:32:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.574 11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.574 [2024-05-15 11:32:51.195855] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:20.574 11:32:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.574 11:32:51 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:20.574 11:32:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.574 11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.574 11:32:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.574 11:32:51 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:20.574 11:32:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.574 11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.574 11:32:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.574 11:32:51 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.574 11:32:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.574 11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.574 11:32:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.574 11:32:51 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:20.574 11:32:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.574 11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.574 11:32:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.574 11:32:51 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:20.574 11:32:51 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:20.574 11:32:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.574 11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.574 11:32:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.574 11:32:51 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:20.574 11:32:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.574 
11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.574 [2024-05-15 11:32:51.248036] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:20.574 11:32:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.574 11:32:51 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:20.574 11:32:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.574 11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.574 11:32:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.574 11:32:51 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:20.574 11:32:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.574 11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.574 11:32:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.574 11:32:51 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.574 11:32:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.574 11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.574 11:32:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.574 11:32:51 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:20.574 11:32:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.574 11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.574 11:32:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.574 11:32:51 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:20.574 11:32:51 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:20.574 11:32:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.574 11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.574 11:32:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.574 11:32:51 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:20.574 11:32:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.574 11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.574 [2024-05-15 11:32:51.296202] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:20.574 11:32:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.574 11:32:51 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:20.574 11:32:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.574 11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.574 11:32:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.574 11:32:51 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:20.574 11:32:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.574 11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.574 11:32:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.574 11:32:51 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.574 11:32:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.574 11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.574 11:32:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.574 11:32:51 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:09:20.574 11:32:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.574 11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.574 11:32:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.574 11:32:51 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:20.833 11:32:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.833 11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.833 11:32:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.833 11:32:51 -- target/rpc.sh@110 -- # stats='{ 00:09:20.833 "tick_rate": 2300000000, 00:09:20.833 "poll_groups": [ 00:09:20.833 { 00:09:20.833 "name": "nvmf_tgt_poll_group_000", 00:09:20.833 "admin_qpairs": 2, 00:09:20.833 "io_qpairs": 27, 00:09:20.833 "current_admin_qpairs": 0, 00:09:20.833 "current_io_qpairs": 0, 00:09:20.833 "pending_bdev_io": 0, 00:09:20.833 "completed_nvme_io": 129, 00:09:20.833 "transports": [ 00:09:20.833 { 00:09:20.833 "trtype": "RDMA", 00:09:20.833 "pending_data_buffer": 0, 00:09:20.833 "devices": [ 00:09:20.833 { 00:09:20.833 "name": "mlx5_0", 00:09:20.833 "polls": 3428340, 00:09:20.833 "idle_polls": 3428007, 00:09:20.834 "completions": 371, 00:09:20.834 "requests": 185, 00:09:20.834 "request_latency": 35588790, 00:09:20.834 "pending_free_request": 0, 00:09:20.834 "pending_rdma_read": 0, 00:09:20.834 "pending_rdma_write": 0, 00:09:20.834 "pending_rdma_send": 0, 00:09:20.834 "total_send_wrs": 313, 00:09:20.834 "send_doorbell_updates": 163, 00:09:20.834 "total_recv_wrs": 4281, 00:09:20.834 "recv_doorbell_updates": 163 00:09:20.834 }, 00:09:20.834 { 00:09:20.834 "name": "mlx5_1", 00:09:20.834 "polls": 3428340, 00:09:20.834 "idle_polls": 3428340, 00:09:20.834 "completions": 0, 00:09:20.834 "requests": 0, 00:09:20.834 "request_latency": 0, 00:09:20.834 "pending_free_request": 0, 00:09:20.834 "pending_rdma_read": 0, 00:09:20.834 "pending_rdma_write": 0, 00:09:20.834 "pending_rdma_send": 0, 00:09:20.834 "total_send_wrs": 0, 00:09:20.834 "send_doorbell_updates": 0, 00:09:20.834 "total_recv_wrs": 4096, 00:09:20.834 "recv_doorbell_updates": 1 00:09:20.834 } 00:09:20.834 ] 00:09:20.834 } 00:09:20.834 ] 00:09:20.834 }, 00:09:20.834 { 00:09:20.834 "name": "nvmf_tgt_poll_group_001", 00:09:20.834 "admin_qpairs": 2, 00:09:20.834 "io_qpairs": 26, 00:09:20.834 "current_admin_qpairs": 0, 00:09:20.834 "current_io_qpairs": 0, 00:09:20.834 "pending_bdev_io": 0, 00:09:20.834 "completed_nvme_io": 125, 00:09:20.834 "transports": [ 00:09:20.834 { 00:09:20.834 "trtype": "RDMA", 00:09:20.834 "pending_data_buffer": 0, 00:09:20.834 "devices": [ 00:09:20.834 { 00:09:20.834 "name": "mlx5_0", 00:09:20.834 "polls": 3424696, 00:09:20.834 "idle_polls": 3424373, 00:09:20.834 "completions": 362, 00:09:20.834 "requests": 181, 00:09:20.834 "request_latency": 34104944, 00:09:20.834 "pending_free_request": 0, 00:09:20.834 "pending_rdma_read": 0, 00:09:20.834 "pending_rdma_write": 0, 00:09:20.834 "pending_rdma_send": 0, 00:09:20.834 "total_send_wrs": 306, 00:09:20.834 "send_doorbell_updates": 157, 00:09:20.834 "total_recv_wrs": 4277, 00:09:20.834 "recv_doorbell_updates": 158 00:09:20.834 }, 00:09:20.834 { 00:09:20.834 "name": "mlx5_1", 00:09:20.834 "polls": 3424696, 00:09:20.834 "idle_polls": 3424696, 00:09:20.834 "completions": 0, 00:09:20.834 "requests": 0, 00:09:20.834 "request_latency": 0, 00:09:20.834 "pending_free_request": 0, 00:09:20.834 "pending_rdma_read": 0, 00:09:20.834 "pending_rdma_write": 0, 00:09:20.834 "pending_rdma_send": 0, 00:09:20.834 "total_send_wrs": 0, 
00:09:20.834 "send_doorbell_updates": 0, 00:09:20.834 "total_recv_wrs": 4096, 00:09:20.834 "recv_doorbell_updates": 1 00:09:20.834 } 00:09:20.834 ] 00:09:20.834 } 00:09:20.834 ] 00:09:20.834 }, 00:09:20.834 { 00:09:20.834 "name": "nvmf_tgt_poll_group_002", 00:09:20.834 "admin_qpairs": 1, 00:09:20.834 "io_qpairs": 26, 00:09:20.834 "current_admin_qpairs": 0, 00:09:20.834 "current_io_qpairs": 0, 00:09:20.834 "pending_bdev_io": 0, 00:09:20.834 "completed_nvme_io": 75, 00:09:20.834 "transports": [ 00:09:20.834 { 00:09:20.834 "trtype": "RDMA", 00:09:20.834 "pending_data_buffer": 0, 00:09:20.834 "devices": [ 00:09:20.834 { 00:09:20.834 "name": "mlx5_0", 00:09:20.834 "polls": 3514460, 00:09:20.834 "idle_polls": 3514272, 00:09:20.834 "completions": 207, 00:09:20.834 "requests": 103, 00:09:20.834 "request_latency": 18777672, 00:09:20.834 "pending_free_request": 0, 00:09:20.834 "pending_rdma_read": 0, 00:09:20.834 "pending_rdma_write": 0, 00:09:20.834 "pending_rdma_send": 0, 00:09:20.834 "total_send_wrs": 166, 00:09:20.834 "send_doorbell_updates": 93, 00:09:20.834 "total_recv_wrs": 4199, 00:09:20.834 "recv_doorbell_updates": 93 00:09:20.834 }, 00:09:20.834 { 00:09:20.834 "name": "mlx5_1", 00:09:20.834 "polls": 3514460, 00:09:20.834 "idle_polls": 3514460, 00:09:20.834 "completions": 0, 00:09:20.834 "requests": 0, 00:09:20.834 "request_latency": 0, 00:09:20.834 "pending_free_request": 0, 00:09:20.834 "pending_rdma_read": 0, 00:09:20.834 "pending_rdma_write": 0, 00:09:20.834 "pending_rdma_send": 0, 00:09:20.834 "total_send_wrs": 0, 00:09:20.834 "send_doorbell_updates": 0, 00:09:20.834 "total_recv_wrs": 4096, 00:09:20.834 "recv_doorbell_updates": 1 00:09:20.834 } 00:09:20.834 ] 00:09:20.834 } 00:09:20.834 ] 00:09:20.834 }, 00:09:20.834 { 00:09:20.834 "name": "nvmf_tgt_poll_group_003", 00:09:20.834 "admin_qpairs": 2, 00:09:20.834 "io_qpairs": 26, 00:09:20.834 "current_admin_qpairs": 0, 00:09:20.834 "current_io_qpairs": 0, 00:09:20.834 "pending_bdev_io": 0, 00:09:20.834 "completed_nvme_io": 126, 00:09:20.834 "transports": [ 00:09:20.834 { 00:09:20.834 "trtype": "RDMA", 00:09:20.834 "pending_data_buffer": 0, 00:09:20.834 "devices": [ 00:09:20.834 { 00:09:20.834 "name": "mlx5_0", 00:09:20.834 "polls": 2725545, 00:09:20.834 "idle_polls": 2725225, 00:09:20.834 "completions": 364, 00:09:20.834 "requests": 182, 00:09:20.834 "request_latency": 36432448, 00:09:20.834 "pending_free_request": 0, 00:09:20.834 "pending_rdma_read": 0, 00:09:20.834 "pending_rdma_write": 0, 00:09:20.834 "pending_rdma_send": 0, 00:09:20.834 "total_send_wrs": 308, 00:09:20.834 "send_doorbell_updates": 157, 00:09:20.834 "total_recv_wrs": 4278, 00:09:20.834 "recv_doorbell_updates": 158 00:09:20.834 }, 00:09:20.834 { 00:09:20.834 "name": "mlx5_1", 00:09:20.834 "polls": 2725545, 00:09:20.834 "idle_polls": 2725545, 00:09:20.834 "completions": 0, 00:09:20.834 "requests": 0, 00:09:20.834 "request_latency": 0, 00:09:20.834 "pending_free_request": 0, 00:09:20.834 "pending_rdma_read": 0, 00:09:20.834 "pending_rdma_write": 0, 00:09:20.834 "pending_rdma_send": 0, 00:09:20.834 "total_send_wrs": 0, 00:09:20.834 "send_doorbell_updates": 0, 00:09:20.834 "total_recv_wrs": 4096, 00:09:20.834 "recv_doorbell_updates": 1 00:09:20.834 } 00:09:20.834 ] 00:09:20.834 } 00:09:20.834 ] 00:09:20.834 } 00:09:20.834 ] 00:09:20.834 }' 00:09:20.834 11:32:51 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:20.834 11:32:51 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:20.834 11:32:51 -- target/rpc.sh@20 -- # jq 
'.poll_groups[].admin_qpairs' 00:09:20.834 11:32:51 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:20.834 11:32:51 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:20.834 11:32:51 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:20.834 11:32:51 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:20.834 11:32:51 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:20.834 11:32:51 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:20.834 11:32:51 -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:09:20.834 11:32:51 -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:09:20.834 11:32:51 -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:09:20.834 11:32:51 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:09:20.834 11:32:51 -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:09:20.834 11:32:51 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:20.834 11:32:51 -- target/rpc.sh@117 -- # (( 1304 > 0 )) 00:09:20.834 11:32:51 -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:09:20.834 11:32:51 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:09:20.834 11:32:51 -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:09:20.834 11:32:51 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:20.834 11:32:51 -- target/rpc.sh@118 -- # (( 124903854 > 0 )) 00:09:20.834 11:32:51 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:20.834 11:32:51 -- target/rpc.sh@123 -- # nvmftestfini 00:09:20.834 11:32:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:20.834 11:32:51 -- nvmf/common.sh@117 -- # sync 00:09:20.834 11:32:51 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:20.834 11:32:51 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:20.834 11:32:51 -- nvmf/common.sh@120 -- # set +e 00:09:20.834 11:32:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:20.834 11:32:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:20.834 rmmod nvme_rdma 00:09:20.834 rmmod nvme_fabrics 00:09:20.834 11:32:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:21.094 11:32:51 -- nvmf/common.sh@124 -- # set -e 00:09:21.094 11:32:51 -- nvmf/common.sh@125 -- # return 0 00:09:21.094 11:32:51 -- nvmf/common.sh@478 -- # '[' -n 2935565 ']' 00:09:21.094 11:32:51 -- nvmf/common.sh@479 -- # killprocess 2935565 00:09:21.094 11:32:51 -- common/autotest_common.sh@946 -- # '[' -z 2935565 ']' 00:09:21.094 11:32:51 -- common/autotest_common.sh@950 -- # kill -0 2935565 00:09:21.094 11:32:51 -- common/autotest_common.sh@951 -- # uname 00:09:21.094 11:32:51 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:21.094 11:32:51 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2935565 00:09:21.094 11:32:51 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:21.094 11:32:51 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:21.094 11:32:51 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2935565' 00:09:21.094 killing process with pid 2935565 00:09:21.094 11:32:51 -- common/autotest_common.sh@965 -- # kill 2935565 00:09:21.094 [2024-05-15 11:32:51.653582] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:21.094 11:32:51 -- 
common/autotest_common.sh@970 -- # wait 2935565 00:09:21.094 [2024-05-15 11:32:51.740259] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:09:21.353 11:32:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:21.353 11:32:51 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:09:21.353 00:09:21.353 real 0m36.335s 00:09:21.353 user 2m3.235s 00:09:21.353 sys 0m5.986s 00:09:21.353 11:32:51 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:21.353 11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:09:21.353 ************************************ 00:09:21.353 END TEST nvmf_rpc 00:09:21.353 ************************************ 00:09:21.353 11:32:52 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:09:21.353 11:32:52 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:21.353 11:32:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:21.353 11:32:52 -- common/autotest_common.sh@10 -- # set +x 00:09:21.353 ************************************ 00:09:21.353 START TEST nvmf_invalid 00:09:21.353 ************************************ 00:09:21.353 11:32:52 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:09:21.612 * Looking for test storage... 00:09:21.612 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:21.612 11:32:52 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:21.612 11:32:52 -- nvmf/common.sh@7 -- # uname -s 00:09:21.612 11:32:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:21.612 11:32:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:21.612 11:32:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:21.612 11:32:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:21.612 11:32:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:21.612 11:32:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:21.612 11:32:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:21.612 11:32:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:21.612 11:32:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:21.612 11:32:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:21.612 11:32:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:09:21.612 11:32:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:09:21.612 11:32:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:21.612 11:32:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:21.612 11:32:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:21.612 11:32:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:21.612 11:32:52 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:21.612 11:32:52 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.612 11:32:52 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.612 11:32:52 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.612 11:32:52 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.612 11:32:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.612 11:32:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.612 11:32:52 -- paths/export.sh@5 -- # export PATH 00:09:21.612 11:32:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.612 11:32:52 -- nvmf/common.sh@47 -- # : 0 00:09:21.612 11:32:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:21.612 11:32:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:21.612 11:32:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:21.612 11:32:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:21.612 11:32:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:21.612 11:32:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:21.612 11:32:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:21.612 11:32:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:21.612 11:32:52 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:21.612 11:32:52 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:21.612 11:32:52 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:21.612 11:32:52 -- target/invalid.sh@14 -- # target=foobar 00:09:21.612 11:32:52 -- target/invalid.sh@16 -- # RANDOM=0 00:09:21.612 11:32:52 -- target/invalid.sh@34 -- # nvmftestinit 00:09:21.612 11:32:52 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:09:21.612 11:32:52 -- 
nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:21.612 11:32:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:21.612 11:32:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:21.612 11:32:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:21.612 11:32:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.612 11:32:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:21.612 11:32:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.612 11:32:52 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:21.612 11:32:52 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:21.612 11:32:52 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:21.612 11:32:52 -- common/autotest_common.sh@10 -- # set +x 00:09:28.184 11:32:58 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:28.185 11:32:58 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:28.185 11:32:58 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:28.185 11:32:58 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:28.185 11:32:58 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:28.185 11:32:58 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:28.185 11:32:58 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:28.185 11:32:58 -- nvmf/common.sh@295 -- # net_devs=() 00:09:28.185 11:32:58 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:28.185 11:32:58 -- nvmf/common.sh@296 -- # e810=() 00:09:28.185 11:32:58 -- nvmf/common.sh@296 -- # local -ga e810 00:09:28.185 11:32:58 -- nvmf/common.sh@297 -- # x722=() 00:09:28.185 11:32:58 -- nvmf/common.sh@297 -- # local -ga x722 00:09:28.185 11:32:58 -- nvmf/common.sh@298 -- # mlx=() 00:09:28.185 11:32:58 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:28.185 11:32:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:28.185 11:32:58 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:28.185 11:32:58 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:28.185 11:32:58 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:28.185 11:32:58 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:28.185 11:32:58 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:28.185 11:32:58 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:28.185 11:32:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:28.185 11:32:58 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:28.185 11:32:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:28.185 11:32:58 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:28.185 11:32:58 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:28.185 11:32:58 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:28.185 11:32:58 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:28.185 11:32:58 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:28.185 11:32:58 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:28.185 11:32:58 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:28.185 11:32:58 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:28.185 11:32:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:28.185 11:32:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:09:28.185 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:09:28.185 11:32:58 -- nvmf/common.sh@342 -- # [[ 
mlx5_core == unknown ]] 00:09:28.185 11:32:58 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:28.185 11:32:58 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:28.185 11:32:58 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:28.185 11:32:58 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:28.185 11:32:58 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:28.185 11:32:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:28.185 11:32:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:09:28.185 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:09:28.185 11:32:58 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:28.185 11:32:58 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:28.185 11:32:58 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:28.185 11:32:58 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:28.185 11:32:58 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:28.185 11:32:58 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:28.185 11:32:58 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:28.185 11:32:58 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:28.185 11:32:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:28.185 11:32:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:28.185 11:32:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:28.185 11:32:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:28.185 11:32:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:09:28.185 Found net devices under 0000:18:00.0: mlx_0_0 00:09:28.185 11:32:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:28.185 11:32:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:28.185 11:32:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:28.185 11:32:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:28.185 11:32:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:28.185 11:32:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:09:28.185 Found net devices under 0000:18:00.1: mlx_0_1 00:09:28.185 11:32:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:28.185 11:32:58 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:28.185 11:32:58 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:28.185 11:32:58 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:28.185 11:32:58 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:09:28.185 11:32:58 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:09:28.185 11:32:58 -- nvmf/common.sh@409 -- # rdma_device_init 00:09:28.185 11:32:58 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:09:28.185 11:32:58 -- nvmf/common.sh@58 -- # uname 00:09:28.185 11:32:58 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:28.185 11:32:58 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:28.185 11:32:58 -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:28.185 11:32:58 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:28.185 11:32:58 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:28.185 11:32:58 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:28.185 11:32:58 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:28.185 11:32:58 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:28.185 11:32:58 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:09:28.185 11:32:58 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:28.185 
11:32:58 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:28.185 11:32:58 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:28.185 11:32:58 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:28.185 11:32:58 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:28.185 11:32:58 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:28.185 11:32:58 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:28.185 11:32:58 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:28.185 11:32:58 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:28.185 11:32:58 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:28.185 11:32:58 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:28.185 11:32:58 -- nvmf/common.sh@105 -- # continue 2 00:09:28.185 11:32:58 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:28.185 11:32:58 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:28.185 11:32:58 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:28.185 11:32:58 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:28.185 11:32:58 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:28.185 11:32:58 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:28.185 11:32:58 -- nvmf/common.sh@105 -- # continue 2 00:09:28.185 11:32:58 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:28.185 11:32:58 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:28.185 11:32:58 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:28.185 11:32:58 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:28.185 11:32:58 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:28.185 11:32:58 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:28.185 11:32:58 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:28.185 11:32:58 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:28.185 11:32:58 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:28.185 28: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:28.185 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:09:28.185 altname enp24s0f0np0 00:09:28.185 altname ens785f0np0 00:09:28.185 inet 192.168.100.8/24 scope global mlx_0_0 00:09:28.185 valid_lft forever preferred_lft forever 00:09:28.185 11:32:58 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:28.185 11:32:58 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:28.185 11:32:58 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:28.185 11:32:58 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:28.185 11:32:58 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:28.185 11:32:58 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:28.185 11:32:58 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:28.185 11:32:58 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:28.185 11:32:58 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:28.185 29: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:28.185 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:09:28.185 altname enp24s0f1np1 00:09:28.185 altname ens785f1np1 00:09:28.185 inet 192.168.100.9/24 scope global mlx_0_1 00:09:28.185 valid_lft forever preferred_lft forever 00:09:28.185 11:32:58 -- nvmf/common.sh@411 -- # return 0 00:09:28.185 11:32:58 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:28.185 11:32:58 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:28.185 11:32:58 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 
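The allocate_nic_ips trace above reduces to one small pipeline per RDMA interface: ip -o -4 addr show prints the interface's IPv4 as addr/prefix in the fourth column, awk selects that column, and cut drops the prefix length. A minimal standalone sketch of that get_ip_address step, reconstructed from the trace (mlx_0_0 is just this run's netdev name; substitute any RDMA-capable interface):

  # Sketch of the get_ip_address helper traced above: print the bare IPv4
  # address assigned to a given interface, or nothing if it has none.
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # prints 192.168.100.8 in this run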
00:09:28.185 11:32:58 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:09:28.186 11:32:58 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:28.186 11:32:58 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:28.186 11:32:58 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:28.186 11:32:58 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:28.186 11:32:58 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:28.186 11:32:58 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:28.186 11:32:58 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:28.186 11:32:58 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:28.186 11:32:58 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:28.186 11:32:58 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:28.186 11:32:58 -- nvmf/common.sh@105 -- # continue 2 00:09:28.186 11:32:58 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:28.186 11:32:58 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:28.186 11:32:58 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:28.186 11:32:58 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:28.186 11:32:58 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:28.186 11:32:58 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:28.186 11:32:58 -- nvmf/common.sh@105 -- # continue 2 00:09:28.186 11:32:58 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:28.186 11:32:58 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:28.186 11:32:58 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:28.186 11:32:58 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:28.186 11:32:58 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:28.186 11:32:58 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:28.186 11:32:58 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:28.186 11:32:58 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:28.186 11:32:58 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:28.186 11:32:58 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:28.186 11:32:58 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:28.186 11:32:58 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:28.186 11:32:58 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:09:28.186 192.168.100.9' 00:09:28.186 11:32:58 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:09:28.186 192.168.100.9' 00:09:28.186 11:32:58 -- nvmf/common.sh@446 -- # head -n 1 00:09:28.186 11:32:58 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:28.186 11:32:58 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:09:28.186 192.168.100.9' 00:09:28.186 11:32:58 -- nvmf/common.sh@447 -- # tail -n +2 00:09:28.186 11:32:58 -- nvmf/common.sh@447 -- # head -n 1 00:09:28.186 11:32:58 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:28.186 11:32:58 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:09:28.186 11:32:58 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:28.186 11:32:58 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:09:28.186 11:32:58 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:09:28.186 11:32:58 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:09:28.186 11:32:58 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:28.186 11:32:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:28.186 11:32:58 -- common/autotest_common.sh@720 -- # 
xtrace_disable 00:09:28.186 11:32:58 -- common/autotest_common.sh@10 -- # set +x 00:09:28.186 11:32:58 -- nvmf/common.sh@470 -- # nvmfpid=2942665 00:09:28.186 11:32:58 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:28.186 11:32:58 -- nvmf/common.sh@471 -- # waitforlisten 2942665 00:09:28.186 11:32:58 -- common/autotest_common.sh@827 -- # '[' -z 2942665 ']' 00:09:28.186 11:32:58 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.186 11:32:58 -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:28.186 11:32:58 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.186 11:32:58 -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:28.186 11:32:58 -- common/autotest_common.sh@10 -- # set +x 00:09:28.186 [2024-05-15 11:32:58.362005] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:09:28.186 [2024-05-15 11:32:58.362073] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:28.186 EAL: No free 2048 kB hugepages reported on node 1 00:09:28.186 [2024-05-15 11:32:58.439337] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:28.186 [2024-05-15 11:32:58.523713] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:28.186 [2024-05-15 11:32:58.523760] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:28.186 [2024-05-15 11:32:58.523770] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:28.186 [2024-05-15 11:32:58.523778] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:28.186 [2024-05-15 11:32:58.523785] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
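With the target up and listening, every negative test that follows uses the same three-step idiom: issue an RPC with exactly one deliberately bad argument, capture the JSON-RPC error response verbatim, and glob-match the expected message out of it. A condensed sketch of that idiom using the first case below (the rpc.py path and the cnode29818 NQN are taken from this run; the exact capture logic inside invalid.sh may differ):

  # Sketch of the negative-test idiom used throughout invalid.sh: run the
  # RPC with a bad argument (-t names a nonexistent target here), keep the
  # error output, and assert on the JSON-RPC message.
  rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  out=$($rpc_py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode29818 2>&1) || true
  [[ "$out" == *"Unable to find target"* ]]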
00:09:28.186 [2024-05-15 11:32:58.523838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.186 [2024-05-15 11:32:58.523939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:28.186 [2024-05-15 11:32:58.524015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:28.186 [2024-05-15 11:32:58.524017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.445 11:32:59 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:28.445 11:32:59 -- common/autotest_common.sh@860 -- # return 0 00:09:28.445 11:32:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:28.445 11:32:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:28.445 11:32:59 -- common/autotest_common.sh@10 -- # set +x 00:09:28.704 11:32:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:28.704 11:32:59 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:28.704 11:32:59 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode29818 00:09:28.704 [2024-05-15 11:32:59.390455] nvmf_rpc.c: 391:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:28.704 11:32:59 -- target/invalid.sh@40 -- # out='request: 00:09:28.704 { 00:09:28.704 "nqn": "nqn.2016-06.io.spdk:cnode29818", 00:09:28.704 "tgt_name": "foobar", 00:09:28.704 "method": "nvmf_create_subsystem", 00:09:28.704 "req_id": 1 00:09:28.704 } 00:09:28.704 Got JSON-RPC error response 00:09:28.704 response: 00:09:28.704 { 00:09:28.704 "code": -32603, 00:09:28.704 "message": "Unable to find target foobar" 00:09:28.704 }' 00:09:28.704 11:32:59 -- target/invalid.sh@41 -- # [[ request: 00:09:28.704 { 00:09:28.704 "nqn": "nqn.2016-06.io.spdk:cnode29818", 00:09:28.704 "tgt_name": "foobar", 00:09:28.704 "method": "nvmf_create_subsystem", 00:09:28.704 "req_id": 1 00:09:28.704 } 00:09:28.704 Got JSON-RPC error response 00:09:28.704 response: 00:09:28.704 { 00:09:28.704 "code": -32603, 00:09:28.704 "message": "Unable to find target foobar" 00:09:28.704 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:28.704 11:32:59 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:28.704 11:32:59 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode13867 00:09:28.963 [2024-05-15 11:32:59.587164] nvmf_rpc.c: 408:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13867: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:28.963 11:32:59 -- target/invalid.sh@45 -- # out='request: 00:09:28.963 { 00:09:28.963 "nqn": "nqn.2016-06.io.spdk:cnode13867", 00:09:28.963 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:28.963 "method": "nvmf_create_subsystem", 00:09:28.963 "req_id": 1 00:09:28.963 } 00:09:28.963 Got JSON-RPC error response 00:09:28.963 response: 00:09:28.963 { 00:09:28.963 "code": -32602, 00:09:28.963 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:28.963 }' 00:09:28.963 11:32:59 -- target/invalid.sh@46 -- # [[ request: 00:09:28.963 { 00:09:28.963 "nqn": "nqn.2016-06.io.spdk:cnode13867", 00:09:28.963 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:28.963 "method": "nvmf_create_subsystem", 00:09:28.963 "req_id": 1 00:09:28.963 } 00:09:28.963 Got JSON-RPC error response 00:09:28.963 response: 00:09:28.963 { 00:09:28.963 
"code": -32602, 00:09:28.963 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:28.963 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:28.963 11:32:59 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:28.963 11:32:59 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode27809 00:09:29.222 [2024-05-15 11:32:59.771716] nvmf_rpc.c: 417:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27809: invalid model number 'SPDK_Controller' 00:09:29.222 11:32:59 -- target/invalid.sh@50 -- # out='request: 00:09:29.222 { 00:09:29.222 "nqn": "nqn.2016-06.io.spdk:cnode27809", 00:09:29.222 "model_number": "SPDK_Controller\u001f", 00:09:29.222 "method": "nvmf_create_subsystem", 00:09:29.222 "req_id": 1 00:09:29.222 } 00:09:29.222 Got JSON-RPC error response 00:09:29.222 response: 00:09:29.222 { 00:09:29.222 "code": -32602, 00:09:29.222 "message": "Invalid MN SPDK_Controller\u001f" 00:09:29.222 }' 00:09:29.223 11:32:59 -- target/invalid.sh@51 -- # [[ request: 00:09:29.223 { 00:09:29.223 "nqn": "nqn.2016-06.io.spdk:cnode27809", 00:09:29.223 "model_number": "SPDK_Controller\u001f", 00:09:29.223 "method": "nvmf_create_subsystem", 00:09:29.223 "req_id": 1 00:09:29.223 } 00:09:29.223 Got JSON-RPC error response 00:09:29.223 response: 00:09:29.223 { 00:09:29.223 "code": -32602, 00:09:29.223 "message": "Invalid MN SPDK_Controller\u001f" 00:09:29.223 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:29.223 11:32:59 -- target/invalid.sh@54 -- # gen_random_s 21 00:09:29.223 11:32:59 -- target/invalid.sh@19 -- # local length=21 ll 00:09:29.223 11:32:59 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:29.223 11:32:59 -- target/invalid.sh@21 -- # local chars 00:09:29.223 11:32:59 -- target/invalid.sh@22 -- # local string 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # printf %x 59 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # string+=';' 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # printf %x 47 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # string+=/ 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # printf %x 93 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # string+=']' 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # printf %x 57 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # echo 
-e '\x39' 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # string+=9 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # printf %x 83 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # echo -e '\x53' 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # string+=S 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # printf %x 91 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # string+='[' 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # printf %x 84 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # echo -e '\x54' 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # string+=T 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # printf %x 123 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # string+='{' 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # printf %x 86 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # string+=V 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # printf %x 122 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # string+=z 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # printf %x 74 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # string+=J 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # printf %x 46 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # string+=. 
00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # printf %x 90 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # string+=Z 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # printf %x 106 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # string+=j 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # printf %x 56 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # echo -e '\x38' 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # string+=8 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # printf %x 123 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # string+='{' 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # printf %x 87 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # echo -e '\x57' 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # string+=W 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # printf %x 66 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # echo -e '\x42' 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # string+=B 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # printf %x 107 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # string+=k 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # printf %x 47 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # string+=/ 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # printf %x 83 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # echo -e '\x53' 00:09:29.223 11:32:59 -- target/invalid.sh@25 -- # string+=S 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.223 11:32:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.223 11:32:59 -- target/invalid.sh@28 -- # [[ ; == \- ]] 00:09:29.223 11:32:59 -- target/invalid.sh@31 -- # echo ';/]9S[T{VzJ.Zj8{WBk/S' 00:09:29.223 11:32:59 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ';/]9S[T{VzJ.Zj8{WBk/S' nqn.2016-06.io.spdk:cnode30130 00:09:29.482 [2024-05-15 11:33:00.120932] nvmf_rpc.c: 408:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30130: invalid serial number ';/]9S[T{VzJ.Zj8{WBk/S' 
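The long printf/echo run above is gen_random_s expanding under xtrace: for each requested position it picks an entry from a table of ASCII codes 32 through 127, converts the code to hex with printf %x, decodes it with echo -e, and appends the resulting character. A compact sketch of the same builder, reconstructed from the trace rather than copied from the script (with RANDOM pinned to 0 earlier in invalid.sh, the generated sequence is reproducible across runs):

  # Sketch of gen_random_s as traced above: build a string of $1 characters
  # drawn from ASCII codes 32-127, one printf/echo round per position.
  gen_random_s() {
      local length=$1 ll string=
      for ((ll = 0; ll < length; ll++)); do
          # pick a code point in 32..127, render it as \xNN, decode, append
          string+=$(echo -e "\x$(printf %x $((32 + RANDOM % 96)))")
      done
      echo "$string"
  }
  gen_random_s 21   # the run above produced ';/]9S[T{VzJ.Zj8{WBk/S'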
00:09:29.482 11:33:00 -- target/invalid.sh@54 -- # out='request: 00:09:29.482 { 00:09:29.482 "nqn": "nqn.2016-06.io.spdk:cnode30130", 00:09:29.482 "serial_number": ";/]9S[T{VzJ.Zj8{WBk/S", 00:09:29.482 "method": "nvmf_create_subsystem", 00:09:29.482 "req_id": 1 00:09:29.482 } 00:09:29.482 Got JSON-RPC error response 00:09:29.482 response: 00:09:29.482 { 00:09:29.482 "code": -32602, 00:09:29.482 "message": "Invalid SN ;/]9S[T{VzJ.Zj8{WBk/S" 00:09:29.482 }' 00:09:29.482 11:33:00 -- target/invalid.sh@55 -- # [[ request: 00:09:29.482 { 00:09:29.482 "nqn": "nqn.2016-06.io.spdk:cnode30130", 00:09:29.482 "serial_number": ";/]9S[T{VzJ.Zj8{WBk/S", 00:09:29.482 "method": "nvmf_create_subsystem", 00:09:29.482 "req_id": 1 00:09:29.482 } 00:09:29.482 Got JSON-RPC error response 00:09:29.482 response: 00:09:29.482 { 00:09:29.482 "code": -32602, 00:09:29.482 "message": "Invalid SN ;/]9S[T{VzJ.Zj8{WBk/S" 00:09:29.482 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:29.482 11:33:00 -- target/invalid.sh@58 -- # gen_random_s 41 00:09:29.482 11:33:00 -- target/invalid.sh@19 -- # local length=41 ll 00:09:29.482 11:33:00 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:29.482 11:33:00 -- target/invalid.sh@21 -- # local chars 00:09:29.482 11:33:00 -- target/invalid.sh@22 -- # local string 00:09:29.482 11:33:00 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:29.482 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.482 11:33:00 -- target/invalid.sh@25 -- # printf %x 88 00:09:29.482 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x58' 00:09:29.482 11:33:00 -- target/invalid.sh@25 -- # string+=X 00:09:29.482 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.482 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.482 11:33:00 -- target/invalid.sh@25 -- # printf %x 108 00:09:29.482 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:09:29.482 11:33:00 -- target/invalid.sh@25 -- # string+=l 00:09:29.482 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.482 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.482 11:33:00 -- target/invalid.sh@25 -- # printf %x 99 00:09:29.482 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x63' 00:09:29.482 11:33:00 -- target/invalid.sh@25 -- # string+=c 00:09:29.482 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.482 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.482 11:33:00 -- target/invalid.sh@25 -- # printf %x 60 00:09:29.482 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:09:29.482 11:33:00 -- target/invalid.sh@25 -- # string+='<' 00:09:29.482 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.482 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.482 11:33:00 -- target/invalid.sh@25 -- # printf %x 60 00:09:29.482 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:09:29.482 11:33:00 -- target/invalid.sh@25 -- # string+='<' 00:09:29.482 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.482 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.482 11:33:00 -- 
target/invalid.sh@25 -- # printf %x 125 00:09:29.482 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:09:29.482 11:33:00 -- target/invalid.sh@25 -- # string+='}' 00:09:29.482 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.482 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.482 11:33:00 -- target/invalid.sh@25 -- # printf %x 126 00:09:29.482 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:09:29.482 11:33:00 -- target/invalid.sh@25 -- # string+='~' 00:09:29.482 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.482 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.482 11:33:00 -- target/invalid.sh@25 -- # printf %x 119 00:09:29.482 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x77' 00:09:29.482 11:33:00 -- target/invalid.sh@25 -- # string+=w 00:09:29.482 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.482 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.482 11:33:00 -- target/invalid.sh@25 -- # printf %x 36 00:09:29.482 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x24' 00:09:29.482 11:33:00 -- target/invalid.sh@25 -- # string+='$' 00:09:29.482 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.482 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.482 11:33:00 -- target/invalid.sh@25 -- # printf %x 100 00:09:29.482 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x64' 00:09:29.482 11:33:00 -- target/invalid.sh@25 -- # string+=d 00:09:29.482 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.482 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.482 11:33:00 -- target/invalid.sh@25 -- # printf %x 106 00:09:29.482 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:09:29.482 11:33:00 -- target/invalid.sh@25 -- # string+=j 00:09:29.482 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.482 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # printf %x 125 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # string+='}' 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # printf %x 98 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x62' 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # string+=b 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # printf %x 127 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # string+=$'\177' 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # printf %x 108 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # string+=l 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # printf %x 113 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x71' 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # string+=q 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.741 
11:33:00 -- target/invalid.sh@25 -- # printf %x 32 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x20' 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # string+=' ' 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # printf %x 85 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x55' 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # string+=U 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # printf %x 100 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x64' 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # string+=d 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # printf %x 57 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x39' 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # string+=9 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # printf %x 33 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x21' 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # string+='!' 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # printf %x 79 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # string+=O 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # printf %x 46 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # string+=. 
00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # printf %x 77 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # string+=M 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # printf %x 124 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # string+='|' 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # printf %x 73 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x49' 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # string+=I 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # printf %x 117 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x75' 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # string+=u 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # printf %x 118 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # string+=v 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # printf %x 67 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x43' 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # string+=C 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # printf %x 82 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x52' 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # string+=R 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # printf %x 45 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # string+=- 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # printf %x 121 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x79' 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # string+=y 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # printf %x 92 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # string+='\' 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # printf %x 91 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:09:29.741 11:33:00 -- target/invalid.sh@25 -- # string+='[' 
00:09:29.741 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.742 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.742 11:33:00 -- target/invalid.sh@25 -- # printf %x 47 00:09:29.742 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:09:29.742 11:33:00 -- target/invalid.sh@25 -- # string+=/ 00:09:29.742 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.742 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.742 11:33:00 -- target/invalid.sh@25 -- # printf %x 106 00:09:29.742 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:09:29.742 11:33:00 -- target/invalid.sh@25 -- # string+=j 00:09:29.742 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.742 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.742 11:33:00 -- target/invalid.sh@25 -- # printf %x 106 00:09:29.742 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:09:29.742 11:33:00 -- target/invalid.sh@25 -- # string+=j 00:09:29.742 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.742 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.742 11:33:00 -- target/invalid.sh@25 -- # printf %x 94 00:09:29.742 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:09:29.742 11:33:00 -- target/invalid.sh@25 -- # string+='^' 00:09:29.742 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.742 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.742 11:33:00 -- target/invalid.sh@25 -- # printf %x 72 00:09:29.742 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x48' 00:09:29.742 11:33:00 -- target/invalid.sh@25 -- # string+=H 00:09:29.742 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.742 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.742 11:33:00 -- target/invalid.sh@25 -- # printf %x 99 00:09:29.742 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x63' 00:09:29.742 11:33:00 -- target/invalid.sh@25 -- # string+=c 00:09:29.742 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.742 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.742 11:33:00 -- target/invalid.sh@25 -- # printf %x 90 00:09:29.742 11:33:00 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:09:29.742 11:33:00 -- target/invalid.sh@25 -- # string+=Z 00:09:29.742 11:33:00 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:29.742 11:33:00 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:29.742 11:33:00 -- target/invalid.sh@28 -- # [[ X == \- ]] 00:09:29.742 11:33:00 -- target/invalid.sh@31 -- # echo 'Xlc<<}~w$dj}blq Ud9!O.M|IuvCR-y\[/jj^HcZ' 00:09:29.742 11:33:00 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Xlc<<}~w$dj}blq Ud9!O.M|IuvCR-y\[/jj^HcZ' nqn.2016-06.io.spdk:cnode4520 00:09:30.001 [2024-05-15 11:33:00.634656] nvmf_rpc.c: 417:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4520: invalid model number 'Xlc<<}~w$dj}blq Ud9!O.M|IuvCR-y\[/jj^HcZ' 00:09:30.001 11:33:00 -- target/invalid.sh@58 -- # out='request: 00:09:30.001 { 00:09:30.001 "nqn": "nqn.2016-06.io.spdk:cnode4520", 00:09:30.001 "model_number": "Xlc<<}~w$dj}b\u007flq Ud9!O.M|IuvCR-y\\[/jj^HcZ", 00:09:30.001 "method": "nvmf_create_subsystem", 00:09:30.001 "req_id": 1 00:09:30.001 } 00:09:30.001 Got JSON-RPC error response 00:09:30.001 response: 00:09:30.001 { 00:09:30.001 "code": -32602, 00:09:30.001 "message": "Invalid MN Xlc<<}~w$dj}b\u007flq Ud9!O.M|IuvCR-y\\[/jj^HcZ" 00:09:30.001 }' 00:09:30.001 11:33:00 -- target/invalid.sh@59 -- # [[ request: 00:09:30.001 { 
00:09:30.001 "nqn": "nqn.2016-06.io.spdk:cnode4520", 00:09:30.001 "model_number": "Xlc<<}~w$dj}b\u007flq Ud9!O.M|IuvCR-y\\[/jj^HcZ", 00:09:30.001 "method": "nvmf_create_subsystem", 00:09:30.001 "req_id": 1 00:09:30.001 } 00:09:30.001 Got JSON-RPC error response 00:09:30.001 response: 00:09:30.001 { 00:09:30.001 "code": -32602, 00:09:30.001 "message": "Invalid MN Xlc<<}~w$dj}b\u007flq Ud9!O.M|IuvCR-y\\[/jj^HcZ" 00:09:30.001 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:30.001 11:33:00 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:09:30.259 [2024-05-15 11:33:00.865601] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ccd7e0/0x1cd1cd0) succeed. 00:09:30.259 [2024-05-15 11:33:00.876078] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ccee20/0x1d13360) succeed. 00:09:30.518 11:33:01 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:30.518 11:33:01 -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:09:30.518 11:33:01 -- target/invalid.sh@67 -- # head -n 1 00:09:30.518 11:33:01 -- target/invalid.sh@67 -- # echo '192.168.100.8 00:09:30.518 192.168.100.9' 00:09:30.518 11:33:01 -- target/invalid.sh@67 -- # IP=192.168.100.8 00:09:30.518 11:33:01 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:09:30.776 [2024-05-15 11:33:01.386497] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:30.776 [2024-05-15 11:33:01.386586] nvmf_rpc.c: 789:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:30.776 11:33:01 -- target/invalid.sh@69 -- # out='request: 00:09:30.776 { 00:09:30.776 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:30.776 "listen_address": { 00:09:30.776 "trtype": "rdma", 00:09:30.776 "traddr": "192.168.100.8", 00:09:30.776 "trsvcid": "4421" 00:09:30.776 }, 00:09:30.776 "method": "nvmf_subsystem_remove_listener", 00:09:30.776 "req_id": 1 00:09:30.776 } 00:09:30.776 Got JSON-RPC error response 00:09:30.776 response: 00:09:30.776 { 00:09:30.776 "code": -32602, 00:09:30.776 "message": "Invalid parameters" 00:09:30.776 }' 00:09:30.776 11:33:01 -- target/invalid.sh@70 -- # [[ request: 00:09:30.776 { 00:09:30.776 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:30.776 "listen_address": { 00:09:30.776 "trtype": "rdma", 00:09:30.776 "traddr": "192.168.100.8", 00:09:30.776 "trsvcid": "4421" 00:09:30.776 }, 00:09:30.776 "method": "nvmf_subsystem_remove_listener", 00:09:30.776 "req_id": 1 00:09:30.776 } 00:09:30.776 Got JSON-RPC error response 00:09:30.776 response: 00:09:30.776 { 00:09:30.776 "code": -32602, 00:09:30.776 "message": "Invalid parameters" 00:09:30.776 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:09:30.776 11:33:01 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2695 -i 0 00:09:31.035 [2024-05-15 11:33:01.583243] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2695: invalid cntlid range [0-65519] 00:09:31.035 11:33:01 -- target/invalid.sh@73 -- # out='request: 00:09:31.035 { 00:09:31.035 "nqn": "nqn.2016-06.io.spdk:cnode2695", 00:09:31.035 "min_cntlid": 0, 
00:09:31.035 "method": "nvmf_create_subsystem", 00:09:31.035 "req_id": 1 00:09:31.035 } 00:09:31.035 Got JSON-RPC error response 00:09:31.035 response: 00:09:31.035 { 00:09:31.035 "code": -32602, 00:09:31.035 "message": "Invalid cntlid range [0-65519]" 00:09:31.035 }' 00:09:31.035 11:33:01 -- target/invalid.sh@74 -- # [[ request: 00:09:31.035 { 00:09:31.035 "nqn": "nqn.2016-06.io.spdk:cnode2695", 00:09:31.035 "min_cntlid": 0, 00:09:31.035 "method": "nvmf_create_subsystem", 00:09:31.035 "req_id": 1 00:09:31.035 } 00:09:31.035 Got JSON-RPC error response 00:09:31.035 response: 00:09:31.035 { 00:09:31.035 "code": -32602, 00:09:31.035 "message": "Invalid cntlid range [0-65519]" 00:09:31.035 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:31.035 11:33:01 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7041 -i 65520 00:09:31.035 [2024-05-15 11:33:01.779939] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7041: invalid cntlid range [65520-65519] 00:09:31.294 11:33:01 -- target/invalid.sh@75 -- # out='request: 00:09:31.294 { 00:09:31.294 "nqn": "nqn.2016-06.io.spdk:cnode7041", 00:09:31.294 "min_cntlid": 65520, 00:09:31.294 "method": "nvmf_create_subsystem", 00:09:31.294 "req_id": 1 00:09:31.294 } 00:09:31.294 Got JSON-RPC error response 00:09:31.294 response: 00:09:31.294 { 00:09:31.294 "code": -32602, 00:09:31.294 "message": "Invalid cntlid range [65520-65519]" 00:09:31.294 }' 00:09:31.294 11:33:01 -- target/invalid.sh@76 -- # [[ request: 00:09:31.294 { 00:09:31.294 "nqn": "nqn.2016-06.io.spdk:cnode7041", 00:09:31.294 "min_cntlid": 65520, 00:09:31.294 "method": "nvmf_create_subsystem", 00:09:31.294 "req_id": 1 00:09:31.294 } 00:09:31.294 Got JSON-RPC error response 00:09:31.294 response: 00:09:31.294 { 00:09:31.294 "code": -32602, 00:09:31.294 "message": "Invalid cntlid range [65520-65519]" 00:09:31.294 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:31.294 11:33:01 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23633 -I 0 00:09:31.294 [2024-05-15 11:33:01.976660] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23633: invalid cntlid range [1-0] 00:09:31.294 11:33:02 -- target/invalid.sh@77 -- # out='request: 00:09:31.294 { 00:09:31.294 "nqn": "nqn.2016-06.io.spdk:cnode23633", 00:09:31.294 "max_cntlid": 0, 00:09:31.294 "method": "nvmf_create_subsystem", 00:09:31.294 "req_id": 1 00:09:31.294 } 00:09:31.294 Got JSON-RPC error response 00:09:31.294 response: 00:09:31.294 { 00:09:31.294 "code": -32602, 00:09:31.294 "message": "Invalid cntlid range [1-0]" 00:09:31.294 }' 00:09:31.294 11:33:02 -- target/invalid.sh@78 -- # [[ request: 00:09:31.294 { 00:09:31.294 "nqn": "nqn.2016-06.io.spdk:cnode23633", 00:09:31.294 "max_cntlid": 0, 00:09:31.294 "method": "nvmf_create_subsystem", 00:09:31.294 "req_id": 1 00:09:31.294 } 00:09:31.294 Got JSON-RPC error response 00:09:31.294 response: 00:09:31.294 { 00:09:31.294 "code": -32602, 00:09:31.294 "message": "Invalid cntlid range [1-0]" 00:09:31.294 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:31.294 11:33:02 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4801 -I 65520 00:09:31.553 [2024-05-15 11:33:02.177448] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem 
nqn.2016-06.io.spdk:cnode4801: invalid cntlid range [1-65520] 00:09:31.553 11:33:02 -- target/invalid.sh@79 -- # out='request: 00:09:31.553 { 00:09:31.553 "nqn": "nqn.2016-06.io.spdk:cnode4801", 00:09:31.553 "max_cntlid": 65520, 00:09:31.553 "method": "nvmf_create_subsystem", 00:09:31.553 "req_id": 1 00:09:31.553 } 00:09:31.553 Got JSON-RPC error response 00:09:31.553 response: 00:09:31.553 { 00:09:31.553 "code": -32602, 00:09:31.553 "message": "Invalid cntlid range [1-65520]" 00:09:31.553 }' 00:09:31.553 11:33:02 -- target/invalid.sh@80 -- # [[ request: 00:09:31.553 { 00:09:31.553 "nqn": "nqn.2016-06.io.spdk:cnode4801", 00:09:31.553 "max_cntlid": 65520, 00:09:31.553 "method": "nvmf_create_subsystem", 00:09:31.553 "req_id": 1 00:09:31.553 } 00:09:31.553 Got JSON-RPC error response 00:09:31.553 response: 00:09:31.553 { 00:09:31.553 "code": -32602, 00:09:31.553 "message": "Invalid cntlid range [1-65520]" 00:09:31.553 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:31.553 11:33:02 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6027 -i 6 -I 5 00:09:31.811 [2024-05-15 11:33:02.374161] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6027: invalid cntlid range [6-5] 00:09:31.811 11:33:02 -- target/invalid.sh@83 -- # out='request: 00:09:31.811 { 00:09:31.811 "nqn": "nqn.2016-06.io.spdk:cnode6027", 00:09:31.811 "min_cntlid": 6, 00:09:31.811 "max_cntlid": 5, 00:09:31.811 "method": "nvmf_create_subsystem", 00:09:31.811 "req_id": 1 00:09:31.811 } 00:09:31.811 Got JSON-RPC error response 00:09:31.811 response: 00:09:31.811 { 00:09:31.811 "code": -32602, 00:09:31.811 "message": "Invalid cntlid range [6-5]" 00:09:31.811 }' 00:09:31.811 11:33:02 -- target/invalid.sh@84 -- # [[ request: 00:09:31.811 { 00:09:31.811 "nqn": "nqn.2016-06.io.spdk:cnode6027", 00:09:31.811 "min_cntlid": 6, 00:09:31.811 "max_cntlid": 5, 00:09:31.811 "method": "nvmf_create_subsystem", 00:09:31.811 "req_id": 1 00:09:31.811 } 00:09:31.811 Got JSON-RPC error response 00:09:31.811 response: 00:09:31.811 { 00:09:31.811 "code": -32602, 00:09:31.811 "message": "Invalid cntlid range [6-5]" 00:09:31.811 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:31.811 11:33:02 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:31.811 11:33:02 -- target/invalid.sh@87 -- # out='request: 00:09:31.811 { 00:09:31.811 "name": "foobar", 00:09:31.811 "method": "nvmf_delete_target", 00:09:31.811 "req_id": 1 00:09:31.811 } 00:09:31.811 Got JSON-RPC error response 00:09:31.811 response: 00:09:31.811 { 00:09:31.811 "code": -32602, 00:09:31.811 "message": "The specified target doesn'\''t exist, cannot delete it." 00:09:31.811 }' 00:09:31.811 11:33:02 -- target/invalid.sh@88 -- # [[ request: 00:09:31.811 { 00:09:31.811 "name": "foobar", 00:09:31.811 "method": "nvmf_delete_target", 00:09:31.811 "req_id": 1 00:09:31.811 } 00:09:31.811 Got JSON-RPC error response 00:09:31.811 response: 00:09:31.811 { 00:09:31.811 "code": -32602, 00:09:31.811 "message": "The specified target doesn't exist, cannot delete it." 
00:09:31.811 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:31.811 11:33:02 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:31.811 11:33:02 -- target/invalid.sh@91 -- # nvmftestfini 00:09:31.811 11:33:02 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:31.811 11:33:02 -- nvmf/common.sh@117 -- # sync 00:09:31.811 11:33:02 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:31.811 11:33:02 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:31.811 11:33:02 -- nvmf/common.sh@120 -- # set +e 00:09:31.811 11:33:02 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:31.811 11:33:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:31.811 rmmod nvme_rdma 00:09:31.811 rmmod nvme_fabrics 00:09:31.811 11:33:02 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:31.811 11:33:02 -- nvmf/common.sh@124 -- # set -e 00:09:31.811 11:33:02 -- nvmf/common.sh@125 -- # return 0 00:09:31.811 11:33:02 -- nvmf/common.sh@478 -- # '[' -n 2942665 ']' 00:09:31.811 11:33:02 -- nvmf/common.sh@479 -- # killprocess 2942665 00:09:31.811 11:33:02 -- common/autotest_common.sh@946 -- # '[' -z 2942665 ']' 00:09:31.811 11:33:02 -- common/autotest_common.sh@950 -- # kill -0 2942665 00:09:31.811 11:33:02 -- common/autotest_common.sh@951 -- # uname 00:09:31.811 11:33:02 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:32.069 11:33:02 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2942665 00:09:32.069 11:33:02 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:32.069 11:33:02 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:32.069 11:33:02 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2942665' 00:09:32.069 killing process with pid 2942665 00:09:32.069 11:33:02 -- common/autotest_common.sh@965 -- # kill 2942665 00:09:32.069 [2024-05-15 11:33:02.615403] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:32.069 11:33:02 -- common/autotest_common.sh@970 -- # wait 2942665 00:09:32.069 [2024-05-15 11:33:02.701959] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:09:32.329 11:33:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:32.329 11:33:02 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:09:32.329 00:09:32.329 real 0m10.823s 00:09:32.329 user 0m21.283s 00:09:32.329 sys 0m5.915s 00:09:32.329 11:33:02 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:32.329 11:33:02 -- common/autotest_common.sh@10 -- # set +x 00:09:32.329 ************************************ 00:09:32.329 END TEST nvmf_invalid 00:09:32.329 ************************************ 00:09:32.329 11:33:02 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:09:32.329 11:33:02 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:32.329 11:33:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:32.329 11:33:02 -- common/autotest_common.sh@10 -- # set +x 00:09:32.329 ************************************ 00:09:32.329 START TEST nvmf_abort 00:09:32.329 ************************************ 00:09:32.329 11:33:02 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:09:32.329 * Looking for test storage... 
00:09:32.329 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:32.329 11:33:03 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:32.329 11:33:03 -- nvmf/common.sh@7 -- # uname -s 00:09:32.329 11:33:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:32.329 11:33:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:32.329 11:33:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:32.329 11:33:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:32.329 11:33:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:32.329 11:33:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:32.329 11:33:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:32.329 11:33:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:32.329 11:33:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:32.329 11:33:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:32.329 11:33:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:09:32.329 11:33:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:09:32.329 11:33:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:32.329 11:33:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:32.329 11:33:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:32.329 11:33:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:32.329 11:33:03 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:32.329 11:33:03 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:32.329 11:33:03 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:32.329 11:33:03 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:32.329 11:33:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.329 11:33:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.329 11:33:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.329 11:33:03 -- paths/export.sh@5 -- # export PATH 00:09:32.329 11:33:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.329 11:33:03 -- nvmf/common.sh@47 -- # : 0 00:09:32.329 11:33:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:32.329 11:33:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:32.329 11:33:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:32.329 11:33:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:32.329 11:33:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:32.329 11:33:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:32.329 11:33:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:32.329 11:33:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:32.329 11:33:03 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:32.329 11:33:03 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:32.329 11:33:03 -- target/abort.sh@14 -- # nvmftestinit 00:09:32.329 11:33:03 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:09:32.329 11:33:03 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:32.329 11:33:03 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:32.329 11:33:03 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:32.329 11:33:03 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:32.329 11:33:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.329 11:33:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:32.329 11:33:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:32.329 11:33:03 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:32.329 11:33:03 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:32.329 11:33:03 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:32.329 11:33:03 -- common/autotest_common.sh@10 -- # set +x 00:09:38.896 11:33:08 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:38.896 11:33:08 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:38.896 11:33:08 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:38.896 11:33:08 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:38.896 11:33:08 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:38.896 11:33:08 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:38.896 11:33:08 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:38.896 11:33:08 -- nvmf/common.sh@295 -- # net_devs=() 00:09:38.896 11:33:08 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:38.896 11:33:08 -- nvmf/common.sh@296 -- 
# e810=() 00:09:38.896 11:33:08 -- nvmf/common.sh@296 -- # local -ga e810 00:09:38.896 11:33:08 -- nvmf/common.sh@297 -- # x722=() 00:09:38.896 11:33:08 -- nvmf/common.sh@297 -- # local -ga x722 00:09:38.896 11:33:08 -- nvmf/common.sh@298 -- # mlx=() 00:09:38.896 11:33:08 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:38.896 11:33:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:38.896 11:33:08 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:38.896 11:33:08 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:38.896 11:33:08 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:38.896 11:33:08 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:38.896 11:33:08 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:38.896 11:33:08 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:38.896 11:33:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:38.896 11:33:08 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:38.896 11:33:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:38.896 11:33:08 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:38.896 11:33:08 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:38.896 11:33:08 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:38.896 11:33:08 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:38.896 11:33:08 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:38.896 11:33:08 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:38.896 11:33:08 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:38.896 11:33:08 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:38.896 11:33:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:38.896 11:33:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:09:38.896 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:09:38.896 11:33:08 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:38.896 11:33:08 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:38.896 11:33:08 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:38.896 11:33:08 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:38.896 11:33:08 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:38.896 11:33:08 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:38.896 11:33:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:38.896 11:33:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:09:38.896 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:09:38.896 11:33:08 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:38.896 11:33:08 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:38.896 11:33:08 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:38.896 11:33:08 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:38.896 11:33:08 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:38.896 11:33:08 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:38.896 11:33:08 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:38.896 11:33:08 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:38.896 11:33:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:38.896 11:33:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:38.896 11:33:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 
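The gather_supported_nvmf_pci_devs trace around this point shows how common.sh finds candidate RDMA ports: it matches PCI functions against a table of NVMe-oF-capable vendor:device IDs (here Mellanox 0x15b3, device 0x1015, a ConnectX-4 Lx part), then resolves each matching function to its kernel netdev through sysfs. A minimal standalone sketch of that discovery step, with the ID pair hard-coded; the function name find_mlx_netdevs is illustrative and not part of common.sh:

    #!/usr/bin/env bash
    # Sketch of the PCI-to-netdev discovery traced here; the fixed
    # 0x15b3:0x1015 match and the function name are illustrative.
    find_mlx_netdevs() {
        shopt -s nullglob
        local pci netdev
        for pci in /sys/bus/pci/devices/*; do
            [[ $(cat "$pci/vendor") == 0x15b3 ]] || continue   # Mellanox
            [[ $(cat "$pci/device") == 0x1015 ]] || continue   # ConnectX-4 Lx class
            for netdev in "$pci"/net/*; do   # net/ holds the netdev name(s)
                echo "${pci##*/}: ${netdev##*/}"
            done
        done
    }
    find_mlx_netdevs   # on this testbed: 0000:18:00.0: mlx_0_0, 0000:18:00.1: mlx_0_1

On this run both ports match, so net_devs ends up holding mlx_0_0 and mlx_0_1, as the "Found net devices" lines below confirm.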
00:09:38.896 11:33:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:38.896 11:33:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:09:38.896 Found net devices under 0000:18:00.0: mlx_0_0 00:09:38.896 11:33:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:38.896 11:33:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:38.896 11:33:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:38.896 11:33:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:38.896 11:33:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:38.896 11:33:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:09:38.896 Found net devices under 0000:18:00.1: mlx_0_1 00:09:38.896 11:33:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:38.896 11:33:08 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:38.896 11:33:08 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:38.896 11:33:08 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:38.896 11:33:08 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:09:38.896 11:33:08 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:09:38.896 11:33:08 -- nvmf/common.sh@409 -- # rdma_device_init 00:09:38.896 11:33:08 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:09:38.896 11:33:08 -- nvmf/common.sh@58 -- # uname 00:09:38.896 11:33:08 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:38.896 11:33:08 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:38.896 11:33:08 -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:38.896 11:33:08 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:38.896 11:33:08 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:38.896 11:33:08 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:38.896 11:33:08 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:38.896 11:33:08 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:38.896 11:33:08 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:09:38.896 11:33:08 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:38.896 11:33:08 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:38.896 11:33:08 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:38.896 11:33:08 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:38.896 11:33:08 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:38.896 11:33:08 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:38.896 11:33:08 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:38.896 11:33:08 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:38.896 11:33:08 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:38.896 11:33:08 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:38.896 11:33:08 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:38.896 11:33:08 -- nvmf/common.sh@105 -- # continue 2 00:09:38.896 11:33:08 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:38.896 11:33:08 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:38.897 11:33:08 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:38.897 11:33:08 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:38.897 11:33:08 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:38.897 11:33:08 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:38.897 11:33:08 -- nvmf/common.sh@105 -- # continue 2 00:09:38.897 11:33:08 -- nvmf/common.sh@73 -- # for nic_name in 
$(get_rdma_if_list) 00:09:38.897 11:33:08 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:38.897 11:33:08 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:38.897 11:33:08 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:38.897 11:33:08 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:38.897 11:33:08 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:38.897 11:33:08 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:38.897 11:33:08 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:38.897 11:33:08 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:38.897 28: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:38.897 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:09:38.897 altname enp24s0f0np0 00:09:38.897 altname ens785f0np0 00:09:38.897 inet 192.168.100.8/24 scope global mlx_0_0 00:09:38.897 valid_lft forever preferred_lft forever 00:09:38.897 11:33:08 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:38.897 11:33:08 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:38.897 11:33:08 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:38.897 11:33:08 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:38.897 11:33:08 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:38.897 11:33:08 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:38.897 11:33:08 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:38.897 11:33:08 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:38.897 11:33:08 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:38.897 29: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:38.897 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:09:38.897 altname enp24s0f1np1 00:09:38.897 altname ens785f1np1 00:09:38.897 inet 192.168.100.9/24 scope global mlx_0_1 00:09:38.897 valid_lft forever preferred_lft forever 00:09:38.897 11:33:08 -- nvmf/common.sh@411 -- # return 0 00:09:38.897 11:33:08 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:38.897 11:33:08 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:38.897 11:33:08 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:09:38.897 11:33:08 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:09:38.897 11:33:08 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:38.897 11:33:08 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:38.897 11:33:08 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:38.897 11:33:08 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:38.897 11:33:08 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:38.897 11:33:08 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:38.897 11:33:08 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:38.897 11:33:08 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:38.897 11:33:08 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:38.897 11:33:08 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:38.897 11:33:08 -- nvmf/common.sh@105 -- # continue 2 00:09:38.897 11:33:08 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:38.897 11:33:08 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:38.897 11:33:08 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:38.897 11:33:08 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:38.897 11:33:08 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:38.897 11:33:08 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:38.897 11:33:08 -- 
nvmf/common.sh@105 -- # continue 2 00:09:38.897 11:33:08 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:38.897 11:33:08 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:38.897 11:33:08 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:38.897 11:33:08 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:38.897 11:33:08 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:38.897 11:33:08 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:38.897 11:33:08 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:38.897 11:33:08 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:38.897 11:33:08 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:38.897 11:33:08 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:38.897 11:33:08 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:38.897 11:33:08 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:38.897 11:33:08 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:09:38.897 192.168.100.9' 00:09:38.897 11:33:08 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:09:38.897 192.168.100.9' 00:09:38.897 11:33:08 -- nvmf/common.sh@446 -- # head -n 1 00:09:38.897 11:33:08 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:38.897 11:33:08 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:09:38.897 192.168.100.9' 00:09:38.897 11:33:08 -- nvmf/common.sh@447 -- # head -n 1 00:09:38.897 11:33:08 -- nvmf/common.sh@447 -- # tail -n +2 00:09:38.897 11:33:08 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:38.897 11:33:08 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:09:38.897 11:33:08 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:38.897 11:33:08 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:09:38.897 11:33:08 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:09:38.897 11:33:08 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:09:38.897 11:33:08 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:38.897 11:33:08 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:38.897 11:33:08 -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:38.897 11:33:08 -- common/autotest_common.sh@10 -- # set +x 00:09:38.897 11:33:08 -- nvmf/common.sh@470 -- # nvmfpid=2946623 00:09:38.897 11:33:08 -- nvmf/common.sh@471 -- # waitforlisten 2946623 00:09:38.897 11:33:08 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:38.897 11:33:08 -- common/autotest_common.sh@827 -- # '[' -z 2946623 ']' 00:09:38.897 11:33:08 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.897 11:33:08 -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:38.897 11:33:08 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.897 11:33:08 -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:38.897 11:33:08 -- common/autotest_common.sh@10 -- # set +x 00:09:38.897 [2024-05-15 11:33:08.813184] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
00:09:38.897 [2024-05-15 11:33:08.813244] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.897 EAL: No free 2048 kB hugepages reported on node 1 00:09:38.897 [2024-05-15 11:33:08.885803] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:38.897 [2024-05-15 11:33:08.972714] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:38.897 [2024-05-15 11:33:08.972762] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:38.897 [2024-05-15 11:33:08.972771] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:38.897 [2024-05-15 11:33:08.972779] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:38.897 [2024-05-15 11:33:08.972786] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:38.897 [2024-05-15 11:33:08.972890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:38.897 [2024-05-15 11:33:08.972965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:38.897 [2024-05-15 11:33:08.972967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.897 11:33:09 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:38.897 11:33:09 -- common/autotest_common.sh@860 -- # return 0 00:09:38.897 11:33:09 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:38.897 11:33:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:38.897 11:33:09 -- common/autotest_common.sh@10 -- # set +x 00:09:39.156 11:33:09 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:39.156 11:33:09 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:09:39.156 11:33:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.156 11:33:09 -- common/autotest_common.sh@10 -- # set +x 00:09:39.156 [2024-05-15 11:33:09.714281] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2402700/0x2406bf0) succeed. 00:09:39.156 [2024-05-15 11:33:09.724829] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2403ca0/0x2448280) succeed. 
00:09:39.156 11:33:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.156 11:33:09 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:39.156 11:33:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.156 11:33:09 -- common/autotest_common.sh@10 -- # set +x 00:09:39.156 Malloc0 00:09:39.156 11:33:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.156 11:33:09 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:39.156 11:33:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.156 11:33:09 -- common/autotest_common.sh@10 -- # set +x 00:09:39.156 Delay0 00:09:39.156 11:33:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.156 11:33:09 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:39.156 11:33:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.156 11:33:09 -- common/autotest_common.sh@10 -- # set +x 00:09:39.157 11:33:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.157 11:33:09 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:39.157 11:33:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.157 11:33:09 -- common/autotest_common.sh@10 -- # set +x 00:09:39.157 11:33:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.157 11:33:09 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:09:39.157 11:33:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.157 11:33:09 -- common/autotest_common.sh@10 -- # set +x 00:09:39.157 [2024-05-15 11:33:09.888831] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:39.157 [2024-05-15 11:33:09.889236] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:39.157 11:33:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.157 11:33:09 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:39.157 11:33:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.157 11:33:09 -- common/autotest_common.sh@10 -- # set +x 00:09:39.157 11:33:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.157 11:33:09 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:39.416 EAL: No free 2048 kB hugepages reported on node 1 00:09:39.416 [2024-05-15 11:33:09.983811] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:41.322 Initializing NVMe Controllers 00:09:41.322 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:09:41.322 controller IO queue size 128 less than required 00:09:41.322 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:41.322 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:41.322 Initialization complete. Launching workers. 
00:09:41.322 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 48385 00:09:41.322 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 48446, failed to submit 62 00:09:41.322 success 48386, unsuccess 60, failed 0 00:09:41.581 11:33:12 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:41.581 11:33:12 -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.581 11:33:12 -- common/autotest_common.sh@10 -- # set +x 00:09:41.581 11:33:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.581 11:33:12 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:41.581 11:33:12 -- target/abort.sh@38 -- # nvmftestfini 00:09:41.581 11:33:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:41.581 11:33:12 -- nvmf/common.sh@117 -- # sync 00:09:41.581 11:33:12 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:41.581 11:33:12 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:41.581 11:33:12 -- nvmf/common.sh@120 -- # set +e 00:09:41.581 11:33:12 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:41.581 11:33:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:41.581 rmmod nvme_rdma 00:09:41.581 rmmod nvme_fabrics 00:09:41.581 11:33:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:41.581 11:33:12 -- nvmf/common.sh@124 -- # set -e 00:09:41.581 11:33:12 -- nvmf/common.sh@125 -- # return 0 00:09:41.581 11:33:12 -- nvmf/common.sh@478 -- # '[' -n 2946623 ']' 00:09:41.581 11:33:12 -- nvmf/common.sh@479 -- # killprocess 2946623 00:09:41.581 11:33:12 -- common/autotest_common.sh@946 -- # '[' -z 2946623 ']' 00:09:41.581 11:33:12 -- common/autotest_common.sh@950 -- # kill -0 2946623 00:09:41.581 11:33:12 -- common/autotest_common.sh@951 -- # uname 00:09:41.581 11:33:12 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:41.581 11:33:12 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2946623 00:09:41.581 11:33:12 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:09:41.581 11:33:12 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:09:41.581 11:33:12 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2946623' 00:09:41.581 killing process with pid 2946623 00:09:41.581 11:33:12 -- common/autotest_common.sh@965 -- # kill 2946623 00:09:41.581 [2024-05-15 11:33:12.194550] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:41.581 11:33:12 -- common/autotest_common.sh@970 -- # wait 2946623 00:09:41.581 [2024-05-15 11:33:12.266232] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:09:41.841 11:33:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:41.841 11:33:12 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:09:41.841 00:09:41.841 real 0m9.529s 00:09:41.841 user 0m14.198s 00:09:41.841 sys 0m4.818s 00:09:41.841 11:33:12 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:41.841 11:33:12 -- common/autotest_common.sh@10 -- # set +x 00:09:41.841 ************************************ 00:09:41.841 END TEST nvmf_abort 00:09:41.841 ************************************ 00:09:41.841 11:33:12 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:09:41.841 11:33:12 -- common/autotest_common.sh@1097 -- # 
'[' 3 -le 1 ']' 00:09:41.841 11:33:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:41.841 11:33:12 -- common/autotest_common.sh@10 -- # set +x 00:09:41.841 ************************************ 00:09:41.841 START TEST nvmf_ns_hotplug_stress 00:09:41.841 ************************************ 00:09:41.841 11:33:12 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:09:42.101 * Looking for test storage... 00:09:42.101 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:42.101 11:33:12 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:42.101 11:33:12 -- nvmf/common.sh@7 -- # uname -s 00:09:42.101 11:33:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:42.101 11:33:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:42.101 11:33:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:42.101 11:33:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:42.101 11:33:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:42.101 11:33:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:42.101 11:33:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:42.101 11:33:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:42.101 11:33:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:42.101 11:33:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:42.101 11:33:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:09:42.101 11:33:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:09:42.101 11:33:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:42.101 11:33:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:42.101 11:33:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:42.101 11:33:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:42.101 11:33:12 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:42.101 11:33:12 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:42.101 11:33:12 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:42.101 11:33:12 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:42.101 11:33:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.101 11:33:12 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.101 11:33:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.101 11:33:12 -- paths/export.sh@5 -- # export PATH 00:09:42.101 11:33:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.101 11:33:12 -- nvmf/common.sh@47 -- # : 0 00:09:42.101 11:33:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:42.101 11:33:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:42.101 11:33:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:42.101 11:33:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:42.101 11:33:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:42.101 11:33:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:42.101 11:33:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:42.101 11:33:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:42.101 11:33:12 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:42.101 11:33:12 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:42.101 11:33:12 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:09:42.101 11:33:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:42.101 11:33:12 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:42.101 11:33:12 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:42.101 11:33:12 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:42.101 11:33:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.101 11:33:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:42.101 11:33:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:42.101 11:33:12 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:42.101 11:33:12 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:42.101 11:33:12 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:42.101 11:33:12 -- common/autotest_common.sh@10 -- # set +x 00:09:48.667 11:33:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
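The hotplug-stress test now repeats the same nvmftestinit sequence the abort test ran a few seconds earlier: PCI discovery (re-traced below), netdev mapping, and a per-port IPv4 lookup. The lookup helper's xtrace (the nvmf/common.sh@112-113 lines, visible above at 00:09:38 and again below at 00:09:48) reconstructs to roughly the sketch that follows; the exact pipe order in common.sh may differ, but either order of the awk and cut stages yields the same address:

    # Reconstructed from the xtrace: print the first IPv4 address of an interface.
    get_ip_address() {
        local interface=$1
        # 'ip -o -4' prints one line per address; field 4 is the CIDR, e.g. 192.168.100.8/24
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this testbed
    get_ip_address mlx_0_1   # -> 192.168.100.9

The suite then records these as NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP (see the RDMA_IP_LIST lines above), which is where the 192.168.100.8 listener address used by every test in this run comes from.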
00:09:48.667 11:33:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:48.667 11:33:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:48.667 11:33:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:48.667 11:33:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:48.667 11:33:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:48.667 11:33:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:48.667 11:33:18 -- nvmf/common.sh@295 -- # net_devs=() 00:09:48.667 11:33:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:48.667 11:33:18 -- nvmf/common.sh@296 -- # e810=() 00:09:48.667 11:33:18 -- nvmf/common.sh@296 -- # local -ga e810 00:09:48.667 11:33:18 -- nvmf/common.sh@297 -- # x722=() 00:09:48.667 11:33:18 -- nvmf/common.sh@297 -- # local -ga x722 00:09:48.667 11:33:18 -- nvmf/common.sh@298 -- # mlx=() 00:09:48.667 11:33:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:48.667 11:33:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:48.667 11:33:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:48.667 11:33:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:48.667 11:33:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:48.667 11:33:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:48.667 11:33:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:48.667 11:33:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:48.667 11:33:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:48.667 11:33:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:48.667 11:33:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:48.667 11:33:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:48.667 11:33:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:48.667 11:33:18 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:48.667 11:33:18 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:48.667 11:33:18 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:48.667 11:33:18 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:48.667 11:33:18 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:48.667 11:33:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:48.667 11:33:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:48.667 11:33:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:09:48.667 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:09:48.667 11:33:18 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:48.667 11:33:18 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:48.667 11:33:18 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:48.667 11:33:18 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:48.667 11:33:18 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:48.667 11:33:18 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:48.668 11:33:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:48.668 11:33:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:09:48.668 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:09:48.668 11:33:18 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:48.668 11:33:18 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:48.668 11:33:18 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:48.668 11:33:18 -- 
nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:48.668 11:33:18 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:48.668 11:33:18 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:48.668 11:33:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:48.668 11:33:18 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:48.668 11:33:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:48.668 11:33:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:48.668 11:33:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:48.668 11:33:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:48.668 11:33:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:09:48.668 Found net devices under 0000:18:00.0: mlx_0_0 00:09:48.668 11:33:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:48.668 11:33:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:48.668 11:33:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:48.668 11:33:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:48.668 11:33:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:48.668 11:33:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:09:48.668 Found net devices under 0000:18:00.1: mlx_0_1 00:09:48.668 11:33:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:48.668 11:33:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:48.668 11:33:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:48.668 11:33:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:48.668 11:33:18 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:09:48.668 11:33:18 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:09:48.668 11:33:18 -- nvmf/common.sh@409 -- # rdma_device_init 00:09:48.668 11:33:18 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:09:48.668 11:33:18 -- nvmf/common.sh@58 -- # uname 00:09:48.668 11:33:18 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:48.668 11:33:18 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:48.668 11:33:18 -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:48.668 11:33:18 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:48.668 11:33:18 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:48.668 11:33:18 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:48.668 11:33:18 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:48.668 11:33:18 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:48.668 11:33:18 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:09:48.668 11:33:18 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:48.668 11:33:18 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:48.668 11:33:18 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:48.668 11:33:18 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:48.668 11:33:18 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:48.668 11:33:18 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:48.668 11:33:18 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:48.668 11:33:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:48.668 11:33:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:48.668 11:33:18 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:48.668 11:33:18 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:48.668 11:33:18 -- nvmf/common.sh@105 -- # continue 2 00:09:48.668 11:33:18 -- nvmf/common.sh@101 -- # 
for net_dev in "${net_devs[@]}" 00:09:48.668 11:33:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:48.668 11:33:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:48.668 11:33:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:48.668 11:33:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:48.668 11:33:18 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:48.668 11:33:18 -- nvmf/common.sh@105 -- # continue 2 00:09:48.668 11:33:18 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:48.668 11:33:18 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:48.668 11:33:18 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:48.668 11:33:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:48.668 11:33:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:48.668 11:33:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:48.668 11:33:18 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:48.668 11:33:18 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:48.668 11:33:18 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:48.668 28: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:48.668 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:09:48.668 altname enp24s0f0np0 00:09:48.668 altname ens785f0np0 00:09:48.668 inet 192.168.100.8/24 scope global mlx_0_0 00:09:48.668 valid_lft forever preferred_lft forever 00:09:48.668 11:33:18 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:48.668 11:33:18 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:48.668 11:33:18 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:48.668 11:33:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:48.668 11:33:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:48.668 11:33:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:48.668 11:33:18 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:48.668 11:33:18 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:48.668 11:33:18 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:48.668 29: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:48.668 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:09:48.668 altname enp24s0f1np1 00:09:48.668 altname ens785f1np1 00:09:48.668 inet 192.168.100.9/24 scope global mlx_0_1 00:09:48.668 valid_lft forever preferred_lft forever 00:09:48.668 11:33:18 -- nvmf/common.sh@411 -- # return 0 00:09:48.668 11:33:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:48.668 11:33:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:48.668 11:33:18 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:09:48.668 11:33:18 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:09:48.668 11:33:18 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:48.668 11:33:18 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:48.668 11:33:18 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:48.668 11:33:18 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:48.668 11:33:18 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:48.668 11:33:18 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:48.668 11:33:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:48.668 11:33:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:48.668 11:33:18 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:48.668 11:33:18 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:48.668 11:33:18 -- 
nvmf/common.sh@105 -- # continue 2 00:09:48.668 11:33:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:48.668 11:33:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:48.668 11:33:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:48.668 11:33:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:48.668 11:33:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:48.668 11:33:18 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:48.668 11:33:18 -- nvmf/common.sh@105 -- # continue 2 00:09:48.668 11:33:18 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:48.668 11:33:18 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:48.668 11:33:18 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:48.668 11:33:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:48.668 11:33:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:48.668 11:33:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:48.668 11:33:18 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:48.668 11:33:18 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:48.668 11:33:18 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:48.668 11:33:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:48.668 11:33:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:48.668 11:33:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:48.668 11:33:18 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:09:48.668 192.168.100.9' 00:09:48.668 11:33:18 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:09:48.668 192.168.100.9' 00:09:48.668 11:33:18 -- nvmf/common.sh@446 -- # head -n 1 00:09:48.668 11:33:18 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:48.668 11:33:18 -- nvmf/common.sh@447 -- # head -n 1 00:09:48.668 11:33:18 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:09:48.668 192.168.100.9' 00:09:48.668 11:33:18 -- nvmf/common.sh@447 -- # tail -n +2 00:09:48.668 11:33:18 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:48.668 11:33:18 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:09:48.668 11:33:18 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:48.668 11:33:18 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:09:48.668 11:33:18 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:09:48.668 11:33:18 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:09:48.668 11:33:18 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:48.668 11:33:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:48.668 11:33:18 -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:48.668 11:33:18 -- common/autotest_common.sh@10 -- # set +x 00:09:48.668 11:33:18 -- nvmf/common.sh@470 -- # nvmfpid=2950053 00:09:48.668 11:33:18 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:48.668 11:33:18 -- nvmf/common.sh@471 -- # waitforlisten 2950053 00:09:48.668 11:33:18 -- common/autotest_common.sh@827 -- # '[' -z 2950053 ']' 00:09:48.668 11:33:18 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.668 11:33:18 -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:48.668 11:33:18 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
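At this point the harness has forked the target application (nvmf_tgt -i 0 -e 0xFFFF -m 0xE, pid 2950053) and waitforlisten is polling its RPC socket before any provisioning RPCs are issued. A minimal standalone sketch of that start-and-wait pattern, assuming the stock scripts/rpc.py client and the default /var/tmp/spdk.sock socket (the polling loop is illustrative, not the harness's exact helper):

    # Start the target on cores 1-3 (mask 0xE; -e 0xFFFF enables all tracepoint groups).
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # Poll until the RPC socket answers; rpc_get_methods is a cheap read-only query.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1   # give up if the target already died
        sleep 0.5
    done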
00:09:48.668 11:33:18 -- common/autotest_common.sh@836 -- # xtrace_disable
00:09:48.668 11:33:18 -- common/autotest_common.sh@10 -- # set +x
00:09:48.668 [2024-05-15 11:33:18.757455] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization...
00:09:48.668 [2024-05-15 11:33:18.757512] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:48.668 EAL: No free 2048 kB hugepages reported on node 1
00:09:48.668 [2024-05-15 11:33:18.830196] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:48.668 [2024-05-15 11:33:18.920408] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:48.668 [2024-05-15 11:33:18.920449] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:48.668 [2024-05-15 11:33:18.920458] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:48.668 [2024-05-15 11:33:18.920467] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:48.668 [2024-05-15 11:33:18.920474] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:09:48.668 [2024-05-15 11:33:18.920568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:09:48.668 [2024-05-15 11:33:18.920646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:09:48.668 [2024-05-15 11:33:18.920648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:09:48.927 11:33:19 -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:09:48.927 11:33:19 -- common/autotest_common.sh@860 -- # return 0
00:09:48.927 11:33:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:09:48.927 11:33:19 -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:48.927 11:33:19 -- common/autotest_common.sh@10 -- # set +x
00:09:48.927 11:33:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:09:48.927 11:33:19 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:09:48.928 11:33:19 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:09:49.186 [2024-05-15 11:33:19.797631] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x158e700/0x1592bf0) succeed.
00:09:49.186 [2024-05-15 11:33:19.808203] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x158fca0/0x15d4280) succeed.
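With the app up, the script creates the RDMA transport (1024 shared receive buffers, 8 KiB I/O units via -u) and then provisions the test subsystem in the trace that follows: subsystem cnode1 with serial SPDK00000000000001, at most 10 namespaces and any host allowed; an RDMA listener on 192.168.100.8:4420 plus a discovery listener; a 32 MiB malloc bdev wrapped by a delay bdev whose four latency knobs are all set to 1,000,000 us, presumably to keep namespace-1 I/O in flight across hotplug events; and Delay0 attached as NSID 1. Condensed into one place (every command below is copied from the trace, with $rpc standing in for the full rpc.py path):

    rpc=scripts/rpc.py   # talks to /var/tmp/spdk.sock by default
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0        # 32 MiB ram disk, 512 B blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # becomes NSID 1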
00:09:49.186 11:33:19 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:49.445 11:33:20 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:49.704 [2024-05-15 11:33:20.306896] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:49.704 [2024-05-15 11:33:20.307233] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:49.704 11:33:20 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:49.962 11:33:20 -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:09:49.962 Malloc0 00:09:49.962 11:33:20 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:50.220 Delay0 00:09:50.220 11:33:20 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:50.479 11:33:21 -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:50.479 NULL1 00:09:50.738 11:33:21 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:50.738 11:33:21 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2950456 00:09:50.738 11:33:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950456 00:09:50.738 11:33:21 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.738 11:33:21 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:50.738 EAL: No free 2048 kB hugepages reported on node 1 00:09:52.116 Read completed with error (sct=0, sc=11) 00:09:52.116 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.116 11:33:22 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:52.116 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.116 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.116 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.116 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.116 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.116 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.116 11:33:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:52.116 11:33:22 -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:52.374 true 00:09:52.374 11:33:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950456 00:09:52.374 11:33:23 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.309 11:33:23 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:53.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.309 11:33:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:53.309 11:33:24 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:53.567 true 00:09:53.567 11:33:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950456 00:09:53.567 11:33:24 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.503 11:33:25 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:54.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.503 11:33:25 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:54.503 11:33:25 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:54.800 true 00:09:54.800 11:33:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950456 00:09:54.800 11:33:25 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:55.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.753 11:33:26 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:55.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.753 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:09:55.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.753 11:33:26 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:55.753 11:33:26 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:56.011 true 00:09:56.011 11:33:26 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950456 00:09:56.011 11:33:26 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:56.947 11:33:27 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:56.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:56.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:56.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:56.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:56.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:56.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:56.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:56.947 11:33:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:56.947 11:33:27 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:57.205 true 00:09:57.205 11:33:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950456 00:09:57.205 11:33:27 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.142 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:58.142 11:33:28 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:58.142 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:58.142 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:58.142 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:58.142 11:33:28 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:58.142 11:33:28 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:58.400 true 00:09:58.400 11:33:28 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950456 00:09:58.400 11:33:28 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:59.338 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:59.338 11:33:29 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:59.338 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:59.338 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:59.338 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:59.338 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
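The pattern repeating from here on is the hotplug loop itself: while the backgrounded spdk_nvme_perf (30 s of 512-byte randread at queue depth 128) hammers the subsystem, each pass hot-detaches NSID 1, re-attaches Delay0, and grows NULL1 by one block (null_size 1001, 1002, ...). The (sct=0, sc=11) completions flooding the output are the host-side symptom: status code type 0, status code 11 (0x0B) is the generic NVMe "Invalid Namespace or Format" error, which is what reads issued against a momentarily detached namespace should return, and the -Q 1000 flag evidently rate-limits the error prints, hence the "Message suppressed 999 times" prefix. Reconstructed from the sh@44-sh@50 markers in the trace, the loop is roughly:

    while kill -0 "$PERF_PID"; do                                     # perf still alive?
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-detach NSID 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # re-attach it
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 $null_size                        # grow NSID 2 under I/O
    done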
00:09:59.338 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:59.338 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:59.338 11:33:30 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:09:59.338 11:33:30 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:59.597 true 00:09:59.597 11:33:30 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950456 00:09:59.597 11:33:30 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:00.534 11:33:31 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:00.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:00.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:00.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:00.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:00.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:00.534 11:33:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:00.534 11:33:31 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:00.792 true 00:10:00.793 11:33:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950456 00:10:00.793 11:33:31 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:01.730 11:33:32 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:01.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:01.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:01.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:01.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:01.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:01.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:01.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:01.730 11:33:32 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:01.730 11:33:32 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:01.988 true 00:10:01.988 11:33:32 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950456 00:10:01.988 11:33:32 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.929 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:02.929 11:33:33 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.929 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:02.929 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:02.929 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:02.929 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:02.929 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:02.929 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:02.929 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:02.929 11:33:33 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:02.929 11:33:33 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:03.188 true 00:10:03.188 11:33:33 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950456 00:10:03.188 11:33:33 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.124 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:04.124 11:33:34 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:04.124 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:04.124 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:04.124 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:04.124 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:04.124 11:33:34 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:04.124 11:33:34 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:04.383 true 00:10:04.383 11:33:35 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950456 00:10:04.383 11:33:35 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.318 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:05.319 11:33:35 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.319 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:05.319 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:05.319 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:05.319 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:05.319 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:05.319 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:05.577 11:33:36 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:05.577 11:33:36 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:05.577 true 00:10:05.577 11:33:36 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950456 00:10:05.577 11:33:36 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:06.534 11:33:37 -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:06.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:06.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:06.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:06.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:06.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:06.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:06.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:06.534 11:33:37 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:06.534 11:33:37 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:06.793 true 00:10:06.793 11:33:37 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950456 00:10:06.793 11:33:37 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.731 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.731 11:33:38 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.731 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.731 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.731 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.731 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.731 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.731 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.731 11:33:38 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:07.731 11:33:38 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:07.990 true 00:10:07.990 11:33:38 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950456 00:10:07.990 11:33:38 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.928 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:08.928 11:33:39 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.928 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:08.928 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:08.928 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:08.928 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:08.928 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:08.928 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:08.928 11:33:39 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:08.928 11:33:39 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:09.188 true 00:10:09.188 11:33:39 -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 2950456 00:10:09.188 11:33:39 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:10.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:10.123 11:33:40 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:10.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:10.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:10.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:10.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:10.123 11:33:40 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:10.123 11:33:40 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:10.382 true 00:10:10.382 11:33:41 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950456 00:10:10.382 11:33:41 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.318 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:11.318 11:33:41 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.318 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:11.318 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:11.318 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:11.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:11.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:11.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:11.576 11:33:42 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:11.576 11:33:42 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:11.576 true 00:10:11.835 11:33:42 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950456 00:10:11.835 11:33:42 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.771 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:12.771 11:33:43 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.771 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:12.771 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:12.771 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:12.771 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:12.771 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:12.771 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:12.771 11:33:43 -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:12.771 11:33:43 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:12.771 true 00:10:13.030 11:33:43 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950456 00:10:13.030 11:33:43 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:13.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:13.857 11:33:44 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:13.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:13.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:13.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:13.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:13.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:13.857 11:33:44 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:13.857 11:33:44 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:14.116 true 00:10:14.116 11:33:44 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950456 00:10:14.116 11:33:44 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.051 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:15.051 11:33:45 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.051 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:15.051 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:15.051 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:15.051 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:15.051 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:15.051 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:15.051 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:15.052 11:33:45 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:15.052 11:33:45 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:15.310 true 00:10:15.310 11:33:45 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950456 00:10:15.310 11:33:45 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:16.257 11:33:46 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:16.257 Message suppressed 999 times: 
Read completed with error (sct=0, sc=11) 00:10:16.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:16.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:16.257 11:33:46 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:16.257 11:33:46 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:16.515 true 00:10:16.515 11:33:47 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950456 00:10:16.515 11:33:47 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.453 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:17.453 11:33:48 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.453 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:17.453 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:17.453 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:17.453 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:17.453 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:17.453 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:17.711 11:33:48 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:17.711 11:33:48 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:17.711 true 00:10:17.711 11:33:48 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950456 00:10:17.711 11:33:48 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.648 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:18.648 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:18.648 11:33:49 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.648 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:18.648 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:18.648 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:18.648 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:18.648 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:18.906 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:18.906 11:33:49 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:18.906 11:33:49 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:18.906 true 00:10:18.906 11:33:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950456 00:10:18.906 11:33:49 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.842 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.842 11:33:50 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.842 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.842 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.842 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.842 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.842 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.842 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:20.101 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:20.101 11:33:50 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:20.101 11:33:50 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:20.101 true 00:10:20.101 11:33:50 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950456 00:10:20.101 11:33:50 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.038 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:21.038 11:33:51 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.297 11:33:51 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:21.297 11:33:51 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:21.297 true 00:10:21.297 11:33:52 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950456 00:10:21.297 11:33:52 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.591 11:33:52 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.887 11:33:52 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:21.887 11:33:52 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:21.887 true 00:10:21.887 11:33:52 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950456 00:10:21.887 11:33:52 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.145 11:33:52 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.403 11:33:52 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:22.403 11:33:52 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:22.403 true 00:10:22.661 11:33:53 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950456 00:10:22.661 11:33:53 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.661 11:33:53 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.920 11:33:53 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:22.920 11:33:53 -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:10:23.178 true
00:10:23.178 Initializing NVMe Controllers
00:10:23.178 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:10:23.178 Controller IO queue size 128, less than required.
00:10:23.178 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:23.178 Controller IO queue size 128, less than required.
00:10:23.178 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:23.178 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:23.178 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:10:23.178 Initialization complete. Launching workers.
00:10:23.178 ========================================================
00:10:23.178 Latency(us)
00:10:23.178 Device Information : IOPS MiB/s Average min max
00:10:23.178 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5772.13 2.82 19323.53 911.52 1139476.37
00:10:23.178 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 34015.74 16.61 3762.99 2288.91 294721.54
00:10:23.178 ========================================================
00:10:23.178 Total : 39787.87 19.43 6020.40 911.52 1139476.37
00:10:23.178
00:10:23.178 11:33:53 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950456
00:10:23.178 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2950456) - No such process
00:10:23.178 11:33:53 -- target/ns_hotplug_stress.sh@53 -- # wait 2950456
00:10:23.178 11:33:53 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:23.178 11:33:53 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:23.436 11:33:54 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:10:23.436 11:33:54 -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:10:23.436 11:33:54 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:10:23.436 11:33:54 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:23.436 11:33:54 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:10:23.694 null0
00:10:23.694 11:33:54 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:23.694 11:33:54 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:23.694 11:33:54 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:10:23.954 null1
00:10:23.954 11:33:54 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:23.954 11:33:54 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:23.954 11:33:54 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:10:23.954 null2
00:10:23.954 11:33:54 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:23.954 11:33:54 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:23.954 11:33:54 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:10:24.213 null3
00:10:24.213
11:33:54 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:24.213 11:33:54 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:24.213 11:33:54 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:24.471 null4 00:10:24.471 11:33:55 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:24.471 11:33:55 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:24.471 11:33:55 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:24.471 null5 00:10:24.471 11:33:55 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:24.471 11:33:55 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:24.471 11:33:55 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:24.730 null6 00:10:24.730 11:33:55 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:24.730 11:33:55 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:24.730 11:33:55 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:24.989 null7 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
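The eight null bdevs now exist, and what launches here are eight backgrounded invocations of the script's add_remove helper, one per bdev. Pieced together from the sh@14-sh@18 markers interleaved in the trace, the helper is ten attach/detach round-trips of a fixed namespace ID (a reconstruction, with $rpc again standing in for the full rpc.py path):

    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }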
00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
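Each add_remove runs in its own background shell and its pid is appended to the pids array; per the sh@62-sh@64 markers the launch loop pairs worker i with NSID i+1 and bdev null<i>, and the wait on all eight pids appears just below:

    for ((i = 0; i < nthreads; i++)); do
        add_remove "$((i + 1))" "null$i" &   # NSIDs 1-8 over null0-null7
        pids+=($!)
    done
    wait "${pids[@]}"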
00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@66 -- # wait 2955082 2955083 2955085 2955087 2955089 2955091 2955093 2955095 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:24.989 11:33:55 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:25.249 11:33:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.249 11:33:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:25.249 11:33:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:25.249 11:33:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:25.249 11:33:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:25.249 11:33:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:25.249 11:33:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:25.249 11:33:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:25.507 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:25.507 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.507 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:25.507 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.507 11:33:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:25.507 11:33:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 
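From here the xtrace of the eight workers interleaves freely, which is why the @16/@17/@18 markers repeat out of order; that is expected, and because every worker owns a distinct NSID, the concurrent adds and removes contend on the subsystem's namespace table itself rather than racing on any single namespace. If the interleaving ever needs untangling, one illustrative option (not something this harness does) is to tag bash's trace prompt with the per-process pid before the workers are launched:

    export PS4='+ [$BASHPID] '   # each traced line then carries the emitting worker's pid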
00:10:25.507 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:25.507 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.507 11:33:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:25.507 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:25.507 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.507 11:33:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:25.507 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:25.507 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:25.507 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.507 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.507 11:33:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:25.507 11:33:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:25.507 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:25.507 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.507 11:33:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:25.507 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:25.507 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.508 11:33:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:25.508 11:33:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:25.508 11:33:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.508 11:33:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:25.508 11:33:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:25.508 11:33:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:25.508 11:33:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:25.508 11:33:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:25.508 11:33:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:25.766 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:25.766 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.766 11:33:56 -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:25.766 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:25.766 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.766 11:33:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:25.766 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:25.766 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.766 11:33:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:25.766 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:25.766 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:25.766 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.766 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.766 11:33:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:25.766 11:33:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:25.766 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:25.766 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.767 11:33:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:25.767 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:25.767 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.767 11:33:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:25.767 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:25.767 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:25.767 11:33:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:26.025 11:33:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:26.025 11:33:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:26.025 11:33:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.025 11:33:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:26.025 11:33:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:26.025 11:33:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:26.025 11:33:56 -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:26.025 11:33:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:26.025 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.025 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.025 11:33:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:26.285 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.285 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.285 11:33:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:26.285 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.285 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.285 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.285 11:33:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:26.285 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.285 11:33:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:26.285 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.285 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.285 11:33:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:26.285 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.285 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.285 11:33:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:26.285 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.285 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.285 11:33:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:26.285 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.285 11:33:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.285 11:33:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:26.285 11:33:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:26.285 11:33:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:26.285 11:33:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.285 11:33:56 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:26.285 11:33:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:26.285 11:33:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:26.285 11:33:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:26.285 11:33:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:26.544 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.544 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.544 11:33:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:26.544 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.544 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.544 11:33:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:26.544 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.544 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.544 11:33:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:26.544 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.544 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.544 11:33:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:26.544 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.544 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.544 11:33:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:26.544 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.544 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.544 11:33:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:26.544 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.544 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.544 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.544 11:33:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:26.544 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.544 11:33:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:26.803 11:33:57 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.803 11:33:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:26.803 11:33:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:26.803 11:33:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:26.803 11:33:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:26.803 11:33:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:26.803 11:33:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:26.803 11:33:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:26.803 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.803 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.803 11:33:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:26.803 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.803 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.803 11:33:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:26.803 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.803 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.803 11:33:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:26.803 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:26.803 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:26.803 11:33:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:27.062 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.062 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.062 11:33:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:27.062 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.062 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.062 11:33:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:27.062 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.062 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.062 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.062 11:33:57 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.062 11:33:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:27.062 11:33:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:27.062 11:33:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.062 11:33:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:27.062 11:33:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:27.062 11:33:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:27.062 11:33:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:27.062 11:33:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:27.062 11:33:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:27.062 11:33:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:27.321 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.321 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.321 11:33:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:27.321 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.321 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.321 11:33:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:27.321 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.321 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.321 11:33:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:27.321 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.321 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.321 11:33:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:27.321 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.321 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.321 11:33:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:27.321 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.321 11:33:57 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.321 11:33:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:27.321 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.321 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.321 11:33:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:27.321 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.321 11:33:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.321 11:33:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:27.580 11:33:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.580 11:33:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:27.580 11:33:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:27.580 11:33:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:27.580 11:33:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:27.580 11:33:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:27.580 11:33:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:27.580 11:33:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:27.580 11:33:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.580 11:33:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.580 11:33:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:27.580 11:33:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.580 11:33:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.580 11:33:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:27.580 11:33:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.580 11:33:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.580 11:33:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:27.580 11:33:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.580 11:33:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.580 11:33:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:27.580 11:33:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.580 11:33:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.580 11:33:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:27.840 11:33:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.840 11:33:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.840 11:33:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:27.840 11:33:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.840 11:33:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.840 11:33:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:27.840 11:33:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.840 11:33:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.840 11:33:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:27.840 11:33:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:27.840 11:33:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:27.840 11:33:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:27.840 11:33:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:27.840 11:33:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:27.840 11:33:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.840 11:33:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:27.840 11:33:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:28.100 11:33:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.100 11:33:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.100 11:33:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.100 11:33:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.100 11:33:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:28.100 11:33:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:28.100 11:33:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.100 11:33:58 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.100 11:33:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:28.100 11:33:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.100 11:33:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.100 11:33:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:28.100 11:33:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.100 11:33:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.100 11:33:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:28.100 11:33:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.100 11:33:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.100 11:33:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:28.100 11:33:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.100 11:33:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.100 11:33:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:28.100 11:33:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.100 11:33:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.100 11:33:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:28.359 11:33:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:28.359 11:33:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:28.359 11:33:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:28.360 11:33:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:28.360 11:33:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:28.360 11:33:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:28.360 11:33:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.360 11:33:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:28.360 11:33:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.360 11:33:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.360 11:33:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:28.360 11:33:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.360 11:33:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.360 11:33:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:28.620 11:33:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.620 11:33:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.620 11:33:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:28.620 11:33:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.620 11:33:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.620 11:33:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.620 11:33:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.620 11:33:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:28.620 11:33:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:28.620 11:33:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.620 11:33:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.620 11:33:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.620 11:33:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:28.620 11:33:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.620 11:33:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:28.620 11:33:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.620 11:33:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.620 11:33:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:28.620 11:33:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.620 11:33:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:28.620 11:33:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:28.620 11:33:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:28.620 11:33:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:28.620 11:33:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:28.620 11:33:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:10:28.620 11:33:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:28.879 11:33:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.879 11:33:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.879 11:33:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.879 11:33:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.879 11:33:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.879 11:33:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.879 11:33:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.879 11:33:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.879 11:33:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.879 11:33:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.879 11:33:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.879 11:33:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.879 11:33:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.879 11:33:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.879 11:33:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.879 11:33:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.879 11:33:59 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:28.879 11:33:59 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:28.879 11:33:59 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:28.879 11:33:59 -- nvmf/common.sh@117 -- # sync 00:10:28.879 11:33:59 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:28.879 11:33:59 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:28.879 11:33:59 -- nvmf/common.sh@120 -- # set +e 00:10:28.879 11:33:59 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:28.879 11:33:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:28.879 rmmod nvme_rdma 00:10:28.879 rmmod nvme_fabrics 00:10:28.879 11:33:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:28.879 11:33:59 -- nvmf/common.sh@124 -- # set -e 00:10:28.879 11:33:59 -- nvmf/common.sh@125 -- # return 0 00:10:28.879 11:33:59 -- nvmf/common.sh@478 -- # '[' -n 2950053 ']' 00:10:28.879 11:33:59 -- nvmf/common.sh@479 -- # killprocess 2950053 00:10:28.879 11:33:59 -- common/autotest_common.sh@946 -- # '[' -z 2950053 ']' 00:10:28.879 11:33:59 -- common/autotest_common.sh@950 -- # kill -0 2950053 00:10:28.879 11:33:59 -- common/autotest_common.sh@951 -- # uname 00:10:28.879 11:33:59 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:28.879 11:33:59 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2950053 00:10:29.138 11:33:59 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:10:29.138 11:33:59 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:10:29.138 11:33:59 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2950053' 00:10:29.138 killing process with pid 2950053 00:10:29.138 11:33:59 -- common/autotest_common.sh@965 -- # kill 2950053 00:10:29.138 [2024-05-15 11:33:59.651970] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:29.138 11:33:59 -- common/autotest_common.sh@970 -- # wait 2950053 00:10:29.138 [2024-05-15 11:33:59.720350] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool 
count is 4095 but should be 2048 00:10:29.397 11:33:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:29.397 11:33:59 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:10:29.397 00:10:29.397 real 0m47.369s 00:10:29.397 user 3m15.807s 00:10:29.397 sys 0m13.929s 00:10:29.397 11:33:59 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:29.397 11:33:59 -- common/autotest_common.sh@10 -- # set +x 00:10:29.397 ************************************ 00:10:29.397 END TEST nvmf_ns_hotplug_stress 00:10:29.397 ************************************ 00:10:29.397 11:34:00 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:10:29.397 11:34:00 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:29.397 11:34:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:29.397 11:34:00 -- common/autotest_common.sh@10 -- # set +x 00:10:29.397 ************************************ 00:10:29.397 START TEST nvmf_connect_stress 00:10:29.397 ************************************ 00:10:29.397 11:34:00 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:10:29.397 * Looking for test storage... 00:10:29.397 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:29.397 11:34:00 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:29.397 11:34:00 -- nvmf/common.sh@7 -- # uname -s 00:10:29.397 11:34:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:29.397 11:34:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:29.397 11:34:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:29.397 11:34:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:29.397 11:34:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:29.397 11:34:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:29.397 11:34:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:29.397 11:34:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:29.397 11:34:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:29.397 11:34:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:29.656 11:34:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:10:29.656 11:34:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:10:29.656 11:34:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:29.656 11:34:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:29.656 11:34:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:29.656 11:34:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:29.656 11:34:00 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:29.656 11:34:00 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:29.656 11:34:00 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:29.656 11:34:00 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:29.656 11:34:00 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.657 11:34:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.657 11:34:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.657 11:34:00 -- paths/export.sh@5 -- # export PATH 00:10:29.657 11:34:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.657 11:34:00 -- nvmf/common.sh@47 -- # : 0 00:10:29.657 11:34:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:29.657 11:34:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:29.657 11:34:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:29.657 11:34:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:29.657 11:34:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:29.657 11:34:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:29.657 11:34:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:29.657 11:34:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:29.657 11:34:00 -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:29.657 11:34:00 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:10:29.657 11:34:00 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:29.657 11:34:00 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:29.657 11:34:00 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:29.657 11:34:00 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:29.657 11:34:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.657 11:34:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:29.657 11:34:00 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.657 11:34:00 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:29.657 11:34:00 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:29.657 11:34:00 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:29.657 11:34:00 -- common/autotest_common.sh@10 -- # set +x 00:10:36.231 11:34:05 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:36.231 11:34:05 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:36.231 11:34:05 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:36.231 11:34:05 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:36.231 11:34:05 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:36.231 11:34:05 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:36.231 11:34:05 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:36.231 11:34:05 -- nvmf/common.sh@295 -- # net_devs=() 00:10:36.231 11:34:05 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:36.231 11:34:05 -- nvmf/common.sh@296 -- # e810=() 00:10:36.231 11:34:05 -- nvmf/common.sh@296 -- # local -ga e810 00:10:36.231 11:34:05 -- nvmf/common.sh@297 -- # x722=() 00:10:36.231 11:34:05 -- nvmf/common.sh@297 -- # local -ga x722 00:10:36.231 11:34:05 -- nvmf/common.sh@298 -- # mlx=() 00:10:36.231 11:34:05 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:36.231 11:34:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:36.231 11:34:05 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:36.231 11:34:05 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:36.231 11:34:05 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:36.231 11:34:05 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:36.231 11:34:05 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:36.232 11:34:05 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:36.232 11:34:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:36.232 11:34:05 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:36.232 11:34:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:36.232 11:34:05 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:36.232 11:34:05 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:36.232 11:34:05 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:36.232 11:34:05 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:36.232 11:34:05 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:36.232 11:34:05 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:36.232 11:34:05 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:36.232 11:34:05 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:36.232 11:34:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:36.232 11:34:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:10:36.232 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:10:36.232 11:34:05 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:36.232 11:34:05 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:36.232 11:34:05 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:36.232 11:34:05 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:36.232 11:34:05 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:36.232 11:34:05 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:36.232 11:34:05 -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:10:36.232 11:34:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:10:36.232 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:10:36.232 11:34:05 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:36.232 11:34:05 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:36.232 11:34:05 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:36.232 11:34:05 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:36.232 11:34:05 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:36.232 11:34:05 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:36.232 11:34:05 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:36.232 11:34:05 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:36.232 11:34:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:36.232 11:34:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.232 11:34:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:36.232 11:34:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.232 11:34:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:10:36.232 Found net devices under 0000:18:00.0: mlx_0_0 00:10:36.232 11:34:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.232 11:34:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:36.232 11:34:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.232 11:34:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:36.232 11:34:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.232 11:34:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:10:36.232 Found net devices under 0000:18:00.1: mlx_0_1 00:10:36.232 11:34:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.232 11:34:05 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:36.232 11:34:05 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:36.232 11:34:05 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:36.232 11:34:05 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:10:36.232 11:34:05 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:10:36.232 11:34:05 -- nvmf/common.sh@409 -- # rdma_device_init 00:10:36.232 11:34:05 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:10:36.232 11:34:05 -- nvmf/common.sh@58 -- # uname 00:10:36.232 11:34:05 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:36.232 11:34:05 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:36.232 11:34:05 -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:36.232 11:34:05 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:36.232 11:34:05 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:36.232 11:34:05 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:36.232 11:34:05 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:36.232 11:34:05 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:36.232 11:34:05 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:10:36.232 11:34:05 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:36.232 11:34:05 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:36.232 11:34:05 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:36.232 11:34:05 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:36.232 11:34:05 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:36.232 11:34:05 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:36.232 11:34:05 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 
00:10:36.232 11:34:05 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:36.232 11:34:05 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:36.232 11:34:05 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:36.232 11:34:05 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:36.232 11:34:05 -- nvmf/common.sh@105 -- # continue 2 00:10:36.232 11:34:05 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:36.232 11:34:05 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:36.232 11:34:05 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:36.232 11:34:05 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:36.232 11:34:05 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:36.232 11:34:05 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:36.232 11:34:05 -- nvmf/common.sh@105 -- # continue 2 00:10:36.232 11:34:05 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:36.232 11:34:05 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:36.232 11:34:05 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:36.232 11:34:05 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:36.233 11:34:05 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:36.233 11:34:05 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:36.233 11:34:05 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:36.233 11:34:05 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:36.233 11:34:05 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:36.233 28: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:36.233 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:10:36.233 altname enp24s0f0np0 00:10:36.233 altname ens785f0np0 00:10:36.233 inet 192.168.100.8/24 scope global mlx_0_0 00:10:36.233 valid_lft forever preferred_lft forever 00:10:36.233 11:34:05 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:36.233 11:34:05 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:36.233 11:34:05 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:36.233 11:34:05 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:36.233 11:34:05 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:36.233 11:34:05 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:36.233 11:34:05 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:36.233 11:34:05 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:36.233 11:34:05 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:36.233 29: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:36.233 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:10:36.233 altname enp24s0f1np1 00:10:36.233 altname ens785f1np1 00:10:36.233 inet 192.168.100.9/24 scope global mlx_0_1 00:10:36.233 valid_lft forever preferred_lft forever 00:10:36.233 11:34:05 -- nvmf/common.sh@411 -- # return 0 00:10:36.233 11:34:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:36.233 11:34:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:36.233 11:34:05 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:10:36.233 11:34:05 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:10:36.233 11:34:05 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:36.233 11:34:05 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:36.233 11:34:05 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:36.233 11:34:05 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:36.233 11:34:05 -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:36.233 11:34:06 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:36.233 11:34:06 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:36.233 11:34:06 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:36.233 11:34:06 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:36.233 11:34:06 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:36.233 11:34:06 -- nvmf/common.sh@105 -- # continue 2 00:10:36.233 11:34:06 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:36.233 11:34:06 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:36.233 11:34:06 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:36.233 11:34:06 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:36.233 11:34:06 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:36.233 11:34:06 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:36.233 11:34:06 -- nvmf/common.sh@105 -- # continue 2 00:10:36.233 11:34:06 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:36.233 11:34:06 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:36.233 11:34:06 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:36.233 11:34:06 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:36.233 11:34:06 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:36.233 11:34:06 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:36.233 11:34:06 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:36.233 11:34:06 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:36.233 11:34:06 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:36.233 11:34:06 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:36.233 11:34:06 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:36.233 11:34:06 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:36.233 11:34:06 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:10:36.233 192.168.100.9' 00:10:36.233 11:34:06 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:10:36.233 192.168.100.9' 00:10:36.233 11:34:06 -- nvmf/common.sh@446 -- # head -n 1 00:10:36.233 11:34:06 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:36.233 11:34:06 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:10:36.233 192.168.100.9' 00:10:36.233 11:34:06 -- nvmf/common.sh@447 -- # tail -n +2 00:10:36.233 11:34:06 -- nvmf/common.sh@447 -- # head -n 1 00:10:36.233 11:34:06 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:36.233 11:34:06 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:10:36.233 11:34:06 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:36.233 11:34:06 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:10:36.235 11:34:06 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:10:36.235 11:34:06 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:10:36.235 11:34:06 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:36.235 11:34:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:36.235 11:34:06 -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:36.235 11:34:06 -- common/autotest_common.sh@10 -- # set +x 00:10:36.235 11:34:06 -- nvmf/common.sh@470 -- # nvmfpid=2958594 00:10:36.235 11:34:06 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:36.235 11:34:06 -- nvmf/common.sh@471 -- # waitforlisten 2958594 00:10:36.235 11:34:06 -- 
common/autotest_common.sh@827 -- # '[' -z 2958594 ']' 00:10:36.235 11:34:06 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.235 11:34:06 -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:36.235 11:34:06 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.235 11:34:06 -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:36.235 11:34:06 -- common/autotest_common.sh@10 -- # set +x 00:10:36.235 [2024-05-15 11:34:06.156495] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:10:36.235 [2024-05-15 11:34:06.156559] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:36.235 EAL: No free 2048 kB hugepages reported on node 1 00:10:36.235 [2024-05-15 11:34:06.231552] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:36.235 [2024-05-15 11:34:06.323088] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:36.235 [2024-05-15 11:34:06.323133] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:36.235 [2024-05-15 11:34:06.323143] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:36.235 [2024-05-15 11:34:06.323152] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:36.235 [2024-05-15 11:34:06.323160] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:36.235 [2024-05-15 11:34:06.323212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:36.235 [2024-05-15 11:34:06.323278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:36.235 [2024-05-15 11:34:06.323280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.235 11:34:06 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:36.235 11:34:06 -- common/autotest_common.sh@860 -- # return 0 00:10:36.235 11:34:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:36.235 11:34:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:36.235 11:34:06 -- common/autotest_common.sh@10 -- # set +x 00:10:36.496 11:34:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:36.496 11:34:07 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:36.496 11:34:07 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.496 11:34:07 -- common/autotest_common.sh@10 -- # set +x 00:10:36.496 [2024-05-15 11:34:07.042638] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2410700/0x2414bf0) succeed. 00:10:36.496 [2024-05-15 11:34:07.052909] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2411ca0/0x2456280) succeed. 
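For readers following the trace, the interface and address discovery above (nvmf/common.sh@74, @113 and @445-447) condenses to the sketch below. This is a reconstruction from the traced commands, not the verbatim nvmf/common.sh source; the mlx_0_* names and 192.168.100.x addresses are specific to this host.

    # Condensed sketch of the traced discovery; not the verbatim source.
    get_ip_address() {
        local interface=$1
        # "28: mlx_0_0    inet 192.168.100.8/24 ..." -> "192.168.100.8"
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST="$(get_ip_address mlx_0_0)
    $(get_ip_address mlx_0_1)"

    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
    NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'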
00:10:36.496 11:34:07 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.496 11:34:07 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:36.496 11:34:07 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.496 11:34:07 -- common/autotest_common.sh@10 -- # set +x 00:10:36.496 11:34:07 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.496 11:34:07 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:36.496 11:34:07 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.496 11:34:07 -- common/autotest_common.sh@10 -- # set +x 00:10:36.496 [2024-05-15 11:34:07.169651] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:36.496 [2024-05-15 11:34:07.170021] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:36.496 11:34:07 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.496 11:34:07 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:36.496 11:34:07 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.496 11:34:07 -- common/autotest_common.sh@10 -- # set +x 00:10:36.496 NULL1 00:10:36.496 11:34:07 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.496 11:34:07 -- target/connect_stress.sh@21 -- # PERF_PID=2958717 00:10:36.496 11:34:07 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:36.496 11:34:07 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:36.496 11:34:07 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:36.496 11:34:07 -- target/connect_stress.sh@27 -- # seq 1 20 00:10:36.496 11:34:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:36.496 11:34:07 -- target/connect_stress.sh@28 -- # cat 00:10:36.496 11:34:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:36.496 11:34:07 -- target/connect_stress.sh@28 -- # cat 00:10:36.496 11:34:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:36.496 11:34:07 -- target/connect_stress.sh@28 -- # cat 00:10:36.496 11:34:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:36.496 11:34:07 -- target/connect_stress.sh@28 -- # cat 00:10:36.496 11:34:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:36.496 11:34:07 -- target/connect_stress.sh@28 -- # cat 00:10:36.496 EAL: No free 2048 kB hugepages reported on node 1 00:10:36.496 11:34:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:36.497 11:34:07 -- target/connect_stress.sh@28 -- # cat 00:10:36.497 11:34:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:36.497 11:34:07 -- target/connect_stress.sh@28 -- # cat 00:10:36.497 11:34:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:36.497 11:34:07 -- target/connect_stress.sh@28 -- # cat 00:10:36.497 11:34:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:36.497 11:34:07 -- target/connect_stress.sh@28 -- # cat 00:10:36.497 11:34:07 -- target/connect_stress.sh@27 -- 
# for i in $(seq 1 20) 00:10:36.497 11:34:07 -- target/connect_stress.sh@28 -- # cat 00:10:36.497 11:34:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:36.497 11:34:07 -- target/connect_stress.sh@28 -- # cat 00:10:36.497 11:34:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:36.497 11:34:07 -- target/connect_stress.sh@28 -- # cat 00:10:36.757 11:34:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:36.757 11:34:07 -- target/connect_stress.sh@28 -- # cat 00:10:36.757 11:34:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:36.757 11:34:07 -- target/connect_stress.sh@28 -- # cat 00:10:36.757 11:34:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:36.757 11:34:07 -- target/connect_stress.sh@28 -- # cat 00:10:36.757 11:34:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:36.757 11:34:07 -- target/connect_stress.sh@28 -- # cat 00:10:36.757 11:34:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:36.757 11:34:07 -- target/connect_stress.sh@28 -- # cat 00:10:36.757 11:34:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:36.757 11:34:07 -- target/connect_stress.sh@28 -- # cat 00:10:36.757 11:34:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:36.757 11:34:07 -- target/connect_stress.sh@28 -- # cat 00:10:36.757 11:34:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:36.757 11:34:07 -- target/connect_stress.sh@28 -- # cat 00:10:36.757 11:34:07 -- target/connect_stress.sh@34 -- # kill -0 2958717 00:10:36.757 11:34:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:36.757 11:34:07 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.757 11:34:07 -- common/autotest_common.sh@10 -- # set +x 00:10:37.015 11:34:07 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.015 11:34:07 -- target/connect_stress.sh@34 -- # kill -0 2958717 00:10:37.015 11:34:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:37.015 11:34:07 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.015 11:34:07 -- common/autotest_common.sh@10 -- # set +x 00:10:37.275 11:34:07 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.275 11:34:07 -- target/connect_stress.sh@34 -- # kill -0 2958717 00:10:37.275 11:34:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:37.275 11:34:07 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.275 11:34:07 -- common/autotest_common.sh@10 -- # set +x 00:10:37.533 11:34:08 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.533 11:34:08 -- target/connect_stress.sh@34 -- # kill -0 2958717 00:10:37.533 11:34:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:37.533 11:34:08 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.533 11:34:08 -- common/autotest_common.sh@10 -- # set +x 00:10:38.102 11:34:08 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.102 11:34:08 -- target/connect_stress.sh@34 -- # kill -0 2958717 00:10:38.102 11:34:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:38.102 11:34:08 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.102 11:34:08 -- common/autotest_common.sh@10 -- # set +x 00:10:38.360 11:34:08 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.360 11:34:08 -- target/connect_stress.sh@34 -- # kill -0 2958717 00:10:38.360 11:34:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:38.360 11:34:08 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.360 11:34:08 -- common/autotest_common.sh@10 -- # set +x 
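The seq 1 20 / cat iterations and the kill -0 / rpc_cmd rounds traced here (they continue through the next several timestamps) follow the pattern sketched below. This is an outline reconstructed from the trace; the here-document batches that connect_stress.sh@28 appends to rpc.txt are not visible in this log, so a placeholder stands in for them.

    # Outline of connect_stress.sh@27-38 as reconstructed from this trace.
    rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt
    PERF_PID=2958717    # connect_stress tool, launched above with -t 10

    for i in $(seq 1 20); do
        # The traced `cat` appends one batch of RPCs per iteration; the batch
        # contents are not shown in the log and are elided here.
        echo "# RPC batch $i (elided)" >> "$rpcs"
    done

    # kill -0 delivers no signal; it only tests that the pid still exists.
    # Replay the batched RPCs until the stress tool exits, then reap it.
    while kill -0 "$PERF_PID"; do
        rpc_cmd < "$rpcs"
    done
    wait "$PERF_PID"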
00:10:38.619 11:34:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.619 11:34:09 -- target/connect_stress.sh@34 -- # kill -0 2958717 00:10:38.619 11:34:09 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:38.619 11:34:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.619 11:34:09 -- common/autotest_common.sh@10 -- # set +x 00:10:38.879 11:34:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.879 11:34:09 -- target/connect_stress.sh@34 -- # kill -0 2958717 00:10:38.879 11:34:09 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:38.879 11:34:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.879 11:34:09 -- common/autotest_common.sh@10 -- # set +x 00:10:39.447 11:34:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.447 11:34:09 -- target/connect_stress.sh@34 -- # kill -0 2958717 00:10:39.447 11:34:09 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:39.447 11:34:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.447 11:34:09 -- common/autotest_common.sh@10 -- # set +x 00:10:39.706 11:34:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.706 11:34:10 -- target/connect_stress.sh@34 -- # kill -0 2958717 00:10:39.706 11:34:10 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:39.706 11:34:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.706 11:34:10 -- common/autotest_common.sh@10 -- # set +x 00:10:39.966 11:34:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.966 11:34:10 -- target/connect_stress.sh@34 -- # kill -0 2958717 00:10:39.966 11:34:10 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:39.966 11:34:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.966 11:34:10 -- common/autotest_common.sh@10 -- # set +x 00:10:40.225 11:34:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.225 11:34:10 -- target/connect_stress.sh@34 -- # kill -0 2958717 00:10:40.225 11:34:10 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:40.225 11:34:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.225 11:34:10 -- common/autotest_common.sh@10 -- # set +x 00:10:40.484 11:34:11 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.484 11:34:11 -- target/connect_stress.sh@34 -- # kill -0 2958717 00:10:40.484 11:34:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:40.484 11:34:11 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.484 11:34:11 -- common/autotest_common.sh@10 -- # set +x 00:10:41.061 11:34:11 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.061 11:34:11 -- target/connect_stress.sh@34 -- # kill -0 2958717 00:10:41.061 11:34:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:41.061 11:34:11 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.061 11:34:11 -- common/autotest_common.sh@10 -- # set +x 00:10:41.318 11:34:11 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.318 11:34:11 -- target/connect_stress.sh@34 -- # kill -0 2958717 00:10:41.318 11:34:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:41.318 11:34:11 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.318 11:34:11 -- common/autotest_common.sh@10 -- # set +x 00:10:41.576 11:34:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.576 11:34:12 -- target/connect_stress.sh@34 -- # kill -0 2958717 00:10:41.576 11:34:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:41.576 11:34:12 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.576 11:34:12 -- common/autotest_common.sh@10 -- # set +x 00:10:41.835 
11:34:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.835 11:34:12 -- target/connect_stress.sh@34 -- # kill -0 2958717 00:10:41.835 11:34:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:41.835 11:34:12 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.835 11:34:12 -- common/autotest_common.sh@10 -- # set +x 00:10:42.093 11:34:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.093 11:34:12 -- target/connect_stress.sh@34 -- # kill -0 2958717 00:10:42.093 11:34:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:42.093 11:34:12 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.093 11:34:12 -- common/autotest_common.sh@10 -- # set +x 00:10:42.658 11:34:13 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.658 11:34:13 -- target/connect_stress.sh@34 -- # kill -0 2958717 00:10:42.658 11:34:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:42.658 11:34:13 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.658 11:34:13 -- common/autotest_common.sh@10 -- # set +x 00:10:42.916 11:34:13 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.916 11:34:13 -- target/connect_stress.sh@34 -- # kill -0 2958717 00:10:42.916 11:34:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:42.916 11:34:13 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.916 11:34:13 -- common/autotest_common.sh@10 -- # set +x 00:10:43.176 11:34:13 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.176 11:34:13 -- target/connect_stress.sh@34 -- # kill -0 2958717 00:10:43.176 11:34:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:43.176 11:34:13 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.176 11:34:13 -- common/autotest_common.sh@10 -- # set +x 00:10:43.434 11:34:14 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.434 11:34:14 -- target/connect_stress.sh@34 -- # kill -0 2958717 00:10:43.434 11:34:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:43.434 11:34:14 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.434 11:34:14 -- common/autotest_common.sh@10 -- # set +x 00:10:44.002 11:34:14 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.002 11:34:14 -- target/connect_stress.sh@34 -- # kill -0 2958717 00:10:44.002 11:34:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:44.002 11:34:14 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.002 11:34:14 -- common/autotest_common.sh@10 -- # set +x 00:10:44.261 11:34:14 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.261 11:34:14 -- target/connect_stress.sh@34 -- # kill -0 2958717 00:10:44.261 11:34:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:44.261 11:34:14 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.261 11:34:14 -- common/autotest_common.sh@10 -- # set +x 00:10:44.520 11:34:15 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.520 11:34:15 -- target/connect_stress.sh@34 -- # kill -0 2958717 00:10:44.520 11:34:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:44.520 11:34:15 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.520 11:34:15 -- common/autotest_common.sh@10 -- # set +x 00:10:44.779 11:34:15 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.779 11:34:15 -- target/connect_stress.sh@34 -- # kill -0 2958717 00:10:44.779 11:34:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:44.779 11:34:15 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.779 11:34:15 -- common/autotest_common.sh@10 -- # set +x 00:10:45.038 11:34:15 -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.038 11:34:15 -- target/connect_stress.sh@34 -- # kill -0 2958717 00:10:45.038 11:34:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:45.038 11:34:15 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.038 11:34:15 -- common/autotest_common.sh@10 -- # set +x 00:10:45.606 11:34:16 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.606 11:34:16 -- target/connect_stress.sh@34 -- # kill -0 2958717 00:10:45.606 11:34:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:45.606 11:34:16 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.606 11:34:16 -- common/autotest_common.sh@10 -- # set +x 00:10:45.864 11:34:16 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.864 11:34:16 -- target/connect_stress.sh@34 -- # kill -0 2958717 00:10:45.864 11:34:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:45.864 11:34:16 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.864 11:34:16 -- common/autotest_common.sh@10 -- # set +x 00:10:46.123 11:34:16 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.123 11:34:16 -- target/connect_stress.sh@34 -- # kill -0 2958717 00:10:46.123 11:34:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:46.123 11:34:16 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.123 11:34:16 -- common/autotest_common.sh@10 -- # set +x 00:10:46.382 11:34:17 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.382 11:34:17 -- target/connect_stress.sh@34 -- # kill -0 2958717 00:10:46.382 11:34:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:46.382 11:34:17 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.382 11:34:17 -- common/autotest_common.sh@10 -- # set +x 00:10:46.951 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:10:46.951 11:34:17 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.951 11:34:17 -- target/connect_stress.sh@34 -- # kill -0 2958717 00:10:46.951 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2958717) - No such process 00:10:46.951 11:34:17 -- target/connect_stress.sh@38 -- # wait 2958717 00:10:46.951 11:34:17 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:46.951 11:34:17 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:46.951 11:34:17 -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:46.951 11:34:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:46.951 11:34:17 -- nvmf/common.sh@117 -- # sync 00:10:46.951 11:34:17 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:46.951 11:34:17 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:46.951 11:34:17 -- nvmf/common.sh@120 -- # set +e 00:10:46.951 11:34:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:46.951 11:34:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:46.951 rmmod nvme_rdma 00:10:46.951 rmmod nvme_fabrics 00:10:46.951 11:34:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:46.951 11:34:17 -- nvmf/common.sh@124 -- # set -e 00:10:46.951 11:34:17 -- nvmf/common.sh@125 -- # return 0 00:10:46.951 11:34:17 -- nvmf/common.sh@478 -- # '[' -n 2958594 ']' 00:10:46.951 11:34:17 -- nvmf/common.sh@479 -- # killprocess 2958594 00:10:46.951 11:34:17 -- common/autotest_common.sh@946 -- # '[' -z 2958594 ']' 00:10:46.951 11:34:17 -- common/autotest_common.sh@950 -- # kill -0 2958594 00:10:46.951 11:34:17 -- common/autotest_common.sh@951 -- # uname 
00:10:46.951 11:34:17 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:46.951 11:34:17 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2958594 00:10:46.951 11:34:17 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:10:46.951 11:34:17 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:10:46.951 11:34:17 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2958594' 00:10:46.951 killing process with pid 2958594 00:10:46.951 11:34:17 -- common/autotest_common.sh@965 -- # kill 2958594 00:10:46.951 [2024-05-15 11:34:17.522541] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:46.951 11:34:17 -- common/autotest_common.sh@970 -- # wait 2958594 00:10:46.951 [2024-05-15 11:34:17.594927] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:10:47.210 11:34:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:47.210 11:34:17 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:10:47.210 00:10:47.210 real 0m17.786s 00:10:47.210 user 0m41.544s 00:10:47.210 sys 0m7.198s 00:10:47.210 11:34:17 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:47.210 11:34:17 -- common/autotest_common.sh@10 -- # set +x 00:10:47.210 ************************************ 00:10:47.210 END TEST nvmf_connect_stress 00:10:47.210 ************************************ 00:10:47.210 11:34:17 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:10:47.210 11:34:17 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:47.210 11:34:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:47.210 11:34:17 -- common/autotest_common.sh@10 -- # set +x 00:10:47.210 ************************************ 00:10:47.210 START TEST nvmf_fused_ordering 00:10:47.210 ************************************ 00:10:47.210 11:34:17 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:10:47.469 * Looking for test storage... 
00:10:47.469 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:47.469 11:34:18 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:47.469 11:34:18 -- nvmf/common.sh@7 -- # uname -s 00:10:47.469 11:34:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:47.469 11:34:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:47.469 11:34:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:47.469 11:34:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:47.469 11:34:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:47.469 11:34:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:47.469 11:34:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:47.469 11:34:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:47.469 11:34:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:47.469 11:34:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:47.469 11:34:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:10:47.469 11:34:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:10:47.469 11:34:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:47.469 11:34:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:47.469 11:34:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:47.469 11:34:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:47.469 11:34:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:47.469 11:34:18 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:47.469 11:34:18 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:47.469 11:34:18 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:47.469 11:34:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.470 11:34:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.470 11:34:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.470 11:34:18 -- paths/export.sh@5 -- # export PATH 00:10:47.470 11:34:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.470 11:34:18 -- nvmf/common.sh@47 -- # : 0 00:10:47.470 11:34:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:47.470 11:34:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:47.470 11:34:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:47.470 11:34:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:47.470 11:34:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:47.470 11:34:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:47.470 11:34:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:47.470 11:34:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:47.470 11:34:18 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:10:47.470 11:34:18 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:10:47.470 11:34:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:47.470 11:34:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:47.470 11:34:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:47.470 11:34:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:47.470 11:34:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.470 11:34:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:47.470 11:34:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.470 11:34:18 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:47.470 11:34:18 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:47.470 11:34:18 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:47.470 11:34:18 -- common/autotest_common.sh@10 -- # set +x 00:10:54.106 11:34:24 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:54.106 11:34:24 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:54.106 11:34:24 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:54.106 11:34:24 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:54.106 11:34:24 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:54.106 11:34:24 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:54.106 11:34:24 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:54.106 11:34:24 -- nvmf/common.sh@295 -- # net_devs=() 00:10:54.106 11:34:24 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:54.106 11:34:24 -- nvmf/common.sh@296 -- # e810=() 00:10:54.106 11:34:24 -- nvmf/common.sh@296 -- # local -ga e810 00:10:54.106 11:34:24 -- nvmf/common.sh@297 -- # x722=() 
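The array bookkeeping that begins here classifies candidate NICs by PCI vendor:device ID before any netdev is touched. In outline it behaves like the sketch below, under the assumption (suggested but not proven by this trace) that pci_bus_cache maps "vendor:device" keys to PCI bus addresses.

    # Sketch of nvmf/common.sh@289-390 as it executes below; not verbatim.
    declare -A pci_bus_cache    # "vendor:device" -> PCI addresses (filled elsewhere)
    mellanox=0x15b3
    net_devs=() mlx=()

    for dev_id in 0xa2dc 0x1021 0xa2d6 0x101d 0x1017 0x1019 0x1015 0x1013; do
        mlx+=(${pci_bus_cache["$mellanox:$dev_id"]})
    done

    # SPDK_TEST_NVMF_NICS=mlx5 in this job, so only the Mellanox list is kept:
    pci_devs=("${mlx[@]}")    # here: 0000:18:00.0 and 0000:18:00.1 (0x15b3:0x1015)

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # sysfs netdev entries
        pci_net_devs=("${pci_net_devs[@]##*/}")             # strip paths -> names
        net_devs+=("${pci_net_devs[@]}")                    # -> mlx_0_0, mlx_0_1
    done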
00:10:54.106 11:34:24 -- nvmf/common.sh@297 -- # local -ga x722 00:10:54.106 11:34:24 -- nvmf/common.sh@298 -- # mlx=() 00:10:54.106 11:34:24 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:54.106 11:34:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:54.106 11:34:24 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:54.106 11:34:24 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:54.106 11:34:24 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:54.106 11:34:24 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:54.106 11:34:24 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:54.106 11:34:24 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:54.106 11:34:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:54.106 11:34:24 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:54.106 11:34:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:54.106 11:34:24 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:54.106 11:34:24 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:54.106 11:34:24 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:54.106 11:34:24 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:54.106 11:34:24 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:54.106 11:34:24 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:54.106 11:34:24 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:54.106 11:34:24 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:54.106 11:34:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:54.106 11:34:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:10:54.106 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:10:54.106 11:34:24 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:54.106 11:34:24 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:54.106 11:34:24 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:54.106 11:34:24 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:54.106 11:34:24 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:54.106 11:34:24 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:54.106 11:34:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:54.106 11:34:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:10:54.106 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:10:54.106 11:34:24 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:54.106 11:34:24 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:54.106 11:34:24 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:54.106 11:34:24 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:54.106 11:34:24 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:54.106 11:34:24 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:54.106 11:34:24 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:54.106 11:34:24 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:54.106 11:34:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:54.106 11:34:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:54.106 11:34:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:54.106 11:34:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:54.106 11:34:24 -- nvmf/common.sh@389 -- # 
echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:10:54.106 Found net devices under 0000:18:00.0: mlx_0_0 00:10:54.106 11:34:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:54.106 11:34:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:54.106 11:34:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:54.106 11:34:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:54.106 11:34:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:54.106 11:34:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:10:54.106 Found net devices under 0000:18:00.1: mlx_0_1 00:10:54.106 11:34:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:54.106 11:34:24 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:54.106 11:34:24 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:54.106 11:34:24 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:54.106 11:34:24 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:10:54.106 11:34:24 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:10:54.106 11:34:24 -- nvmf/common.sh@409 -- # rdma_device_init 00:10:54.106 11:34:24 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:10:54.106 11:34:24 -- nvmf/common.sh@58 -- # uname 00:10:54.106 11:34:24 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:54.106 11:34:24 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:54.106 11:34:24 -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:54.106 11:34:24 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:54.106 11:34:24 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:54.106 11:34:24 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:54.106 11:34:24 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:54.106 11:34:24 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:54.106 11:34:24 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:10:54.106 11:34:24 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:54.106 11:34:24 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:54.106 11:34:24 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:54.106 11:34:24 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:54.106 11:34:24 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:54.106 11:34:24 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:54.106 11:34:24 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:54.106 11:34:24 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:54.106 11:34:24 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:54.106 11:34:24 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:54.106 11:34:24 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:54.106 11:34:24 -- nvmf/common.sh@105 -- # continue 2 00:10:54.106 11:34:24 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:54.106 11:34:24 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:54.106 11:34:24 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:54.106 11:34:24 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:54.106 11:34:24 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:54.106 11:34:24 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:54.106 11:34:24 -- nvmf/common.sh@105 -- # continue 2 00:10:54.106 11:34:24 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:54.106 11:34:24 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:54.106 11:34:24 -- nvmf/common.sh@112 -- # 
interface=mlx_0_0 00:10:54.106 11:34:24 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:54.106 11:34:24 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:54.106 11:34:24 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:54.106 11:34:24 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:54.106 11:34:24 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:54.106 11:34:24 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:54.106 28: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:54.106 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:10:54.106 altname enp24s0f0np0 00:10:54.106 altname ens785f0np0 00:10:54.106 inet 192.168.100.8/24 scope global mlx_0_0 00:10:54.106 valid_lft forever preferred_lft forever 00:10:54.106 11:34:24 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:54.106 11:34:24 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:54.106 11:34:24 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:54.106 11:34:24 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:54.106 11:34:24 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:54.106 11:34:24 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:54.106 11:34:24 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:54.106 11:34:24 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:54.106 11:34:24 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:54.106 29: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:54.106 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:10:54.106 altname enp24s0f1np1 00:10:54.106 altname ens785f1np1 00:10:54.106 inet 192.168.100.9/24 scope global mlx_0_1 00:10:54.106 valid_lft forever preferred_lft forever 00:10:54.106 11:34:24 -- nvmf/common.sh@411 -- # return 0 00:10:54.106 11:34:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:54.106 11:34:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:54.106 11:34:24 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:10:54.106 11:34:24 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:10:54.106 11:34:24 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:54.106 11:34:24 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:54.106 11:34:24 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:54.106 11:34:24 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:54.106 11:34:24 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:54.106 11:34:24 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:54.106 11:34:24 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:54.106 11:34:24 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:54.106 11:34:24 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:54.106 11:34:24 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:54.106 11:34:24 -- nvmf/common.sh@105 -- # continue 2 00:10:54.106 11:34:24 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:54.106 11:34:24 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:54.106 11:34:24 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:54.106 11:34:24 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:54.107 11:34:24 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:54.107 11:34:24 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:54.107 11:34:24 -- nvmf/common.sh@105 -- # continue 2 00:10:54.107 11:34:24 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:54.107 11:34:24 -- 
nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:54.107 11:34:24 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:54.107 11:34:24 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:54.107 11:34:24 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:54.107 11:34:24 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:54.107 11:34:24 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:54.107 11:34:24 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:54.107 11:34:24 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:54.107 11:34:24 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:54.107 11:34:24 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:54.107 11:34:24 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:54.107 11:34:24 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:10:54.107 192.168.100.9' 00:10:54.107 11:34:24 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:10:54.107 192.168.100.9' 00:10:54.107 11:34:24 -- nvmf/common.sh@446 -- # head -n 1 00:10:54.107 11:34:24 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:54.107 11:34:24 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:10:54.107 192.168.100.9' 00:10:54.107 11:34:24 -- nvmf/common.sh@447 -- # tail -n +2 00:10:54.107 11:34:24 -- nvmf/common.sh@447 -- # head -n 1 00:10:54.107 11:34:24 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:54.107 11:34:24 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:10:54.107 11:34:24 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:54.107 11:34:24 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:10:54.107 11:34:24 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:10:54.107 11:34:24 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:10:54.107 11:34:24 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:10:54.107 11:34:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:54.107 11:34:24 -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:54.107 11:34:24 -- common/autotest_common.sh@10 -- # set +x 00:10:54.107 11:34:24 -- nvmf/common.sh@470 -- # nvmfpid=2963082 00:10:54.107 11:34:24 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:54.107 11:34:24 -- nvmf/common.sh@471 -- # waitforlisten 2963082 00:10:54.107 11:34:24 -- common/autotest_common.sh@827 -- # '[' -z 2963082 ']' 00:10:54.107 11:34:24 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.107 11:34:24 -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:54.107 11:34:24 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.107 11:34:24 -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:54.107 11:34:24 -- common/autotest_common.sh@10 -- # set +x 00:10:54.107 [2024-05-15 11:34:24.336904] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
00:10:54.107 [2024-05-15 11:34:24.336965] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:54.107 EAL: No free 2048 kB hugepages reported on node 1 00:10:54.107 [2024-05-15 11:34:24.408581] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.107 [2024-05-15 11:34:24.495327] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:54.107 [2024-05-15 11:34:24.495373] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:54.107 [2024-05-15 11:34:24.495383] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:54.107 [2024-05-15 11:34:24.495391] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:54.107 [2024-05-15 11:34:24.495398] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:54.107 [2024-05-15 11:34:24.495421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:54.674 11:34:25 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:54.674 11:34:25 -- common/autotest_common.sh@860 -- # return 0 00:10:54.674 11:34:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:54.674 11:34:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:54.674 11:34:25 -- common/autotest_common.sh@10 -- # set +x 00:10:54.674 11:34:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:54.674 11:34:25 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:54.674 11:34:25 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.674 11:34:25 -- common/autotest_common.sh@10 -- # set +x 00:10:54.674 [2024-05-15 11:34:25.205585] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d170b0/0x1d1b5a0) succeed. 00:10:54.674 [2024-05-15 11:34:25.214927] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d185b0/0x1d5cc30) succeed. 
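Every rpc_cmd call in this trace is SPDK's JSON-RPC test helper. Functionally it behaves like the sketch below; the real definition in test/common/autotest_common.sh is more elaborate (it can, for example, keep a persistent rpc.py session), so treat this as an approximation rather than the actual helper.

    # Functional approximation only. /var/tmp/spdk.sock is the default
    # application socket that waitforlisten polled for above.
    rpc_cmd() {
        /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk.sock "$@"
    }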
00:10:54.674 11:34:25 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.674 11:34:25 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:54.674 11:34:25 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.674 11:34:25 -- common/autotest_common.sh@10 -- # set +x 00:10:54.674 11:34:25 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.674 11:34:25 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:54.674 11:34:25 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.674 11:34:25 -- common/autotest_common.sh@10 -- # set +x 00:10:54.674 [2024-05-15 11:34:25.279940] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:54.674 [2024-05-15 11:34:25.280182] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:54.674 11:34:25 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.674 11:34:25 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:54.674 11:34:25 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.674 11:34:25 -- common/autotest_common.sh@10 -- # set +x 00:10:54.674 NULL1 00:10:54.674 11:34:25 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.674 11:34:25 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:10:54.674 11:34:25 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.674 11:34:25 -- common/autotest_common.sh@10 -- # set +x 00:10:54.674 11:34:25 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.674 11:34:25 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:54.674 11:34:25 -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.674 11:34:25 -- common/autotest_common.sh@10 -- # set +x 00:10:54.674 11:34:25 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.674 11:34:25 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:54.674 [2024-05-15 11:34:25.334280] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
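Collected from the trace above (fused_ordering.sh@15-22), the target bring-up and the host-side run are:

    rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma \
        -a 192.168.100.8 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512    # 1000 MiB null bdev, 512 B blocks -> "size: 1GB"
    rpc_cmd bdev_wait_for_examine
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

Each fused_ordering(N) line that follows numbers one iteration of the tool's fused-command workload; judging by the tool's name and its output here, a strictly increasing sequence is the passing result. One way to double-check the printed sequence offline (hypothetical post-processing of the captured console log, called build.log here; requires GNU grep for -P):

    grep -oP 'fused_ordering\(\K[0-9]+' build.log \
        | awk 'NR > 1 && $1 != prev + 1 { print "gap before", $1 } { prev = $1 }'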
00:10:54.674 [2024-05-15 11:34:25.334318] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2963185 ] 00:10:54.674 EAL: No free 2048 kB hugepages reported on node 1 00:10:54.933 Attached to nqn.2016-06.io.spdk:cnode1 00:10:54.933 Namespace ID: 1 size: 1GB 00:10:54.933 fused_ordering(0) 00:10:54.933 fused_ordering(1) 00:10:54.933 fused_ordering(2) 00:10:54.933 fused_ordering(3) 00:10:54.933 fused_ordering(4) 00:10:54.933 fused_ordering(5) 00:10:54.933 fused_ordering(6) 00:10:54.933 fused_ordering(7) 00:10:54.933 fused_ordering(8) 00:10:54.933 fused_ordering(9) 00:10:54.933 fused_ordering(10) 00:10:54.933 fused_ordering(11) 00:10:54.933 fused_ordering(12) 00:10:54.933 fused_ordering(13) 00:10:54.933 fused_ordering(14) 00:10:54.933 fused_ordering(15) 00:10:54.933 fused_ordering(16) 00:10:54.933 fused_ordering(17) 00:10:54.933 fused_ordering(18) 00:10:54.933 fused_ordering(19) 00:10:54.933 fused_ordering(20) 00:10:54.933 fused_ordering(21) 00:10:54.933 fused_ordering(22) 00:10:54.933 fused_ordering(23) 00:10:54.933 fused_ordering(24) 00:10:54.933 fused_ordering(25) 00:10:54.933 fused_ordering(26) 00:10:54.933 fused_ordering(27) 00:10:54.933 fused_ordering(28) 00:10:54.933 fused_ordering(29) 00:10:54.933 fused_ordering(30) 00:10:54.933 fused_ordering(31) 00:10:54.933 fused_ordering(32) 00:10:54.933 fused_ordering(33) 00:10:54.933 fused_ordering(34) 00:10:54.933 fused_ordering(35) 00:10:54.933 fused_ordering(36) 00:10:54.933 fused_ordering(37) 00:10:54.933 fused_ordering(38) 00:10:54.933 fused_ordering(39) 00:10:54.933 fused_ordering(40) 00:10:54.933 fused_ordering(41) 00:10:54.933 fused_ordering(42) 00:10:54.933 fused_ordering(43) 00:10:54.933 fused_ordering(44) 00:10:54.933 fused_ordering(45) 00:10:54.933 fused_ordering(46) 00:10:54.933 fused_ordering(47) 00:10:54.933 fused_ordering(48) 00:10:54.933 fused_ordering(49) 00:10:54.933 fused_ordering(50) 00:10:54.933 fused_ordering(51) 00:10:54.933 fused_ordering(52) 00:10:54.933 fused_ordering(53) 00:10:54.933 fused_ordering(54) 00:10:54.933 fused_ordering(55) 00:10:54.933 fused_ordering(56) 00:10:54.933 fused_ordering(57) 00:10:54.933 fused_ordering(58) 00:10:54.933 fused_ordering(59) 00:10:54.933 fused_ordering(60) 00:10:54.933 fused_ordering(61) 00:10:54.933 fused_ordering(62) 00:10:54.933 fused_ordering(63) 00:10:54.933 fused_ordering(64) 00:10:54.933 fused_ordering(65) 00:10:54.933 fused_ordering(66) 00:10:54.933 fused_ordering(67) 00:10:54.933 fused_ordering(68) 00:10:54.933 fused_ordering(69) 00:10:54.933 fused_ordering(70) 00:10:54.933 fused_ordering(71) 00:10:54.933 fused_ordering(72) 00:10:54.933 fused_ordering(73) 00:10:54.933 fused_ordering(74) 00:10:54.933 fused_ordering(75) 00:10:54.933 fused_ordering(76) 00:10:54.933 fused_ordering(77) 00:10:54.933 fused_ordering(78) 00:10:54.933 fused_ordering(79) 00:10:54.933 fused_ordering(80) 00:10:54.933 fused_ordering(81) 00:10:54.934 fused_ordering(82) 00:10:54.934 fused_ordering(83) 00:10:54.934 fused_ordering(84) 00:10:54.934 fused_ordering(85) 00:10:54.934 fused_ordering(86) 00:10:54.934 fused_ordering(87) 00:10:54.934 fused_ordering(88) 00:10:54.934 fused_ordering(89) 00:10:54.934 fused_ordering(90) 00:10:54.934 fused_ordering(91) 00:10:54.934 fused_ordering(92) 00:10:54.934 fused_ordering(93) 00:10:54.934 fused_ordering(94) 00:10:54.934 fused_ordering(95) 00:10:54.934 fused_ordering(96) 00:10:54.934 
00:10:54.934 fused_ordering(97)
00:10:54.934 fused_ordering(98)
[... fused_ordering(99) through fused_ordering(1022) elided: one identical per-request entry each, timestamps advancing from 00:10:54.934 to 00:10:55.454 ...]
00:10:55.454 fused_ordering(1023)
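The fused_ordering(N) lines above are the test app's per-request trace, running here from 97 up to 1023. A quick way to confirm the counter is gapless in a saved copy of this console output (console.log is a hypothetical file name):

  grep -o 'fused_ordering([0-9]*)' console.log |
    sed 's/[^0-9]//g' |
    awk 'NR > 1 && $1 != prev + 1 { printf "gap: %d -> %d\n", prev, $1 } { prev = $1 }'
  # no output means the sequence is contiguous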
00:10:55.454 11:34:26 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:10:55.454 11:34:26 -- target/fused_ordering.sh@25 -- # nvmftestfini
00:10:55.455 11:34:26 -- nvmf/common.sh@477 -- # nvmfcleanup
00:10:55.455 11:34:26 -- nvmf/common.sh@117 -- # sync
00:10:55.455 11:34:26 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:10:55.455 11:34:26 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:10:55.455 11:34:26 -- nvmf/common.sh@120 -- # set +e
00:10:55.455 11:34:26 -- nvmf/common.sh@121 -- # for i in {1..20}
00:10:55.455 11:34:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:10:55.455 rmmod nvme_rdma
00:10:55.455 rmmod nvme_fabrics
00:10:55.455 11:34:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:10:55.455 11:34:26 -- nvmf/common.sh@124 -- # set -e
00:10:55.455 11:34:26 -- nvmf/common.sh@125 -- # return 0
00:10:55.455 11:34:26 -- nvmf/common.sh@478 -- # '[' -n 2963082 ']'
00:10:55.455 11:34:26 -- nvmf/common.sh@479 -- # killprocess 2963082
00:10:55.455 11:34:26 -- common/autotest_common.sh@946 -- # '[' -z 2963082 ']'
00:10:55.455 11:34:26 -- common/autotest_common.sh@950 -- # kill -0 2963082
00:10:55.455 11:34:26 -- common/autotest_common.sh@951 -- # uname
00:10:55.455 11:34:26 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:10:55.455 11:34:26 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2963082
00:10:55.455 11:34:26 -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:10:55.455 11:34:26 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:10:55.455 11:34:26 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2963082'
00:10:55.455 killing process with pid 2963082
00:10:55.455 11:34:26 -- common/autotest_common.sh@965 -- # kill 2963082
00:10:55.455 [2024-05-15 11:34:26.101298] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:10:55.455 11:34:26 -- common/autotest_common.sh@970 -- # wait 2963082
00:10:55.455 [2024-05-15 11:34:26.141973] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048
00:10:55.713 11:34:26 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:10:55.713 11:34:26 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]]
00:10:55.713
00:10:55.713 real 0m8.423s
00:10:55.713 user 0m4.532s
00:10:55.713 sys 0m5.180s
00:10:55.713 11:34:26 -- common/autotest_common.sh@1122 -- # xtrace_disable
00:10:55.713 11:34:26 -- common/autotest_common.sh@10 -- # set +x
00:10:55.713 ************************************
00:10:55.713 END TEST nvmf_fused_ordering
00:10:55.713 ************************************
00:10:55.713 11:34:26 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma
00:10:55.713 11:34:26 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:10:55.713 11:34:26 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:10:55.713 11:34:26 -- common/autotest_common.sh@10 -- # set +x
00:10:55.713 ************************************
00:10:55.713 START TEST nvmf_delete_subsystem
00:10:55.713 ************************************
00:10:55.713 11:34:26 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma
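run_test, invoked above from nvmf/nvmf.sh, is what produces the START TEST/END TEST banners and the time(1) real/user/sys summary seen at the close of each suite. A simplified sketch of that pattern (not SPDK's actual autotest_common.sh implementation):

  run_test_sketch() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"                      # run the suite script with its arguments
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return $rc
  }
  # mirroring the invocation in the log:
  # run_test_sketch nvmf_delete_subsystem test/nvmf/target/delete_subsystem.sh --transport=rdma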
00:10:55.972 * Looking for test storage...
00:10:55.972 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:10:55.972 11:34:26 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
[... nvmf/common.sh@7-@16 trace elided: uname check and defaults NVMF_PORT=4420, NVMF_SECOND_PORT=4421, NVMF_THIRD_PORT=4422, NVMF_IP_PREFIX=192.168.100, NVMF_IP_LEAST_ADDR=8, NVMF_TCP_IP_ADDRESS=127.0.0.1, empty NVMF_TRANSPORT_OPTS, NVMF_SERIAL=SPDKISFASTANDAWESOME ...]
11:34:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
11:34:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562
11:34:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562
11:34:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
11:34:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
11:34:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy
11:34:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
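nvme gen-hostnqn, traced just above, emits a host NQN of the form nqn.2014-08.org.nvmexpress:uuid:<UUID>, preferring the machine's DMI system UUID when one is readable. A stand-in with a random UUID, assuming nvme-cli or uuidgen is installed:

  # nvme-cli does it directly:
  nvme gen-hostnqn
  # or build the same shape by hand (random UUID rather than the DMI one):
  echo "nqn.2014-08.org.nvmexpress:uuid:$(uuidgen)"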
11:34:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
[... scripts/common.sh and paths/export.sh trace elided: /bin/wpdk_common.sh is absent, /etc/opt/spdk-pkgdep/paths/export.sh is sourced, and paths/export.sh@2-@6 prepend the golangci 1.54.2, protoc 21.7 and go 1.21.1 toolchain directories to PATH, export it and echo the result ...]
11:34:26 -- nvmf/common.sh@47 -- # : 0
11:34:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
11:34:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args
11:34:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
11:34:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
11:34:26 -- nvmf/common.sh@51 -- # have_pci_nics=0
11:34:26 -- target/delete_subsystem.sh@12 -- # nvmftestinit
11:34:26 -- nvmf/common.sh@430 -- # '[' -z rdma ']'
11:34:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
11:34:26 -- nvmf/common.sh@437 -- # prepare_net_devs
11:34:26 -- nvmf/common.sh@399 -- # local -g is_hw=no
11:34:26 -- nvmf/common.sh@401 -- # remove_spdk_ns
[... xtrace bookkeeping around _remove_spdk_ns elided ...]
11:34:26 -- nvmf/common.sh@403 -- # [[ phy != virt ]]
11:34:26 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs
00:11:01.242 11:34:31 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
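gather_supported_nvmf_pci_devs, entered above, matches vendor/device IDs read from sysfs against allow-lists (0x8086 Intel, 0x15b3 Mellanox); the trace that follows builds those ID arrays and turns up two 0x1015 ports. Roughly the same lookup outside the harness, reading the files the script reads (a sketch):

  # list Mellanox (0x15b3) PCI functions and their device IDs
  for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor")      # e.g. 0x15b3
    device=$(<"$dev/device")      # e.g. 0x1015
    [[ $vendor == 0x15b3 ]] && echo "${dev##*/} $device"
  done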
[... nvmf/common.sh@291-@328 trace elided: the pci_devs/pci_net_devs/pci_drivers/net_devs arrays are declared, the e810 (0x1592, 0x159b), x722 (0x37d2) and mlx (0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x1017, 0x1019, 0x1015, 0x1013) ID lists are filled, and the mlx5 branch narrows pci_devs to the mlx list ...]
11:34:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
11:34:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)'
Found 0000:18:00.0 (0x15b3 - 0x1015)
[... driver/bind checks at nvmf/common.sh@342-@352 elided ...]
11:34:31 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15'
11:34:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)'
Found 0000:18:00.1 (0x15b3 - 0x1015)
[... the same checks for the second port elided ...]
11:34:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0'
Found net devices under 0000:18:00.0: mlx_0_0
11:34:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
11:34:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1'
Found net devices under 0000:18:00.1: mlx_0_1
11:34:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
11:34:31 -- nvmf/common.sh@403 -- # is_hw=yes
11:34:31 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
11:34:31 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]]
11:34:31 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]]
11:34:31 -- nvmf/common.sh@409 -- # rdma_device_init
11:34:31 -- nvmf/common.sh@490 -- # load_ib_rdma_modules
11:34:31 -- nvmf/common.sh@58 -- # uname
11:34:31 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']'
11:34:31 -- nvmf/common.sh@62 -- # modprobe ib_cm
11:34:31 -- nvmf/common.sh@63 -- # modprobe ib_core
11:34:31 -- nvmf/common.sh@64 -- # modprobe ib_umad
11:34:31 -- nvmf/common.sh@65 -- # modprobe ib_uverbs
11:34:31 -- nvmf/common.sh@66 -- # modprobe iw_cm
11:34:31 -- nvmf/common.sh@67 -- # modprobe rdma_cm
11:34:31 -- nvmf/common.sh@68 -- # modprobe rdma_ucm
11:34:31 -- nvmf/common.sh@491 -- # allocate_nic_ips
[... get_rdma_if_list / rxe_cfg walk elided: interfaces mlx_0_0 and mlx_0_1 are selected ...]
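The seven modprobes just above are the whole of load_ib_rdma_modules: the InfiniBand/RDMA core stack has to be loaded before the interface walk below can find usable RDMA netdevs. Collapsed into a loop (module names taken straight from the trace):

  for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod" || echo "warning: could not load $mod" >&2
  done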
11:34:31 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list)
11:34:31 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0
11:34:31 -- nvmf/common.sh@112 -- # interface=mlx_0_0
11:34:31 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0
11:34:31 -- nvmf/common.sh@113 -- # awk '{print $4}'
11:34:31 -- nvmf/common.sh@113 -- # cut -d/ -f1
11:34:31 -- nvmf/common.sh@74 -- # ip=192.168.100.8
11:34:31 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]]
11:34:31 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0
00:11:01.243 28: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:11:01.243 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff
00:11:01.243 altname enp24s0f0np0
00:11:01.243 altname ens785f0np0
00:11:01.243 inet 192.168.100.8/24 scope global mlx_0_0
00:11:01.243 valid_lft forever preferred_lft forever
11:34:31 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list)
11:34:31 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1
11:34:31 -- nvmf/common.sh@112 -- # interface=mlx_0_1
11:34:31 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1
11:34:31 -- nvmf/common.sh@113 -- # awk '{print $4}'
11:34:31 -- nvmf/common.sh@113 -- # cut -d/ -f1
11:34:31 -- nvmf/common.sh@74 -- # ip=192.168.100.9
11:34:31 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]]
11:34:31 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1
00:11:01.243 29: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:11:01.243 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff
00:11:01.243 altname enp24s0f1np1
00:11:01.243 altname ens785f1np1
00:11:01.243 inet 192.168.100.9/24 scope global mlx_0_1
00:11:01.243 valid_lft forever preferred_lft forever
11:34:31 -- nvmf/common.sh@411 -- # return 0
11:34:31 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
11:34:31 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma'
11:34:31 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]]
11:34:31 -- nvmf/common.sh@445 -- # get_available_rdma_ips
[... second get_rdma_if_list / rxe_cfg walk elided: nvmf/common.sh@86-@87 extract the addresses of mlx_0_0 and mlx_0_1 again via the same ip/awk/cut pipeline ...]
11:34:31 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8
00:11:01.243 192.168.100.9'
11:34:31 -- nvmf/common.sh@446 -- # echo '192.168.100.8
00:11:01.243 192.168.100.9'
11:34:31 -- nvmf/common.sh@446 -- # head -n 1
11:34:31 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
11:34:31 -- nvmf/common.sh@447 -- # echo '192.168.100.8
00:11:01.243 192.168.100.9'
11:34:31 -- nvmf/common.sh@447 -- # tail -n +2
11:34:31 -- nvmf/common.sh@447 -- # head -n 1
11:34:31 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
11:34:31 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']'
11:34:31 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
11:34:31 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']'
11:34:31 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']'
11:34:31 -- nvmf/common.sh@463 -- # modprobe nvme-rdma
11:34:31 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
11:34:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
11:34:31 -- common/autotest_common.sh@720 -- # xtrace_disable
11:34:31 -- common/autotest_common.sh@10 -- # set +x
11:34:31 -- nvmf/common.sh@470 -- # nvmfpid=2966007
11:34:31 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
11:34:31 -- nvmf/common.sh@471 -- # waitforlisten 2966007
11:34:31 -- common/autotest_common.sh@827 -- # '[' -z 2966007 ']'
11:34:31 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
11:34:31 -- common/autotest_common.sh@832 -- # local max_retries=100
11:34:31 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:01.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
11:34:31 -- common/autotest_common.sh@836 -- # xtrace_disable
11:34:31 -- common/autotest_common.sh@10 -- # set +x
00:11:01.243 [2024-05-15 11:34:31.988292] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization...
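get_ip_address above is a three-stage pipe over ip -o -4 addr show: field 4 is ADDR/PREFIX, and cut strips the prefix length. Reassembled as a standalone helper, together with the head/tail split that produced NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP (values as seen in this run):

  get_ip_address() {
    local interface=$1
    # -o prints one line per address; $4 looks like 192.168.100.8/24
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  RDMA_IP_LIST=$(printf '%s\n' "$(get_ip_address mlx_0_0)" "$(get_ip_address mlx_0_1)")
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9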
00:11:01.243 [2024-05-15 11:34:31.988346] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:01.502 EAL: No free 2048 kB hugepages reported on node 1
00:11:01.502 [2024-05-15 11:34:32.057918] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2
00:11:01.502 [2024-05-15 11:34:32.149728] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:01.502 [2024-05-15 11:34:32.149768] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:01.502 [2024-05-15 11:34:32.149779] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:01.502 [2024-05-15 11:34:32.149788] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:01.502 [2024-05-15 11:34:32.149796] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:01.502 [2024-05-15 11:34:32.149845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:11:01.502 [2024-05-15 11:34:32.149848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:02.069 11:34:32 -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:11:02.069 11:34:32 -- common/autotest_common.sh@860 -- # return 0
00:11:02.069 11:34:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:11:02.069 11:34:32 -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:02.069 11:34:32 -- common/autotest_common.sh@10 -- # set +x
00:11:02.328 11:34:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:02.328 11:34:32 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:11:02.328 11:34:32 -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:02.328 11:34:32 -- common/autotest_common.sh@10 -- # set +x
00:11:02.328 [2024-05-15 11:34:32.861746] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x141e930/0x1422e20) succeed.
00:11:02.328 [2024-05-15 11:34:32.870842] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x141fe30/0x14644b0) succeed.
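The 'EAL: No free 2048 kB hugepages reported on node 1' notice above looks benign in this run: the target comes up, both reactors start and both IB devices are created, so the hugepage pool evidently lives on node 0. The per-node pools can be inspected directly (standard Linux sysfs layout):

  grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
  grep -i hugepages_ /proc/meminfo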
00:11:02.328 11:34:32 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:02.328 11:34:32 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:11:02.328 11:34:32 -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:02.328 11:34:32 -- common/autotest_common.sh@10 -- # set +x
00:11:02.328 11:34:32 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:02.328 11:34:32 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:11:02.328 11:34:32 -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:02.328 11:34:32 -- common/autotest_common.sh@10 -- # set +x
00:11:02.328 [2024-05-15 11:34:32.965772] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:11:02.328 [2024-05-15 11:34:32.966147] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:11:02.328 11:34:32 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:02.328 11:34:32 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:11:02.328 11:34:32 -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:02.328 11:34:32 -- common/autotest_common.sh@10 -- # set +x
00:11:02.328 NULL1
00:11:02.328 11:34:32 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:02.328 11:34:32 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:11:02.328 11:34:32 -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:02.328 11:34:32 -- common/autotest_common.sh@10 -- # set +x
00:11:02.328 Delay0
00:11:02.329 11:34:32 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:02.329 11:34:32 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:02.329 11:34:32 -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:02.329 11:34:32 -- common/autotest_common.sh@10 -- # set +x
00:11:02.329 11:34:32 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:02.329 11:34:32 -- target/delete_subsystem.sh@28 -- # perf_pid=2966204
00:11:02.329 11:34:32 -- target/delete_subsystem.sh@30 -- # sleep 2
00:11:02.329 11:34:32 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
00:11:02.329 EAL: No free 2048 kB hugepages reported on node 1
00:11:02.329 [2024-05-15 11:34:33.068995] subsystem.c:1520:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
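rpc_cmd in the harness forwards to SPDK's scripts/rpc.py over /var/tmp/spdk.sock, so the target stood up above can be reproduced by hand with the same sequence (arguments copied from the trace; run from an SPDK checkout against a live nvmf_tgt):

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $rpc bdev_null_create NULL1 1000 512     # 1000 MiB null bdev, 512-byte blocks
  $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The delay bdev injects one-second average and tail latencies (the four values are microseconds), which is what guarantees spdk_nvme_perf still has I/O outstanding when the subsystem is deleted shortly after.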
00:11:04.862 11:34:34 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:04.862 11:34:34 -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:04.862 11:34:34 -- common/autotest_common.sh@10 -- # set +x
00:11:05.429 NVMe io qpair process completion error
00:11:05.429 NVMe io qpair process completion error
00:11:05.429 NVMe io qpair process completion error
00:11:05.429 NVMe io qpair process completion error
00:11:05.429 NVMe io qpair process completion error
00:11:05.429 NVMe io qpair process completion error
00:11:05.429 11:34:36 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:05.429 11:34:36 -- target/delete_subsystem.sh@34 -- # delay=0
00:11:05.429 11:34:36 -- target/delete_subsystem.sh@35 -- # kill -0 2966204
00:11:05.429 11:34:36 -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:11:06.003 11:34:36 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:11:06.003 11:34:36 -- target/delete_subsystem.sh@35 -- # kill -0 2966204
00:11:06.003 11:34:36 -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:11:06.570 Read completed with error (sct=0, sc=8)
00:11:06.570 starting I/O failed: -6
[... several hundred further "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided as the in-flight perf I/O drains ...]
with error (sct=0, sc=8) 00:11:06.571 Read completed with error (sct=0, sc=8) 00:11:06.571 Write completed with error (sct=0, sc=8) 00:11:06.571 Read completed with error (sct=0, sc=8) 00:11:06.571 Read completed with error (sct=0, sc=8) 00:11:06.571 Write completed with error (sct=0, sc=8) 00:11:06.571 Read completed with error (sct=0, sc=8) 00:11:06.571 Read completed with error (sct=0, sc=8) 00:11:06.571 Write completed with error (sct=0, sc=8) 00:11:06.571 Read completed with error (sct=0, sc=8) 00:11:06.571 Write completed with error (sct=0, sc=8) 00:11:06.571 Read completed with error (sct=0, sc=8) 00:11:06.571 Read completed with error (sct=0, sc=8) 00:11:06.571 Write completed with error (sct=0, sc=8) 00:11:06.571 Read completed with error (sct=0, sc=8) 00:11:06.571 Read completed with error (sct=0, sc=8) 00:11:06.571 Read completed with error (sct=0, sc=8) 00:11:06.572 Read completed with error (sct=0, sc=8) 00:11:06.572 Read completed with error (sct=0, sc=8) 00:11:06.572 Read completed with error (sct=0, sc=8) 00:11:06.572 Write completed with error (sct=0, sc=8) 00:11:06.572 Write completed with error (sct=0, sc=8) 00:11:06.572 Read completed with error (sct=0, sc=8) 00:11:06.572 Read completed with error (sct=0, sc=8) 00:11:06.572 Write completed with error (sct=0, sc=8) 00:11:06.572 Write completed with error (sct=0, sc=8) 00:11:06.572 Read completed with error (sct=0, sc=8) 00:11:06.572 Read completed with error (sct=0, sc=8) 00:11:06.572 Read completed with error (sct=0, sc=8) 00:11:06.572 Write completed with error (sct=0, sc=8) 00:11:06.572 Read completed with error (sct=0, sc=8) 00:11:06.572 Write completed with error (sct=0, sc=8) 00:11:06.572 Read completed with error (sct=0, sc=8) 00:11:06.572 Read completed with error (sct=0, sc=8) 00:11:06.572 Read completed with error (sct=0, sc=8) 00:11:06.572 Read completed with error (sct=0, sc=8) 00:11:06.572 Write completed with error (sct=0, sc=8) 00:11:06.572 Write completed with error (sct=0, sc=8) 00:11:06.572 Write completed with error (sct=0, sc=8) 00:11:06.572 Write completed with error (sct=0, sc=8) 00:11:06.572 Read completed with error (sct=0, sc=8) 00:11:06.572 Read completed with error (sct=0, sc=8) 00:11:06.572 Write completed with error (sct=0, sc=8) 00:11:06.572 Write completed with error (sct=0, sc=8) 00:11:06.572 Write completed with error (sct=0, sc=8) 00:11:06.572 Read completed with error (sct=0, sc=8) 00:11:06.572 Read completed with error (sct=0, sc=8) 00:11:06.572 Write completed with error (sct=0, sc=8) 00:11:06.572 Read completed with error (sct=0, sc=8) 00:11:06.572 Read completed with error (sct=0, sc=8) 00:11:06.572 Read completed with error (sct=0, sc=8) 00:11:06.572 Read completed with error (sct=0, sc=8) 00:11:06.572 Write completed with error (sct=0, sc=8) 00:11:06.572 Read completed with error (sct=0, sc=8) 00:11:06.572 Read completed with error (sct=0, sc=8) 00:11:06.572 Write completed with error (sct=0, sc=8) 00:11:06.572 Read completed with error (sct=0, sc=8) 00:11:06.572 Read completed with error (sct=0, sc=8) 00:11:06.572 Read completed with error (sct=0, sc=8) 00:11:06.572 Read completed with error (sct=0, sc=8) 00:11:06.572 Write completed with error (sct=0, sc=8) 00:11:06.572 Read completed with error (sct=0, sc=8) 00:11:06.572 Write completed with error (sct=0, sc=8) 00:11:06.572 Read completed with error (sct=0, sc=8) 00:11:06.572 Read completed with error (sct=0, sc=8) 00:11:06.572 Read completed with error (sct=0, sc=8) 00:11:06.572 Read completed with error (sct=0, sc=8) 
00:11:06.572 Write completed with error (sct=0, sc=8) 00:11:06.572 Read completed with error (sct=0, sc=8) 00:11:06.572 Read completed with error (sct=0, sc=8) 00:11:06.572 Read completed with error (sct=0, sc=8) 00:11:06.572 Write completed with error (sct=0, sc=8) 00:11:06.572 Write completed with error (sct=0, sc=8) 00:11:06.572 Write completed with error (sct=0, sc=8) 00:11:06.572 Read completed with error (sct=0, sc=8) 00:11:06.572 Write completed with error (sct=0, sc=8) 00:11:06.572 Read completed with error (sct=0, sc=8) 00:11:06.572 11:34:37 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:06.572 11:34:37 -- target/delete_subsystem.sh@35 -- # kill -0 2966204 00:11:06.572 11:34:37 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:06.572 [2024-05-15 11:34:37.167151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:11:06.572 [2024-05-15 11:34:37.167196] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:11:06.572 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:11:06.572 Initializing NVMe Controllers 00:11:06.572 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:11:06.572 Controller IO queue size 128, less than required. 00:11:06.572 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:06.572 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:06.572 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:06.572 Initialization complete. Launching workers. 00:11:06.572 ======================================================== 00:11:06.572 Latency(us) 00:11:06.572 Device Information : IOPS MiB/s Average min max 00:11:06.572 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.45 0.04 1594238.79 1000068.37 2977416.27 00:11:06.572 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.45 0.04 1595581.48 1001183.74 2978448.92 00:11:06.572 ======================================================== 00:11:06.572 Total : 160.91 0.08 1594910.14 1000068.37 2978448.92 00:11:06.572 00:11:07.138 11:34:37 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:07.138 11:34:37 -- target/delete_subsystem.sh@35 -- # kill -0 2966204 00:11:07.138 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2966204) - No such process 00:11:07.138 11:34:37 -- target/delete_subsystem.sh@45 -- # NOT wait 2966204 00:11:07.138 11:34:37 -- common/autotest_common.sh@648 -- # local es=0 00:11:07.138 11:34:37 -- common/autotest_common.sh@650 -- # valid_exec_arg wait 2966204 00:11:07.138 11:34:37 -- common/autotest_common.sh@636 -- # local arg=wait 00:11:07.138 11:34:37 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:07.138 11:34:37 -- common/autotest_common.sh@640 -- # type -t wait 00:11:07.138 11:34:37 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:07.138 11:34:37 -- common/autotest_common.sh@651 -- # wait 2966204 00:11:07.138 11:34:37 -- common/autotest_common.sh@651 -- # es=1 00:11:07.138 11:34:37 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:07.138 11:34:37 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:07.138 11:34:37 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
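What the trace above exercises: delete_subsystem.sh starts spdk_nvme_perf against nqn.2016-06.io.spdk:cnode1 in the background, deletes the subsystem over RPC while I/O is still queued, and then polls the perf PID until the process notices the dead controller and exits; the elided completion-error flood and the failed wait are the expected outcome. A minimal sketch of that pattern, assuming $SPDK_DIR points at an SPDK build and the target is already serving the subsystem on 192.168.100.8:4420:

    # run a short perf job in the background and remember its PID
    "$SPDK_DIR/build/bin/spdk_nvme_perf" -c 0xC -q 128 -o 512 -w randrw -M 70 -t 3 -P 4 \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' &
    perf_pid=$!

    # delete the subsystem while I/O is in flight; queued requests complete with sc=8
    "$SPDK_DIR/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    # bounded poll, as in the script: give perf up to ~15 s to exit on its own
    delay=0
    while kill -0 "$perf_pid" 2> /dev/null; do
        (( delay++ > 30 )) && break
        sleep 0.5
    done

The NOT wait block above then asserts that waiting on the dead PID returns non-zero, i.e. perf really did abort instead of finishing cleanly.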
00:11:07.138 11:34:37 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:07.138 11:34:37 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.138 11:34:37 -- common/autotest_common.sh@10 -- # set +x 00:11:07.138 11:34:37 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.138 11:34:37 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:07.138 11:34:37 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.138 11:34:37 -- common/autotest_common.sh@10 -- # set +x 00:11:07.138 [2024-05-15 11:34:37.682409] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:07.138 11:34:37 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.138 11:34:37 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:07.138 11:34:37 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.138 11:34:37 -- common/autotest_common.sh@10 -- # set +x 00:11:07.138 11:34:37 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.138 11:34:37 -- target/delete_subsystem.sh@54 -- # perf_pid=2966792 00:11:07.138 11:34:37 -- target/delete_subsystem.sh@56 -- # delay=0 00:11:07.138 11:34:37 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:07.138 11:34:37 -- target/delete_subsystem.sh@57 -- # kill -0 2966792 00:11:07.138 11:34:37 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:07.138 EAL: No free 2048 kB hugepages reported on node 1 00:11:07.138 [2024-05-15 11:34:37.773321] subsystem.c:1520:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
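Before the second perf run the trace rebuilds the subsystem it just deleted, this time capped at 10 namespaces, and the new perf process (pid 2966792) is left to finish its full 3-second run. The rpc_cmd sequence above boils down to the following sketch (rpc.py shortened to $rpc; Delay0 is the test's delay bdev):

    rpc="$SPDK_DIR/scripts/rpc.py"   # the log spells out the full workspace path

    # recreate the subsystem: -a allows any host, -s sets the serial, -m caps namespaces
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    # re-add the RDMA listener the initiator connects to
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # attach the delay bdev as a namespace again
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0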
00:11:07.704 11:34:38 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:07.704 11:34:38 -- target/delete_subsystem.sh@57 -- # kill -0 2966792
00:11:07.704 11:34:38 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:11:07.962 [the same delay/kill/sleep poll repeated roughly every 0.5 s through 00:11:14.147 while the 3-second perf run completed; thirteen further iterations elided]
00:11:14.408 Initializing NVMe Controllers
00:11:14.408 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:11:14.408 Controller IO queue size 128, less than required.
00:11:14.408 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:14.408 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:11:14.408 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:11:14.408 Initialization complete. Launching workers.
00:11:14.408 ========================================================
00:11:14.408 Latency(us)
00:11:14.408 Device Information : IOPS MiB/s Average min max
00:11:14.408 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001284.45 1000057.64 1003997.87
00:11:14.408 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002422.07 1000075.93 1005732.19
00:11:14.408 ========================================================
00:11:14.408 Total : 256.00 0.12 1001853.26 1000057.64 1005732.19
00:11:14.408
00:11:14.667 11:34:45 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:14.667 11:34:45 -- target/delete_subsystem.sh@57 -- # kill -0 2966792
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2966792) - No such process
00:11:14.667 11:34:45 -- target/delete_subsystem.sh@67 -- # wait 2966792
00:11:14.667 11:34:45 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:11:14.667 11:34:45 -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:11:14.667 11:34:45 -- nvmf/common.sh@477 -- # nvmfcleanup
00:11:14.667 11:34:45 -- nvmf/common.sh@117 -- # sync
00:11:14.667 11:34:45 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:11:14.667 11:34:45 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:11:14.667 11:34:45 -- nvmf/common.sh@120 -- # set +e
00:11:14.667 11:34:45 -- nvmf/common.sh@121 -- # for i in {1..20}
00:11:14.667 11:34:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
00:11:14.667 11:34:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:11:14.667 11:34:45 -- nvmf/common.sh@124 -- # set -e
00:11:14.667 11:34:45 -- nvmf/common.sh@125 -- # return 0
00:11:14.667 11:34:45 -- nvmf/common.sh@478 -- # '[' -n 2966007 ']'
00:11:14.667 11:34:45 -- nvmf/common.sh@479 -- # killprocess 2966007
00:11:14.667 11:34:45 -- common/autotest_common.sh@946 -- # '[' -z 2966007 ']'
00:11:14.667 11:34:45 -- common/autotest_common.sh@950 -- # kill -0 2966007
00:11:14.667 11:34:45 -- common/autotest_common.sh@951 -- # uname
00:11:14.667 11:34:45 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:11:14.667 11:34:45 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2966007
00:11:14.667 11:34:45 -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:11:14.667 11:34:45 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:11:14.667 11:34:45 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2966007'
killing process with pid 2966007
00:11:14.667 11:34:45 -- common/autotest_common.sh@965 -- # kill 2966007
[2024-05-15 11:34:45.371415] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:11:14.667 11:34:45 -- common/autotest_common.sh@970 -- # wait 2966007
[2024-05-15 11:34:45.425931] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048
00:11:14.926 11:34:45 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
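nvmftestfini's cleanup above unloads the initiator-side kernel modules inside a bounded retry loop, since nvme-rdma can remain referenced for a moment after the last disconnect; here it succeeds on the first pass (the rmmod lines are modprobe -v's output). The shape of that loop, as a sketch with an assumed back-off that the trace itself does not show:

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && break   # -v prints the underlying rmmod calls
        sleep 1                             # assumed pause between attempts
    done
    modprobe -v -r nvme-fabrics
    set -e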
00:11:14.926 11:34:45 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:11:14.926 00:11:14.926 real 0m19.228s 00:11:14.926 user 0m49.645s 00:11:14.926 sys 0m5.444s 00:11:14.926 11:34:45 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:14.926 11:34:45 -- common/autotest_common.sh@10 -- # set +x
00:11:15.184 ************************************ 00:11:15.184 END TEST nvmf_delete_subsystem 00:11:15.184 ************************************
00:11:15.184 11:34:45 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:11:15.184 11:34:45 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:15.184 11:34:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:15.184 11:34:45 -- common/autotest_common.sh@10 -- # set +x
00:11:15.184 ************************************ 00:11:15.184 START TEST nvmf_ns_masking 00:11:15.184 ************************************
00:11:15.184 11:34:45 -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:11:15.184 * Looking for test storage... 00:11:15.184 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:15.184 11:34:45 -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:15.184 11:34:45 -- nvmf/common.sh@7 -- # uname -s 00:11:15.184 11:34:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:15.184 11:34:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:15.184 11:34:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:15.184 11:34:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:15.184 11:34:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:15.184 11:34:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:15.184 11:34:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:15.184 11:34:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:15.184 11:34:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:15.184 11:34:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:15.184 11:34:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:11:15.184 11:34:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:11:15.184 11:34:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:15.184 11:34:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:15.184 11:34:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:15.184 11:34:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:15.184 11:34:45 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:15.184 11:34:45 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:15.184 11:34:45 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:15.184 11:34:45 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:15.185 11:34:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[repeated /opt/{golangci,protoc,go} toolchain prefixes elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:15.185 11:34:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same toolchain and system dirs, full value elided]
00:11:15.185 11:34:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same toolchain and system dirs, full value elided]
00:11:15.185 11:34:45 -- paths/export.sh@5 -- # export PATH
00:11:15.185 11:34:45 -- paths/export.sh@6 -- # echo [the exported PATH, elided]
00:11:15.185 11:34:45 -- nvmf/common.sh@47 -- # : 0 00:11:15.185 11:34:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:15.185 11:34:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:15.185 11:34:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:15.185 11:34:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:15.185 11:34:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:15.185 11:34:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:15.185 11:34:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:15.185 11:34:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:15.185 11:34:45 -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:15.185 11:34:45 -- target/ns_masking.sh@11 -- # loops=5 00:11:15.185 11:34:45 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:15.185 11:34:45 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:11:15.185 11:34:45 -- target/ns_masking.sh@15 -- # uuidgen 00:11:15.185 11:34:45 -- target/ns_masking.sh@15 -- # HOSTID=85683b9e-02a2-4c59-88b9-7832e3bde2c8 00:11:15.185 11:34:45 -- target/ns_masking.sh@44 -- # nvmftestinit 00:11:15.185 11:34:45 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:11:15.185 11:34:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:15.185 11:34:45 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:15.185 11:34:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:15.185 11:34:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:15.185 11:34:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.185 11:34:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:15.185 11:34:45 --
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.185 11:34:45 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:15.185 11:34:45 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:15.185 11:34:45 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:15.185 11:34:45 -- common/autotest_common.sh@10 -- # set +x 00:11:21.744 11:34:51 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:21.744 11:34:51 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:21.744 11:34:51 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:21.744 11:34:51 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:21.744 11:34:51 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:21.744 11:34:51 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:21.744 11:34:51 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:21.744 11:34:51 -- nvmf/common.sh@295 -- # net_devs=() 00:11:21.744 11:34:51 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:21.744 11:34:51 -- nvmf/common.sh@296 -- # e810=() 00:11:21.744 11:34:51 -- nvmf/common.sh@296 -- # local -ga e810 00:11:21.744 11:34:51 -- nvmf/common.sh@297 -- # x722=() 00:11:21.744 11:34:51 -- nvmf/common.sh@297 -- # local -ga x722 00:11:21.744 11:34:51 -- nvmf/common.sh@298 -- # mlx=() 00:11:21.744 11:34:51 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:21.744 11:34:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:21.744 11:34:51 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:21.744 11:34:51 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:21.744 11:34:51 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:21.744 11:34:51 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:21.744 11:34:51 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:21.744 11:34:51 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:21.744 11:34:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:21.744 11:34:51 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:21.744 11:34:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:21.744 11:34:51 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:21.744 11:34:51 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:21.744 11:34:51 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:21.744 11:34:51 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:21.744 11:34:51 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:21.744 11:34:51 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:21.744 11:34:51 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:21.744 11:34:51 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:21.744 11:34:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:21.744 11:34:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:11:21.744 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:11:21.744 11:34:51 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:21.744 11:34:51 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:21.744 11:34:51 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:21.744 11:34:51 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:21.744 11:34:51 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:21.744 11:34:51 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:21.744 11:34:51 -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:11:21.744 11:34:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:11:21.744 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:11:21.744 11:34:51 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:21.744 11:34:51 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:21.744 11:34:51 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:21.744 11:34:51 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:21.744 11:34:51 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:21.744 11:34:51 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:21.744 11:34:51 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:21.744 11:34:51 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:21.744 11:34:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:21.744 11:34:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.744 11:34:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:21.744 11:34:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.744 11:34:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:21.744 Found net devices under 0000:18:00.0: mlx_0_0 00:11:21.744 11:34:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.744 11:34:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:21.744 11:34:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.744 11:34:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:21.744 11:34:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.744 11:34:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:21.744 Found net devices under 0000:18:00.1: mlx_0_1 00:11:21.744 11:34:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.744 11:34:51 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:21.744 11:34:51 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:21.744 11:34:51 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:21.744 11:34:51 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:11:21.744 11:34:51 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:11:21.744 11:34:51 -- nvmf/common.sh@409 -- # rdma_device_init 00:11:21.744 11:34:51 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:11:21.744 11:34:51 -- nvmf/common.sh@58 -- # uname 00:11:21.744 11:34:51 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:21.744 11:34:51 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:21.744 11:34:51 -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:21.744 11:34:51 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:21.744 11:34:51 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:21.744 11:34:51 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:21.744 11:34:51 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:21.744 11:34:51 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:21.744 11:34:51 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:11:21.744 11:34:51 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:21.744 11:34:51 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:21.744 11:34:51 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:21.744 11:34:51 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:21.744 11:34:51 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:21.744 11:34:51 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:21.744 11:34:51 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 
00:11:21.744 11:34:51 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:21.744 11:34:51 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:21.744 11:34:51 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:21.744 11:34:51 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:21.744 11:34:51 -- nvmf/common.sh@105 -- # continue 2 00:11:21.744 11:34:51 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:21.744 11:34:51 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:21.744 11:34:51 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:21.744 11:34:51 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:21.744 11:34:51 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:21.744 11:34:51 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:21.744 11:34:51 -- nvmf/common.sh@105 -- # continue 2 00:11:21.744 11:34:51 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:21.744 11:34:51 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:21.744 11:34:51 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:21.744 11:34:51 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:21.744 11:34:51 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:21.744 11:34:51 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:21.744 11:34:51 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:21.744 11:34:51 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:21.744 11:34:51 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:21.744 28: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:21.744 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:11:21.744 altname enp24s0f0np0 00:11:21.744 altname ens785f0np0 00:11:21.744 inet 192.168.100.8/24 scope global mlx_0_0 00:11:21.744 valid_lft forever preferred_lft forever 00:11:21.744 11:34:51 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:21.744 11:34:51 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:21.744 11:34:51 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:21.744 11:34:51 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:21.744 11:34:51 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:21.744 11:34:51 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:21.744 11:34:51 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:21.744 11:34:51 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:21.744 11:34:51 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:21.744 29: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:21.744 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:11:21.744 altname enp24s0f1np1 00:11:21.744 altname ens785f1np1 00:11:21.744 inet 192.168.100.9/24 scope global mlx_0_1 00:11:21.744 valid_lft forever preferred_lft forever 00:11:21.744 11:34:51 -- nvmf/common.sh@411 -- # return 0 00:11:21.744 11:34:51 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:21.744 11:34:51 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:21.744 11:34:51 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:11:21.744 11:34:51 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:11:21.744 11:34:51 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:21.744 11:34:51 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:21.744 11:34:51 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:21.744 11:34:51 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:21.744 11:34:51 -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:21.745 11:34:51 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:21.745 11:34:51 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:21.745 11:34:51 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:21.745 11:34:51 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:21.745 11:34:51 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:21.745 11:34:51 -- nvmf/common.sh@105 -- # continue 2 00:11:21.745 11:34:51 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:21.745 11:34:51 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:21.745 11:34:51 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:21.745 11:34:51 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:21.745 11:34:51 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:21.745 11:34:51 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:21.745 11:34:51 -- nvmf/common.sh@105 -- # continue 2 00:11:21.745 11:34:51 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:21.745 11:34:51 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:21.745 11:34:51 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:21.745 11:34:51 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:21.745 11:34:51 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:21.745 11:34:51 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:21.745 11:34:51 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:21.745 11:34:51 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:21.745 11:34:51 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:21.745 11:34:51 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:21.745 11:34:51 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:21.745 11:34:51 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:21.745 11:34:51 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:11:21.745 192.168.100.9' 00:11:21.745 11:34:51 -- nvmf/common.sh@446 -- # head -n 1 00:11:21.745 11:34:51 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:11:21.745 192.168.100.9' 00:11:21.745 11:34:51 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:21.745 11:34:51 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:11:21.745 192.168.100.9' 00:11:21.745 11:34:51 -- nvmf/common.sh@447 -- # tail -n +2 00:11:21.745 11:34:51 -- nvmf/common.sh@447 -- # head -n 1 00:11:21.745 11:34:51 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:21.745 11:34:51 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:11:21.745 11:34:51 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:21.745 11:34:51 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:11:21.745 11:34:51 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:11:21.745 11:34:51 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:11:21.745 11:34:51 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:11:21.745 11:34:51 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:21.745 11:34:51 -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:21.745 11:34:51 -- common/autotest_common.sh@10 -- # set +x 00:11:21.745 11:34:51 -- nvmf/common.sh@470 -- # nvmfpid=2970759 00:11:21.745 11:34:51 -- nvmf/common.sh@471 -- # waitforlisten 2970759 00:11:21.745 11:34:51 -- common/autotest_common.sh@827 -- # '[' -z 2970759 ']' 00:11:21.745 11:34:51 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.745 
11:34:51 -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:21.745 11:34:51 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.745 11:34:51 -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:21.745 11:34:51 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:21.745 11:34:51 -- common/autotest_common.sh@10 -- # set +x 00:11:21.745 [2024-05-15 11:34:51.966587] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:11:21.745 [2024-05-15 11:34:51.966642] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.745 EAL: No free 2048 kB hugepages reported on node 1 00:11:21.745 [2024-05-15 11:34:52.037464] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:21.745 [2024-05-15 11:34:52.126449] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:21.745 [2024-05-15 11:34:52.126504] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:21.745 [2024-05-15 11:34:52.126514] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:21.745 [2024-05-15 11:34:52.126522] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:21.745 [2024-05-15 11:34:52.126529] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:21.745 [2024-05-15 11:34:52.126623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.745 [2024-05-15 11:34:52.126707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:21.745 [2024-05-15 11:34:52.126790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:21.745 [2024-05-15 11:34:52.126791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.312 11:34:52 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:22.312 11:34:52 -- common/autotest_common.sh@860 -- # return 0 00:11:22.312 11:34:52 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:22.312 11:34:52 -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:22.312 11:34:52 -- common/autotest_common.sh@10 -- # set +x 00:11:22.312 11:34:52 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:22.312 11:34:52 -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:22.312 [2024-05-15 11:34:53.033215] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ac9f00/0x1ace3f0) succeed. 00:11:22.313 [2024-05-15 11:34:53.043733] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1acb540/0x1b0fa80) succeed. 
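By this point nvmf/common.sh has derived the target addresses from the RDMA interfaces (192.168.100.8 and 192.168.100.9), nvmf_tgt is up, and the two create_ib_device notices above confirm the RDMA transport is ready for subsystem RPCs. Condensed into a sketch, with $RDMA_IP_LIST assumed to hold one address per line and the socket wait standing in for the test's waitforlisten helper:

    # first two RDMA interface addresses, exactly as the head/tail calls above do it
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

    # start the target, wait for its RPC socket, then create the RDMA transport
    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    "$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192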
00:11:22.570 11:34:53 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:11:22.570 11:34:53 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:11:22.570 11:34:53 -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:22.827 Malloc1 00:11:22.827 11:34:53 -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:22.827 Malloc2 00:11:22.827 11:34:53 -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:23.085 11:34:53 -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:23.343 11:34:53 -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:23.343 [2024-05-15 11:34:54.096811] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:23.343 [2024-05-15 11:34:54.097231] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:23.601 11:34:54 -- target/ns_masking.sh@61 -- # connect 00:11:23.601 11:34:54 -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 85683b9e-02a2-4c59-88b9-7832e3bde2c8 -a 192.168.100.8 -s 4420 -i 4 00:11:23.860 11:34:54 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:11:23.860 11:34:54 -- common/autotest_common.sh@1194 -- # local i=0 00:11:23.860 11:34:54 -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:23.860 11:34:54 -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:11:23.860 11:34:54 -- common/autotest_common.sh@1201 -- # sleep 2 00:11:25.850 11:34:56 -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:25.850 11:34:56 -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:25.850 11:34:56 -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:11:25.850 11:34:56 -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:11:25.850 11:34:56 -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:25.850 11:34:56 -- common/autotest_common.sh@1204 -- # return 0 00:11:25.850 11:34:56 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:25.850 11:34:56 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:25.850 11:34:56 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:25.850 11:34:56 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:25.850 11:34:56 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:11:25.850 11:34:56 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:25.850 11:34:56 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:25.850 [ 0]:0x1 00:11:25.850 11:34:56 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:25.850 11:34:56 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:25.850 11:34:56 -- target/ns_masking.sh@40 -- # nguid=2f0b38a80c8840de8b5707071bd71d75 00:11:25.850 11:34:56 -- target/ns_masking.sh@41 -- # [[ 2f0b38a80c8840de8b5707071bd71d75 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:25.850 11:34:56 -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:26.109 11:34:56 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:11:26.109 11:34:56 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:26.109 11:34:56 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:26.109 [ 0]:0x1 00:11:26.109 11:34:56 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:26.109 11:34:56 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:26.109 11:34:56 -- target/ns_masking.sh@40 -- # nguid=2f0b38a80c8840de8b5707071bd71d75 00:11:26.109 11:34:56 -- target/ns_masking.sh@41 -- # [[ 2f0b38a80c8840de8b5707071bd71d75 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:26.109 11:34:56 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:11:26.109 11:34:56 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:26.109 11:34:56 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:26.109 [ 1]:0x2 00:11:26.109 11:34:56 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:26.109 11:34:56 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:26.109 11:34:56 -- target/ns_masking.sh@40 -- # nguid=f27131eb7bd04078bb9a1700a26625d3 00:11:26.109 11:34:56 -- target/ns_masking.sh@41 -- # [[ f27131eb7bd04078bb9a1700a26625d3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:26.109 11:34:56 -- target/ns_masking.sh@69 -- # disconnect 00:11:26.109 11:34:56 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:26.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.676 11:34:57 -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:26.676 11:34:57 -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:26.935 11:34:57 -- target/ns_masking.sh@77 -- # connect 1 00:11:26.935 11:34:57 -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 85683b9e-02a2-4c59-88b9-7832e3bde2c8 -a 192.168.100.8 -s 4420 -i 4 00:11:27.193 11:34:57 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:27.193 11:34:57 -- common/autotest_common.sh@1194 -- # local i=0 00:11:27.193 11:34:57 -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:27.193 11:34:57 -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:11:27.193 11:34:57 -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:11:27.193 11:34:57 -- common/autotest_common.sh@1201 -- # sleep 2 00:11:29.754 11:34:59 -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:29.754 11:34:59 -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:29.754 11:34:59 -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:11:29.754 11:34:59 -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:11:29.754 11:34:59 -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:29.754 11:34:59 -- common/autotest_common.sh@1204 -- # return 0 00:11:29.754 11:34:59 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:29.754 11:34:59 -- target/ns_masking.sh@22 -- # jq -r 
'.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:29.754 11:34:59 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:29.754 11:34:59 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:29.754 11:34:59 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:11:29.754 11:34:59 -- common/autotest_common.sh@648 -- # local es=0 00:11:29.754 11:34:59 -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:29.754 11:34:59 -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:29.754 11:34:59 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:29.754 11:34:59 -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:29.754 11:34:59 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:29.754 11:34:59 -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:29.754 11:34:59 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:29.754 11:34:59 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:29.754 11:35:00 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:29.754 11:35:00 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:29.754 11:35:00 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:29.754 11:35:00 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:29.754 11:35:00 -- common/autotest_common.sh@651 -- # es=1 00:11:29.754 11:35:00 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:29.754 11:35:00 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:29.754 11:35:00 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:29.754 11:35:00 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:11:29.754 11:35:00 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:29.754 11:35:00 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:29.754 [ 0]:0x2 00:11:29.754 11:35:00 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:29.754 11:35:00 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:29.754 11:35:00 -- target/ns_masking.sh@40 -- # nguid=f27131eb7bd04078bb9a1700a26625d3 00:11:29.754 11:35:00 -- target/ns_masking.sh@41 -- # [[ f27131eb7bd04078bb9a1700a26625d3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:29.754 11:35:00 -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:29.754 11:35:00 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:11:29.754 11:35:00 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:29.754 11:35:00 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:29.754 [ 0]:0x1 00:11:29.754 11:35:00 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:29.754 11:35:00 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:29.754 11:35:00 -- target/ns_masking.sh@40 -- # nguid=2f0b38a80c8840de8b5707071bd71d75 00:11:29.754 11:35:00 -- target/ns_masking.sh@41 -- # [[ 2f0b38a80c8840de8b5707071bd71d75 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:29.754 11:35:00 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:11:29.754 11:35:00 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:29.754 11:35:00 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:29.754 [ 1]:0x2 00:11:29.754 11:35:00 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:29.754 11:35:00 -- target/ns_masking.sh@40 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:29.754 11:35:00 -- target/ns_masking.sh@40 -- # nguid=f27131eb7bd04078bb9a1700a26625d3 00:11:29.754 11:35:00 -- target/ns_masking.sh@41 -- # [[ f27131eb7bd04078bb9a1700a26625d3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:29.754 11:35:00 -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:30.014 11:35:00 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:11:30.014 11:35:00 -- common/autotest_common.sh@648 -- # local es=0 00:11:30.014 11:35:00 -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:30.014 11:35:00 -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:30.014 11:35:00 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:30.014 11:35:00 -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:30.014 11:35:00 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:30.014 11:35:00 -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:30.014 11:35:00 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:30.014 11:35:00 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:30.014 11:35:00 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:30.014 11:35:00 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:30.014 11:35:00 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:30.014 11:35:00 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:30.014 11:35:00 -- common/autotest_common.sh@651 -- # es=1 00:11:30.014 11:35:00 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:30.014 11:35:00 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:30.014 11:35:00 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:30.014 11:35:00 -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:11:30.014 11:35:00 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:30.014 11:35:00 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:30.014 [ 0]:0x2 00:11:30.014 11:35:00 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:30.014 11:35:00 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:30.014 11:35:00 -- target/ns_masking.sh@40 -- # nguid=f27131eb7bd04078bb9a1700a26625d3 00:11:30.014 11:35:00 -- target/ns_masking.sh@41 -- # [[ f27131eb7bd04078bb9a1700a26625d3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:30.014 11:35:00 -- target/ns_masking.sh@91 -- # disconnect 00:11:30.014 11:35:00 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:30.273 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.273 11:35:01 -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:30.531 11:35:01 -- target/ns_masking.sh@95 -- # connect 2 00:11:30.531 11:35:01 -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 85683b9e-02a2-4c59-88b9-7832e3bde2c8 -a 192.168.100.8 -s 4420 -i 4 00:11:30.789 11:35:01 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:30.789 11:35:01 -- common/autotest_common.sh@1194 -- # local i=0 00:11:30.789 11:35:01 -- common/autotest_common.sh@1195 -- # local 
nvme_device_counter=1 nvme_devices=0 00:11:30.790 11:35:01 -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:11:30.790 11:35:01 -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:11:30.790 11:35:01 -- common/autotest_common.sh@1201 -- # sleep 2 00:11:33.322 11:35:03 -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:33.322 11:35:03 -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:33.322 11:35:03 -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:11:33.322 11:35:03 -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:11:33.322 11:35:03 -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:33.322 11:35:03 -- common/autotest_common.sh@1204 -- # return 0 00:11:33.322 11:35:03 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:33.322 11:35:03 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:33.322 11:35:03 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:33.322 11:35:03 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:33.322 11:35:03 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:11:33.322 11:35:03 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:33.322 11:35:03 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:33.322 [ 0]:0x1 00:11:33.322 11:35:03 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:33.322 11:35:03 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:33.322 11:35:03 -- target/ns_masking.sh@40 -- # nguid=2f0b38a80c8840de8b5707071bd71d75 00:11:33.322 11:35:03 -- target/ns_masking.sh@41 -- # [[ 2f0b38a80c8840de8b5707071bd71d75 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:33.322 11:35:03 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:11:33.322 11:35:03 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:33.322 11:35:03 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:33.322 [ 1]:0x2 00:11:33.322 11:35:03 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:33.322 11:35:03 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:33.322 11:35:03 -- target/ns_masking.sh@40 -- # nguid=f27131eb7bd04078bb9a1700a26625d3 00:11:33.322 11:35:03 -- target/ns_masking.sh@41 -- # [[ f27131eb7bd04078bb9a1700a26625d3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:33.322 11:35:03 -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:33.322 11:35:03 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:11:33.322 11:35:03 -- common/autotest_common.sh@648 -- # local es=0 00:11:33.322 11:35:03 -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:33.322 11:35:03 -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:33.322 11:35:03 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:33.322 11:35:03 -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:33.322 11:35:03 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:33.322 11:35:03 -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:33.322 11:35:03 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:33.322 11:35:03 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:33.322 11:35:03 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:33.322 11:35:03 -- target/ns_masking.sh@40 -- # jq 
-r .nguid 00:11:33.322 11:35:03 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:33.322 11:35:03 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:33.322 11:35:03 -- common/autotest_common.sh@651 -- # es=1 00:11:33.322 11:35:03 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:33.322 11:35:03 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:33.322 11:35:03 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:33.322 11:35:03 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:11:33.322 11:35:03 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:33.322 11:35:03 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:33.322 [ 0]:0x2 00:11:33.322 11:35:03 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:33.322 11:35:03 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:33.322 11:35:03 -- target/ns_masking.sh@40 -- # nguid=f27131eb7bd04078bb9a1700a26625d3 00:11:33.322 11:35:03 -- target/ns_masking.sh@41 -- # [[ f27131eb7bd04078bb9a1700a26625d3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:33.322 11:35:03 -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:33.322 11:35:03 -- common/autotest_common.sh@648 -- # local es=0 00:11:33.322 11:35:03 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:33.322 11:35:03 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:33.322 11:35:03 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:33.322 11:35:03 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:33.322 11:35:03 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:33.322 11:35:03 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:33.322 11:35:03 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:33.322 11:35:03 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:33.322 11:35:03 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:11:33.322 11:35:03 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:33.580 [2024-05-15 11:35:04.130634] nvmf_rpc.c:1776:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:33.580 request: 00:11:33.580 { 00:11:33.580 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:33.580 "nsid": 2, 00:11:33.580 "host": "nqn.2016-06.io.spdk:host1", 00:11:33.580 "method": "nvmf_ns_remove_host", 00:11:33.580 "req_id": 1 00:11:33.580 } 00:11:33.580 Got JSON-RPC error response 00:11:33.580 response: 00:11:33.580 { 00:11:33.581 "code": -32602, 00:11:33.581 "message": "Invalid parameters" 00:11:33.581 } 00:11:33.581 11:35:04 -- common/autotest_common.sh@651 -- # es=1 00:11:33.581 11:35:04 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:33.581 11:35:04 -- common/autotest_common.sh@670 -- # 
[[ -n '' ]] 00:11:33.581 11:35:04 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:33.581 11:35:04 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:11:33.581 11:35:04 -- common/autotest_common.sh@648 -- # local es=0 00:11:33.581 11:35:04 -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:33.581 11:35:04 -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:33.581 11:35:04 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:33.581 11:35:04 -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:33.581 11:35:04 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:33.581 11:35:04 -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:33.581 11:35:04 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:33.581 11:35:04 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:33.581 11:35:04 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:33.581 11:35:04 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:33.581 11:35:04 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:33.581 11:35:04 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:33.581 11:35:04 -- common/autotest_common.sh@651 -- # es=1 00:11:33.581 11:35:04 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:33.581 11:35:04 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:33.581 11:35:04 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:33.581 11:35:04 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:11:33.581 11:35:04 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:33.581 11:35:04 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:33.581 [ 0]:0x2 00:11:33.581 11:35:04 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:33.581 11:35:04 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:33.581 11:35:04 -- target/ns_masking.sh@40 -- # nguid=f27131eb7bd04078bb9a1700a26625d3 00:11:33.581 11:35:04 -- target/ns_masking.sh@41 -- # [[ f27131eb7bd04078bb9a1700a26625d3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:33.581 11:35:04 -- target/ns_masking.sh@108 -- # disconnect 00:11:33.581 11:35:04 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:34.149 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.149 11:35:04 -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:34.149 11:35:04 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:11:34.149 11:35:04 -- target/ns_masking.sh@114 -- # nvmftestfini 00:11:34.149 11:35:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:34.149 11:35:04 -- nvmf/common.sh@117 -- # sync 00:11:34.149 11:35:04 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:34.149 11:35:04 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:34.149 11:35:04 -- nvmf/common.sh@120 -- # set +e 00:11:34.149 11:35:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:34.149 11:35:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:34.149 rmmod nvme_rdma 00:11:34.149 rmmod nvme_fabrics 00:11:34.149 11:35:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:34.149 11:35:04 -- nvmf/common.sh@124 -- # set -e 00:11:34.149 11:35:04 -- nvmf/common.sh@125 -- # return 0 00:11:34.149 11:35:04 -- nvmf/common.sh@478 -- # '[' -n 
2970759 ']' 00:11:34.149 11:35:04 -- nvmf/common.sh@479 -- # killprocess 2970759 00:11:34.149 11:35:04 -- common/autotest_common.sh@946 -- # '[' -z 2970759 ']' 00:11:34.149 11:35:04 -- common/autotest_common.sh@950 -- # kill -0 2970759 00:11:34.149 11:35:04 -- common/autotest_common.sh@951 -- # uname 00:11:34.149 11:35:04 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:34.149 11:35:04 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2970759 00:11:34.149 11:35:04 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:34.149 11:35:04 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:34.149 11:35:04 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2970759' 00:11:34.149 killing process with pid 2970759 00:11:34.149 11:35:04 -- common/autotest_common.sh@965 -- # kill 2970759 00:11:34.149 [2024-05-15 11:35:04.910207] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:34.149 11:35:04 -- common/autotest_common.sh@970 -- # wait 2970759 00:11:34.408 [2024-05-15 11:35:04.988676] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:11:34.668 11:35:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:34.668 11:35:05 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:11:34.668 00:11:34.668 real 0m19.487s 00:11:34.668 user 0m56.635s 00:11:34.668 sys 0m6.202s 00:11:34.668 11:35:05 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:34.668 11:35:05 -- common/autotest_common.sh@10 -- # set +x 00:11:34.668 ************************************ 00:11:34.668 END TEST nvmf_ns_masking 00:11:34.668 ************************************ 00:11:34.668 11:35:05 -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:11:34.668 11:35:05 -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:11:34.668 11:35:05 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:34.668 11:35:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:34.668 11:35:05 -- common/autotest_common.sh@10 -- # set +x 00:11:34.668 ************************************ 00:11:34.668 START TEST nvmf_nvme_cli 00:11:34.668 ************************************ 00:11:34.668 11:35:05 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:11:34.927 * Looking for test storage... 
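The ns_masking run that ends above leans on one pattern throughout: a namespace counts as visible to the connected host only when nvme list-ns reports it and nvme id-ns returns a non-zero NGUID; masked namespaces identify with an all-zero NGUID. A condensed sketch of that check, reconstructed from the xtrace lines rather than copied from ns_masking.sh (the device path, NSIDs, and all-zero compare are as logged; the helper structure is inferred):

  ns_is_visible() {
      local nsid=$1
      # A masked namespace drops out of the controller's namespace list...
      nvme list-ns /dev/nvme0 | grep -q "$nsid" || return 1
      # ...and identifies with an all-zero NGUID when queried directly.
      local nguid
      nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
      [[ $nguid != 00000000000000000000000000000000 ]]
  }

  ns_is_visible 0x1   # passes only while nvmf_ns_add_host grants host1 access to nsid 1

The nvme_cli test starting here drives the same kind of RDMA target with the stock nvme-cli tooling instead.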
00:11:34.927 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:34.927 11:35:05 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:34.927 11:35:05 -- nvmf/common.sh@7 -- # uname -s 00:11:34.927 11:35:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:34.927 11:35:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:34.927 11:35:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:34.927 11:35:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:34.927 11:35:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:34.927 11:35:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:34.927 11:35:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:34.927 11:35:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:34.927 11:35:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:34.927 11:35:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:34.927 11:35:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:11:34.927 11:35:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:11:34.927 11:35:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:34.927 11:35:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:34.927 11:35:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:34.927 11:35:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:34.927 11:35:05 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:34.927 11:35:05 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.927 11:35:05 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.927 11:35:05 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.927 11:35:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.927 11:35:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.927 11:35:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.927 11:35:05 -- paths/export.sh@5 -- # export PATH 00:11:34.927 11:35:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.927 11:35:05 -- nvmf/common.sh@47 -- # : 0 00:11:34.927 11:35:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:34.927 11:35:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:34.927 11:35:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:34.927 11:35:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:34.927 11:35:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:34.927 11:35:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:34.927 11:35:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:34.927 11:35:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:34.927 11:35:05 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:34.927 11:35:05 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:34.927 11:35:05 -- target/nvme_cli.sh@14 -- # devs=() 00:11:34.927 11:35:05 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:11:34.927 11:35:05 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:11:34.927 11:35:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:34.927 11:35:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:34.927 11:35:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:34.927 11:35:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:34.927 11:35:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.927 11:35:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:34.927 11:35:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.927 11:35:05 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:34.927 11:35:05 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:34.927 11:35:05 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:34.927 11:35:05 -- common/autotest_common.sh@10 -- # set +x 00:11:41.499 11:35:11 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:41.499 11:35:11 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:41.499 11:35:11 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:41.499 11:35:11 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:41.499 11:35:11 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:41.499 11:35:11 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:41.499 11:35:11 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:41.499 11:35:11 -- nvmf/common.sh@295 -- # net_devs=() 00:11:41.499 11:35:11 -- nvmf/common.sh@295 -- 
# local -ga net_devs 00:11:41.499 11:35:11 -- nvmf/common.sh@296 -- # e810=() 00:11:41.499 11:35:11 -- nvmf/common.sh@296 -- # local -ga e810 00:11:41.499 11:35:11 -- nvmf/common.sh@297 -- # x722=() 00:11:41.499 11:35:11 -- nvmf/common.sh@297 -- # local -ga x722 00:11:41.499 11:35:11 -- nvmf/common.sh@298 -- # mlx=() 00:11:41.499 11:35:11 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:41.499 11:35:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:41.499 11:35:11 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:41.499 11:35:11 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:41.499 11:35:11 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:41.499 11:35:11 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:41.499 11:35:11 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:41.499 11:35:11 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:41.499 11:35:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:41.499 11:35:11 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:41.499 11:35:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:41.499 11:35:11 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:41.499 11:35:11 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:41.499 11:35:11 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:41.499 11:35:11 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:41.499 11:35:11 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:41.499 11:35:11 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:41.499 11:35:11 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:41.499 11:35:11 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:41.499 11:35:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:41.499 11:35:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:11:41.499 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:11:41.499 11:35:11 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:41.499 11:35:11 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:41.499 11:35:11 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:41.499 11:35:11 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:41.499 11:35:11 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:41.499 11:35:11 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:41.499 11:35:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:41.499 11:35:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:11:41.499 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:11:41.499 11:35:11 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:41.499 11:35:11 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:41.499 11:35:11 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:41.499 11:35:11 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:41.499 11:35:11 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:41.499 11:35:11 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:41.499 11:35:11 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:41.499 11:35:11 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:41.499 11:35:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:41.499 11:35:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:11:41.499 11:35:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:41.499 11:35:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.499 11:35:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:41.499 Found net devices under 0000:18:00.0: mlx_0_0 00:11:41.499 11:35:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.499 11:35:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:41.499 11:35:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.499 11:35:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:41.499 11:35:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.499 11:35:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:41.499 Found net devices under 0000:18:00.1: mlx_0_1 00:11:41.499 11:35:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.499 11:35:11 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:41.499 11:35:11 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:41.499 11:35:11 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:41.499 11:35:11 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:11:41.499 11:35:11 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:11:41.499 11:35:11 -- nvmf/common.sh@409 -- # rdma_device_init 00:11:41.499 11:35:11 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:11:41.499 11:35:11 -- nvmf/common.sh@58 -- # uname 00:11:41.499 11:35:11 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:41.499 11:35:11 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:41.499 11:35:11 -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:41.499 11:35:11 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:41.499 11:35:11 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:41.499 11:35:11 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:41.499 11:35:11 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:41.499 11:35:11 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:41.499 11:35:11 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:11:41.499 11:35:11 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:41.499 11:35:11 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:41.499 11:35:11 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:41.499 11:35:11 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:41.499 11:35:11 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:41.499 11:35:11 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:41.499 11:35:11 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:41.499 11:35:11 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:41.499 11:35:11 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.499 11:35:11 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:41.499 11:35:11 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:41.499 11:35:11 -- nvmf/common.sh@105 -- # continue 2 00:11:41.499 11:35:11 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:41.499 11:35:11 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.499 11:35:11 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:41.499 11:35:11 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.499 11:35:11 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:41.499 11:35:11 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:41.499 11:35:11 -- nvmf/common.sh@105 -- # continue 2 00:11:41.499 
11:35:11 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:41.499 11:35:11 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:41.499 11:35:11 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:41.499 11:35:11 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:41.499 11:35:11 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:41.499 11:35:11 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:41.499 11:35:11 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:41.499 11:35:11 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:41.499 11:35:11 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:41.499 28: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:41.499 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:11:41.499 altname enp24s0f0np0 00:11:41.499 altname ens785f0np0 00:11:41.499 inet 192.168.100.8/24 scope global mlx_0_0 00:11:41.499 valid_lft forever preferred_lft forever 00:11:41.499 11:35:11 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:41.499 11:35:11 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:41.499 11:35:11 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:41.499 11:35:11 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:41.499 11:35:11 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:41.499 11:35:11 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:41.499 11:35:11 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:41.499 11:35:11 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:41.499 11:35:11 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:41.499 29: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:41.499 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:11:41.499 altname enp24s0f1np1 00:11:41.499 altname ens785f1np1 00:11:41.499 inet 192.168.100.9/24 scope global mlx_0_1 00:11:41.499 valid_lft forever preferred_lft forever 00:11:41.499 11:35:11 -- nvmf/common.sh@411 -- # return 0 00:11:41.499 11:35:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:41.499 11:35:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:41.499 11:35:11 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:11:41.499 11:35:11 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:11:41.499 11:35:11 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:41.499 11:35:11 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:41.499 11:35:11 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:41.499 11:35:11 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:41.499 11:35:11 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:41.499 11:35:11 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:41.499 11:35:11 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:41.499 11:35:11 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.499 11:35:11 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:41.499 11:35:11 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:41.499 11:35:11 -- nvmf/common.sh@105 -- # continue 2 00:11:41.500 11:35:11 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:41.500 11:35:11 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.500 11:35:11 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:41.500 11:35:11 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.500 11:35:11 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:41.500 11:35:11 -- nvmf/common.sh@104 -- # 
echo mlx_0_1 00:11:41.500 11:35:11 -- nvmf/common.sh@105 -- # continue 2 00:11:41.500 11:35:11 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:41.500 11:35:11 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:41.500 11:35:11 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:41.500 11:35:11 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:41.500 11:35:11 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:41.500 11:35:11 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:41.500 11:35:11 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:41.500 11:35:11 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:41.500 11:35:11 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:41.500 11:35:11 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:41.500 11:35:11 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:41.500 11:35:11 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:41.500 11:35:11 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:11:41.500 192.168.100.9' 00:11:41.500 11:35:11 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:11:41.500 192.168.100.9' 00:11:41.500 11:35:11 -- nvmf/common.sh@446 -- # head -n 1 00:11:41.500 11:35:11 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:41.500 11:35:11 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:11:41.500 192.168.100.9' 00:11:41.500 11:35:11 -- nvmf/common.sh@447 -- # tail -n +2 00:11:41.500 11:35:11 -- nvmf/common.sh@447 -- # head -n 1 00:11:41.500 11:35:11 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:41.500 11:35:11 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:11:41.500 11:35:11 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:41.500 11:35:11 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:11:41.500 11:35:11 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:11:41.500 11:35:11 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:11:41.500 11:35:11 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:41.500 11:35:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:41.500 11:35:11 -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:41.500 11:35:11 -- common/autotest_common.sh@10 -- # set +x 00:11:41.500 11:35:11 -- nvmf/common.sh@470 -- # nvmfpid=2975457 00:11:41.500 11:35:11 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:41.500 11:35:11 -- nvmf/common.sh@471 -- # waitforlisten 2975457 00:11:41.500 11:35:11 -- common/autotest_common.sh@827 -- # '[' -z 2975457 ']' 00:11:41.500 11:35:11 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.500 11:35:11 -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:41.500 11:35:11 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.500 11:35:11 -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:41.500 11:35:11 -- common/autotest_common.sh@10 -- # set +x 00:11:41.500 [2024-05-15 11:35:11.481381] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
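One detail worth pulling out of the interface scan just traced: the harness derives its target addresses by taking the first IPv4 address of each RDMA-capable netdev. A minimal equivalent, assuming the interface names from this run (mlx_0_0 and mlx_0_1):

  # Prints 192.168.100.8 then 192.168.100.9; the first becomes
  # NVMF_FIRST_TARGET_IP, the second NVMF_SECOND_TARGET_IP.
  for ifc in mlx_0_0 mlx_0_1; do
      ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
  done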
00:11:41.500 [2024-05-15 11:35:11.481441] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.500 EAL: No free 2048 kB hugepages reported on node 1 00:11:41.500 [2024-05-15 11:35:11.554240] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:41.500 [2024-05-15 11:35:11.643427] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:41.500 [2024-05-15 11:35:11.643473] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:41.500 [2024-05-15 11:35:11.643482] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:41.500 [2024-05-15 11:35:11.643491] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:41.500 [2024-05-15 11:35:11.643498] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:41.500 [2024-05-15 11:35:11.643559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.500 [2024-05-15 11:35:11.643647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:41.500 [2024-05-15 11:35:11.643712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:41.500 [2024-05-15 11:35:11.643713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.758 11:35:12 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:41.758 11:35:12 -- common/autotest_common.sh@860 -- # return 0 00:11:41.758 11:35:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:41.758 11:35:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:41.758 11:35:12 -- common/autotest_common.sh@10 -- # set +x 00:11:41.759 11:35:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:41.759 11:35:12 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:41.759 11:35:12 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.759 11:35:12 -- common/autotest_common.sh@10 -- # set +x 00:11:41.759 [2024-05-15 11:35:12.372908] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1465f00/0x146a3f0) succeed. 00:11:41.759 [2024-05-15 11:35:12.383476] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1467540/0x14aba80) succeed. 
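With nvmf_tgt running and the IB devices created, the rpc_cmd traces that follow provision the target end to end. Collected in one place for readability (rpc_cmd in the harness wraps spdk/scripts/rpc.py; every argument below is taken from the trace):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0    # 64 MiB backing bdev, 512 B blocks
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420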
00:11:41.759 11:35:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.759 11:35:12 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:41.759 11:35:12 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.759 11:35:12 -- common/autotest_common.sh@10 -- # set +x 00:11:42.018 Malloc0 00:11:42.018 11:35:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.018 11:35:12 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:42.018 11:35:12 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.018 11:35:12 -- common/autotest_common.sh@10 -- # set +x 00:11:42.018 Malloc1 00:11:42.018 11:35:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.018 11:35:12 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:42.018 11:35:12 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.018 11:35:12 -- common/autotest_common.sh@10 -- # set +x 00:11:42.018 11:35:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.018 11:35:12 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:42.018 11:35:12 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.018 11:35:12 -- common/autotest_common.sh@10 -- # set +x 00:11:42.018 11:35:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.018 11:35:12 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:42.018 11:35:12 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.018 11:35:12 -- common/autotest_common.sh@10 -- # set +x 00:11:42.018 11:35:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.018 11:35:12 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:42.018 11:35:12 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.018 11:35:12 -- common/autotest_common.sh@10 -- # set +x 00:11:42.018 [2024-05-15 11:35:12.593243] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:42.018 [2024-05-15 11:35:12.593640] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:42.018 11:35:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.018 11:35:12 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:42.018 11:35:12 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.018 11:35:12 -- common/autotest_common.sh@10 -- # set +x 00:11:42.018 11:35:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.018 11:35:12 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:11:42.018 00:11:42.018 Discovery Log Number of Records 2, Generation counter 2 00:11:42.018 =====Discovery Log Entry 0====== 00:11:42.018 trtype: rdma 00:11:42.018 adrfam: ipv4 00:11:42.018 subtype: current discovery subsystem 00:11:42.018 treq: not required 00:11:42.018 portid: 0 00:11:42.018 trsvcid: 4420 00:11:42.018 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:42.018 traddr: 192.168.100.8 00:11:42.018 eflags: explicit discovery connections, duplicate discovery 
information 00:11:42.018 rdma_prtype: not specified 00:11:42.018 rdma_qptype: connected 00:11:42.018 rdma_cms: rdma-cm 00:11:42.018 rdma_pkey: 0x0000 00:11:42.018 =====Discovery Log Entry 1====== 00:11:42.018 trtype: rdma 00:11:42.018 adrfam: ipv4 00:11:42.018 subtype: nvme subsystem 00:11:42.018 treq: not required 00:11:42.018 portid: 0 00:11:42.018 trsvcid: 4420 00:11:42.018 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:42.018 traddr: 192.168.100.8 00:11:42.018 eflags: none 00:11:42.018 rdma_prtype: not specified 00:11:42.018 rdma_qptype: connected 00:11:42.018 rdma_cms: rdma-cm 00:11:42.018 rdma_pkey: 0x0000 00:11:42.018 11:35:12 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:42.018 11:35:12 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:42.018 11:35:12 -- nvmf/common.sh@511 -- # local dev _ 00:11:42.018 11:35:12 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:42.018 11:35:12 -- nvmf/common.sh@510 -- # nvme list 00:11:42.018 11:35:12 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:11:42.018 11:35:12 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:42.018 11:35:12 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:11:42.018 11:35:12 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:42.018 11:35:12 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:42.018 11:35:12 -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:42.954 11:35:13 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:42.954 11:35:13 -- common/autotest_common.sh@1194 -- # local i=0 00:11:42.954 11:35:13 -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:42.954 11:35:13 -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:11:42.954 11:35:13 -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:11:42.954 11:35:13 -- common/autotest_common.sh@1201 -- # sleep 2 00:11:45.484 11:35:15 -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:45.484 11:35:15 -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:45.484 11:35:15 -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:11:45.484 11:35:15 -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:11:45.484 11:35:15 -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:45.484 11:35:15 -- common/autotest_common.sh@1204 -- # return 0 00:11:45.484 11:35:15 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:11:45.484 11:35:15 -- nvmf/common.sh@511 -- # local dev _ 00:11:45.484 11:35:15 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:45.484 11:35:15 -- nvmf/common.sh@510 -- # nvme list 00:11:45.484 11:35:15 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:11:45.484 11:35:15 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:45.484 11:35:15 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:11:45.484 11:35:15 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:45.484 11:35:15 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:45.484 11:35:15 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:11:45.484 11:35:15 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:45.484 11:35:15 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:45.484 11:35:15 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:11:45.484 11:35:15 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:45.484 11:35:15 -- 
target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:11:45.484 /dev/nvme0n1 ]] 00:11:45.484 11:35:15 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:11:45.484 11:35:15 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:11:45.484 11:35:15 -- nvmf/common.sh@511 -- # local dev _ 00:11:45.484 11:35:15 -- nvmf/common.sh@510 -- # nvme list 00:11:45.484 11:35:15 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:45.484 11:35:15 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:11:45.484 11:35:15 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:45.484 11:35:15 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:11:45.484 11:35:15 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:45.484 11:35:15 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:45.484 11:35:15 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:11:45.484 11:35:15 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:45.484 11:35:15 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:45.484 11:35:15 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:11:45.484 11:35:15 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:45.484 11:35:15 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:11:45.484 11:35:15 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:46.050 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.050 11:35:16 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:46.050 11:35:16 -- common/autotest_common.sh@1215 -- # local i=0 00:11:46.050 11:35:16 -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:11:46.050 11:35:16 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.050 11:35:16 -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:11:46.050 11:35:16 -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.050 11:35:16 -- common/autotest_common.sh@1227 -- # return 0 00:11:46.050 11:35:16 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:11:46.050 11:35:16 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:46.050 11:35:16 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.050 11:35:16 -- common/autotest_common.sh@10 -- # set +x 00:11:46.050 11:35:16 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.050 11:35:16 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:46.050 11:35:16 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:11:46.050 11:35:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:46.050 11:35:16 -- nvmf/common.sh@117 -- # sync 00:11:46.050 11:35:16 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:46.050 11:35:16 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:46.050 11:35:16 -- nvmf/common.sh@120 -- # set +e 00:11:46.050 11:35:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:46.050 11:35:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:46.050 rmmod nvme_rdma 00:11:46.050 rmmod nvme_fabrics 00:11:46.309 11:35:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:46.309 11:35:16 -- nvmf/common.sh@124 -- # set -e 00:11:46.309 11:35:16 -- nvmf/common.sh@125 -- # return 0 00:11:46.309 11:35:16 -- nvmf/common.sh@478 -- # '[' -n 2975457 ']' 00:11:46.309 11:35:16 -- nvmf/common.sh@479 -- # killprocess 2975457 00:11:46.309 11:35:16 -- common/autotest_common.sh@946 -- # '[' -z 2975457 ']' 00:11:46.309 11:35:16 -- common/autotest_common.sh@950 -- # kill -0 2975457 00:11:46.309 11:35:16 -- common/autotest_common.sh@951 
-- # uname 00:11:46.309 11:35:16 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:46.309 11:35:16 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2975457 00:11:46.309 11:35:16 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:46.309 11:35:16 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:46.309 11:35:16 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2975457' 00:11:46.309 killing process with pid 2975457 00:11:46.309 11:35:16 -- common/autotest_common.sh@965 -- # kill 2975457 00:11:46.309 [2024-05-15 11:35:16.894227] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:46.309 11:35:16 -- common/autotest_common.sh@970 -- # wait 2975457 00:11:46.309 [2024-05-15 11:35:16.986313] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:11:46.567 11:35:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:46.567 11:35:17 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:11:46.567 00:11:46.567 real 0m11.935s 00:11:46.567 user 0m23.924s 00:11:46.567 sys 0m5.198s 00:11:46.567 11:35:17 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:46.567 11:35:17 -- common/autotest_common.sh@10 -- # set +x 00:11:46.567 ************************************ 00:11:46.567 END TEST nvmf_nvme_cli 00:11:46.567 ************************************ 00:11:46.567 11:35:17 -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:11:46.567 11:35:17 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:11:46.567 11:35:17 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:46.567 11:35:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:46.567 11:35:17 -- common/autotest_common.sh@10 -- # set +x 00:11:46.827 ************************************ 00:11:46.827 START TEST nvmf_host_management 00:11:46.827 ************************************ 00:11:46.827 11:35:17 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:11:46.827 * Looking for test storage... 
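That closes the nvme_cli run; its teardown is the standard nvmftestfini path, which reduces to the commands below (the subsystem NQN and pid are this run's values, and wait works here only because nvmf_tgt is a child of the harness shell):

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  sync
  modprobe -v -r nvme-rdma       # the verbose output above shows nvme_fabrics unloading with it
  modprobe -v -r nvme-fabrics
  kill 2975457 && wait 2975457   # nvmfpid for this run

The nvmf_host_management test starting here repeats the same bring-up before running its own scenario.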
00:11:46.827 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:46.827 11:35:17 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:46.827 11:35:17 -- nvmf/common.sh@7 -- # uname -s 00:11:46.827 11:35:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:46.827 11:35:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:46.827 11:35:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:46.827 11:35:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:46.827 11:35:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:46.827 11:35:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:46.827 11:35:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:46.827 11:35:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:46.827 11:35:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:46.827 11:35:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:46.827 11:35:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:11:46.827 11:35:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:11:46.827 11:35:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:46.827 11:35:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:46.827 11:35:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:46.827 11:35:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:46.827 11:35:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:46.827 11:35:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:46.827 11:35:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:46.827 11:35:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:46.827 11:35:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.827 11:35:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.827 11:35:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.827 11:35:17 -- paths/export.sh@5 -- # export PATH 00:11:46.827 11:35:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.827 11:35:17 -- nvmf/common.sh@47 -- # : 0 00:11:46.827 11:35:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:46.827 11:35:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:46.827 11:35:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:46.827 11:35:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:46.827 11:35:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:46.827 11:35:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:46.827 11:35:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:46.827 11:35:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:46.827 11:35:17 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:46.827 11:35:17 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:46.827 11:35:17 -- target/host_management.sh@105 -- # nvmftestinit 00:11:46.827 11:35:17 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:11:46.827 11:35:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:46.827 11:35:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:46.827 11:35:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:46.827 11:35:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:46.827 11:35:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.827 11:35:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:46.827 11:35:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.827 11:35:17 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:46.827 11:35:17 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:46.827 11:35:17 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:46.827 11:35:17 -- common/autotest_common.sh@10 -- # set +x 00:11:52.097 11:35:22 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:52.097 11:35:22 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:52.097 11:35:22 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:52.097 11:35:22 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:52.356 11:35:22 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:52.356 11:35:22 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:52.356 11:35:22 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:52.356 11:35:22 -- nvmf/common.sh@295 -- # net_devs=() 00:11:52.356 11:35:22 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:52.356 
11:35:22 -- nvmf/common.sh@296 -- # e810=() 00:11:52.356 11:35:22 -- nvmf/common.sh@296 -- # local -ga e810 00:11:52.356 11:35:22 -- nvmf/common.sh@297 -- # x722=() 00:11:52.356 11:35:22 -- nvmf/common.sh@297 -- # local -ga x722 00:11:52.356 11:35:22 -- nvmf/common.sh@298 -- # mlx=() 00:11:52.356 11:35:22 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:52.356 11:35:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:52.356 11:35:22 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:52.356 11:35:22 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:52.356 11:35:22 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:52.356 11:35:22 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:52.356 11:35:22 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:52.356 11:35:22 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:52.356 11:35:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:52.356 11:35:22 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:52.356 11:35:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:52.356 11:35:22 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:52.356 11:35:22 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:52.356 11:35:22 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:52.356 11:35:22 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:52.356 11:35:22 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:52.356 11:35:22 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:52.356 11:35:22 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:52.356 11:35:22 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:52.356 11:35:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:52.356 11:35:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:11:52.356 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:11:52.356 11:35:22 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:52.356 11:35:22 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:52.356 11:35:22 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:52.356 11:35:22 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:52.356 11:35:22 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:52.356 11:35:22 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:52.356 11:35:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:52.356 11:35:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:11:52.356 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:11:52.356 11:35:22 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:52.356 11:35:22 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:52.356 11:35:22 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:52.356 11:35:22 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:52.356 11:35:22 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:52.356 11:35:22 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:52.356 11:35:22 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:52.356 11:35:22 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:52.356 11:35:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:52.356 11:35:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.356 11:35:22 -- 
nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:52.356 11:35:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.356 11:35:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:52.356 Found net devices under 0000:18:00.0: mlx_0_0 00:11:52.356 11:35:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.356 11:35:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:52.356 11:35:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.356 11:35:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:52.356 11:35:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.356 11:35:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:52.356 Found net devices under 0000:18:00.1: mlx_0_1 00:11:52.356 11:35:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.356 11:35:22 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:52.357 11:35:22 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:52.357 11:35:22 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:52.357 11:35:22 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:11:52.357 11:35:22 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:11:52.357 11:35:22 -- nvmf/common.sh@409 -- # rdma_device_init 00:11:52.357 11:35:22 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:11:52.357 11:35:22 -- nvmf/common.sh@58 -- # uname 00:11:52.357 11:35:22 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:52.357 11:35:22 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:52.357 11:35:22 -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:52.357 11:35:22 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:52.357 11:35:22 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:52.357 11:35:22 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:52.357 11:35:22 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:52.357 11:35:22 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:52.357 11:35:22 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:11:52.357 11:35:22 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:52.357 11:35:22 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:52.357 11:35:22 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:52.357 11:35:22 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:52.357 11:35:22 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:52.357 11:35:22 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:52.357 11:35:22 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:52.357 11:35:22 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:52.357 11:35:22 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:52.357 11:35:22 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:52.357 11:35:22 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:52.357 11:35:22 -- nvmf/common.sh@105 -- # continue 2 00:11:52.357 11:35:22 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:52.357 11:35:22 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:52.357 11:35:22 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:52.357 11:35:22 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:52.357 11:35:22 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:52.357 11:35:22 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:52.357 11:35:22 -- nvmf/common.sh@105 -- # continue 2 00:11:52.357 11:35:22 -- 
nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:52.357 11:35:22 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:52.357 11:35:22 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:52.357 11:35:22 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:52.357 11:35:22 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:52.357 11:35:22 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:52.357 11:35:22 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:52.357 11:35:22 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:52.357 11:35:22 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:52.357 28: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:52.357 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:11:52.357 altname enp24s0f0np0 00:11:52.357 altname ens785f0np0 00:11:52.357 inet 192.168.100.8/24 scope global mlx_0_0 00:11:52.357 valid_lft forever preferred_lft forever 00:11:52.357 11:35:22 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:52.357 11:35:22 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:52.357 11:35:22 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:52.357 11:35:22 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:52.357 11:35:22 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:52.357 11:35:22 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:52.357 11:35:22 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:52.357 11:35:22 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:52.357 11:35:22 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:52.357 29: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:52.357 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:11:52.357 altname enp24s0f1np1 00:11:52.357 altname ens785f1np1 00:11:52.357 inet 192.168.100.9/24 scope global mlx_0_1 00:11:52.357 valid_lft forever preferred_lft forever 00:11:52.357 11:35:22 -- nvmf/common.sh@411 -- # return 0 00:11:52.357 11:35:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:52.357 11:35:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:52.357 11:35:22 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:11:52.357 11:35:22 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:11:52.357 11:35:22 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:52.357 11:35:22 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:52.357 11:35:22 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:52.357 11:35:22 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:52.357 11:35:22 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:52.357 11:35:23 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:52.357 11:35:23 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:52.357 11:35:23 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:52.357 11:35:23 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:52.357 11:35:23 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:52.357 11:35:23 -- nvmf/common.sh@105 -- # continue 2 00:11:52.357 11:35:23 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:52.357 11:35:23 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:52.357 11:35:23 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:52.357 11:35:23 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:52.357 11:35:23 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:52.357 11:35:23 -- nvmf/common.sh@104 -- # echo mlx_0_1 
00:11:52.357 11:35:23 -- nvmf/common.sh@105 -- # continue 2 00:11:52.357 11:35:23 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:52.357 11:35:23 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:52.357 11:35:23 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:52.357 11:35:23 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:52.357 11:35:23 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:52.357 11:35:23 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:52.357 11:35:23 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:52.357 11:35:23 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:52.357 11:35:23 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:52.357 11:35:23 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:52.357 11:35:23 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:52.357 11:35:23 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:52.357 11:35:23 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:11:52.357 192.168.100.9' 00:11:52.357 11:35:23 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:11:52.357 192.168.100.9' 00:11:52.357 11:35:23 -- nvmf/common.sh@446 -- # head -n 1 00:11:52.357 11:35:23 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:52.357 11:35:23 -- nvmf/common.sh@447 -- # tail -n +2 00:11:52.357 11:35:23 -- nvmf/common.sh@447 -- # head -n 1 00:11:52.357 11:35:23 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:11:52.357 192.168.100.9' 00:11:52.357 11:35:23 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:52.357 11:35:23 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:11:52.357 11:35:23 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:52.357 11:35:23 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:11:52.357 11:35:23 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:11:52.357 11:35:23 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:11:52.357 11:35:23 -- target/host_management.sh@107 -- # nvmf_host_management 00:11:52.357 11:35:23 -- target/host_management.sh@69 -- # starttarget 00:11:52.357 11:35:23 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:52.357 11:35:23 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:52.357 11:35:23 -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:52.357 11:35:23 -- common/autotest_common.sh@10 -- # set +x 00:11:52.357 11:35:23 -- nvmf/common.sh@470 -- # nvmfpid=2979061 00:11:52.357 11:35:23 -- nvmf/common.sh@471 -- # waitforlisten 2979061 00:11:52.357 11:35:23 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:52.357 11:35:23 -- common/autotest_common.sh@827 -- # '[' -z 2979061 ']' 00:11:52.357 11:35:23 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.357 11:35:23 -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:52.357 11:35:23 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.357 11:35:23 -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:52.357 11:35:23 -- common/autotest_common.sh@10 -- # set +x 00:11:52.621 [2024-05-15 11:35:23.141036] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
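The nvmfappstart/waitforlisten pattern traced above reduces to three steps: fork nvmf_tgt with the requested core mask, record its pid (2979061 here), and poll the RPC socket until the app answers. A minimal bash sketch of that pattern, assuming rpc.py's rpc_get_methods as the liveness probe and a 0.1 s retry cadence (the actual common.sh helpers differ in detail):

    rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # Wait for /var/tmp/spdk.sock to come up; the retry bound is an assumption.
    for _ in {1..100}; do
        "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done

The -m 0x1E core mask is why four reactors (cores 1-4) report started in the notices that follow.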
00:11:52.621 [2024-05-15 11:35:23.141105] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.621 EAL: No free 2048 kB hugepages reported on node 1 00:11:52.621 [2024-05-15 11:35:23.215892] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:52.621 [2024-05-15 11:35:23.297506] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:52.621 [2024-05-15 11:35:23.297552] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:52.621 [2024-05-15 11:35:23.297562] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:52.621 [2024-05-15 11:35:23.297570] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:52.621 [2024-05-15 11:35:23.297576] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:52.621 [2024-05-15 11:35:23.297683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.621 [2024-05-15 11:35:23.297766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:52.621 [2024-05-15 11:35:23.297869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.621 [2024-05-15 11:35:23.297870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:53.561 11:35:23 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:53.561 11:35:23 -- common/autotest_common.sh@860 -- # return 0 00:11:53.561 11:35:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:53.561 11:35:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:53.561 11:35:23 -- common/autotest_common.sh@10 -- # set +x 00:11:53.561 11:35:24 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:53.561 11:35:24 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:53.561 11:35:24 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.561 11:35:24 -- common/autotest_common.sh@10 -- # set +x 00:11:53.561 [2024-05-15 11:35:24.046604] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x11541f0/0x11586e0) succeed. 00:11:53.561 [2024-05-15 11:35:24.057188] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1155830/0x1199d70) succeed. 
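With the RDMA transport created, the @20-@30 steps just below stage rpcs.txt and batch the rest of the target plumbing through a single rpc_cmd: a 64 MiB Malloc bdev with 512 B blocks, subsystem nqn.2016-06.io.spdk:cnode0 with that bdev as a namespace, the 192.168.100.8:4420 RDMA listener, and host0 on the allow list. Unrolled into individual rpc.py calls it would look roughly like this (method names assumed from SPDK's current RPC naming, not the literal rpcs.txt contents):

    rpc="$rootdir/scripts/rpc.py"
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0    # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0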
00:11:53.561 11:35:24 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.561 11:35:24 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:53.561 11:35:24 -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:53.561 11:35:24 -- common/autotest_common.sh@10 -- # set +x 00:11:53.561 11:35:24 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:53.561 11:35:24 -- target/host_management.sh@23 -- # cat 00:11:53.561 11:35:24 -- target/host_management.sh@30 -- # rpc_cmd 00:11:53.561 11:35:24 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.561 11:35:24 -- common/autotest_common.sh@10 -- # set +x 00:11:53.561 Malloc0 00:11:53.561 [2024-05-15 11:35:24.248695] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:53.561 [2024-05-15 11:35:24.249085] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:53.561 11:35:24 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.561 11:35:24 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:53.561 11:35:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:53.561 11:35:24 -- common/autotest_common.sh@10 -- # set +x 00:11:53.561 11:35:24 -- target/host_management.sh@73 -- # perfpid=2979220 00:11:53.561 11:35:24 -- target/host_management.sh@74 -- # waitforlisten 2979220 /var/tmp/bdevperf.sock 00:11:53.561 11:35:24 -- common/autotest_common.sh@827 -- # '[' -z 2979220 ']' 00:11:53.561 11:35:24 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:53.561 11:35:24 -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:53.561 11:35:24 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:53.561 11:35:24 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:53.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:53.561 11:35:24 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:53.561 11:35:24 -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:53.561 11:35:24 -- common/autotest_common.sh@10 -- # set +x 00:11:53.561 11:35:24 -- nvmf/common.sh@521 -- # config=() 00:11:53.561 11:35:24 -- nvmf/common.sh@521 -- # local subsystem config 00:11:53.561 11:35:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:53.561 11:35:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:53.561 { 00:11:53.561 "params": { 00:11:53.561 "name": "Nvme$subsystem", 00:11:53.561 "trtype": "$TEST_TRANSPORT", 00:11:53.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:53.561 "adrfam": "ipv4", 00:11:53.561 "trsvcid": "$NVMF_PORT", 00:11:53.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:53.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:53.561 "hdgst": ${hdgst:-false}, 00:11:53.561 "ddgst": ${ddgst:-false} 00:11:53.561 }, 00:11:53.561 "method": "bdev_nvme_attach_controller" 00:11:53.561 } 00:11:53.561 EOF 00:11:53.561 )") 00:11:53.561 11:35:24 -- nvmf/common.sh@543 -- # cat 00:11:53.561 11:35:24 -- nvmf/common.sh@545 -- # jq . 
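gen_nvmf_target_json expands that heredoc template once per subsystem index, joins the fragments, and validates the result with jq; the caller hands it to bdevperf as --json /dev/fd/63, i.e. bash process substitution, so no config file ever lands on disk. A sketch of the launch, assuming the helper is already sourced (the JSON it resolves for index 0 is printed verbatim just below):

    "$rootdir/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10 &
    perfpid=$!    # pid 2979220 in this run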
00:11:53.820 11:35:24 -- nvmf/common.sh@546 -- # IFS=, 00:11:53.820 11:35:24 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:53.820 "params": { 00:11:53.820 "name": "Nvme0", 00:11:53.820 "trtype": "rdma", 00:11:53.820 "traddr": "192.168.100.8", 00:11:53.820 "adrfam": "ipv4", 00:11:53.820 "trsvcid": "4420", 00:11:53.820 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:53.820 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:53.820 "hdgst": false, 00:11:53.820 "ddgst": false 00:11:53.820 }, 00:11:53.820 "method": "bdev_nvme_attach_controller" 00:11:53.820 }' 00:11:53.820 [2024-05-15 11:35:24.355133] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:11:53.820 [2024-05-15 11:35:24.355199] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2979220 ] 00:11:53.820 EAL: No free 2048 kB hugepages reported on node 1 00:11:53.820 [2024-05-15 11:35:24.430350] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.820 [2024-05-15 11:35:24.512637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.079 Running I/O for 10 seconds... 00:11:54.647 11:35:25 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:54.647 11:35:25 -- common/autotest_common.sh@860 -- # return 0 00:11:54.647 11:35:25 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:54.647 11:35:25 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.647 11:35:25 -- common/autotest_common.sh@10 -- # set +x 00:11:54.647 11:35:25 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.647 11:35:25 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:54.647 11:35:25 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:54.647 11:35:25 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:54.647 11:35:25 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:54.647 11:35:25 -- target/host_management.sh@52 -- # local ret=1 00:11:54.647 11:35:25 -- target/host_management.sh@53 -- # local i 00:11:54.647 11:35:25 -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:54.647 11:35:25 -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:54.647 11:35:25 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:54.647 11:35:25 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:54.647 11:35:25 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.647 11:35:25 -- common/autotest_common.sh@10 -- # set +x 00:11:54.647 11:35:25 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.647 11:35:25 -- target/host_management.sh@55 -- # read_io_count=1516 00:11:54.647 11:35:25 -- target/host_management.sh@58 -- # '[' 1516 -ge 100 ']' 00:11:54.647 11:35:25 -- target/host_management.sh@59 -- # ret=0 00:11:54.647 11:35:25 -- target/host_management.sh@60 -- # break 00:11:54.647 11:35:25 -- target/host_management.sh@64 -- # return 0 00:11:54.647 11:35:25 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:54.647 11:35:25 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.647 11:35:25 -- common/autotest_common.sh@10 -- # set +x 00:11:54.647 11:35:25 -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.647 11:35:25 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:54.647 11:35:25 -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.647 11:35:25 -- common/autotest_common.sh@10 -- # set +x 00:11:54.647 11:35:25 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.647 11:35:25 -- target/host_management.sh@87 -- # sleep 1 00:11:55.586 [2024-05-15 11:35:26.266193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.586 [2024-05-15 11:35:26.266232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:11:55.586 [2024-05-15 11:35:26.266245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.586 [2024-05-15 11:35:26.266255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:11:55.586 [2024-05-15 11:35:26.266265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.586 [2024-05-15 11:35:26.266274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:11:55.586 [2024-05-15 11:35:26.266284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.586 [2024-05-15 11:35:26.266294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:11:55.586 [2024-05-15 11:35:26.268635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:11:55.586 [2024-05-15 11:35:26.268654] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
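This is the crux of the host-management test: once bdev_get_iostat on the bdevperf socket reports at least 100 completed reads (1516 here), the host NQN is yanked from the subsystem's allow list while I/O is still in flight, then re-added. The removal forces the disconnect and the "failed state" error above, and the flood of ABORTED - SQ DELETION completions that follows is every queued WRITE/READ being failed back to the initiator. The gate-then-yank sequence, roughly (the trace's loop counts i down from 10; the sleep cadence inside the poll is an assumption):

    rpc="$rootdir/scripts/rpc.py"
    while :; do
        ops=$($rpc -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
              jq -r '.bdevs[0].num_read_ops')
        [ "$ops" -ge 100 ] && break
        sleep 0.25
    done
    $rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    sleep 1    # let the forced disconnect play out (@87 in the trace)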
00:11:55.586 [2024-05-15 11:35:26.268681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138da900 len:0x10000 key:0x182500 00:11:55.586 [2024-05-15 11:35:26.268692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.586 [2024-05-15 11:35:26.268718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca880 len:0x10000 key:0x182500 00:11:55.586 [2024-05-15 11:35:26.268729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.586 [2024-05-15 11:35:26.268743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ba800 len:0x10000 key:0x182500 00:11:55.586 [2024-05-15 11:35:26.268754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.586 [2024-05-15 11:35:26.268767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa780 len:0x10000 key:0x182500 00:11:55.586 [2024-05-15 11:35:26.268777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.586 [2024-05-15 11:35:26.268794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001389a700 len:0x10000 key:0x182500 00:11:55.586 [2024-05-15 11:35:26.268804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.586 [2024-05-15 11:35:26.268817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001388a680 len:0x10000 key:0x182500 00:11:55.586 [2024-05-15 11:35:26.268827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.586 [2024-05-15 11:35:26.268840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001387a600 len:0x10000 key:0x182500 00:11:55.586 [2024-05-15 11:35:26.268850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.586 [2024-05-15 11:35:26.268863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001386a580 len:0x10000 key:0x182500 00:11:55.586 [2024-05-15 11:35:26.268874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.586 [2024-05-15 11:35:26.268887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001385a500 len:0x10000 key:0x182500 00:11:55.586 [2024-05-15 11:35:26.268897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 
00:11:55.586 [2024-05-15 11:35:26.268910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001384a480 len:0x10000 key:0x182500 00:11:55.586 [2024-05-15 11:35:26.268920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.586 [2024-05-15 11:35:26.268933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001383a400 len:0x10000 key:0x182500 00:11:55.586 [2024-05-15 11:35:26.268943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.586 [2024-05-15 11:35:26.268956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001382a380 len:0x10000 key:0x182500 00:11:55.586 [2024-05-15 11:35:26.268966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.586 [2024-05-15 11:35:26.268979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:83456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001381a300 len:0x10000 key:0x182500 00:11:55.586 [2024-05-15 11:35:26.268989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.586 [2024-05-15 11:35:26.269002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:83584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001380a280 len:0x10000 key:0x182500 00:11:55.586 [2024-05-15 11:35:26.269012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.586 [2024-05-15 11:35:26.269024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192d1e80 len:0x10000 key:0x182800 00:11:55.586 [2024-05-15 11:35:26.269034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.586 [2024-05-15 11:35:26.269050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192c1e00 len:0x10000 key:0x182800 00:11:55.586 [2024-05-15 11:35:26.269064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.587 [2024-05-15 11:35:26.269078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192b1d80 len:0x10000 key:0x182800 00:11:55.587 [2024-05-15 11:35:26.269087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.587 [2024-05-15 11:35:26.269103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192a1d00 len:0x10000 key:0x182800 00:11:55.587 [2024-05-15 11:35:26.269113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.587 
[2024-05-15 11:35:26.269127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019291c80 len:0x10000 key:0x182800 00:11:55.587 [2024-05-15 11:35:26.269137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.587 [2024-05-15 11:35:26.269151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019281c00 len:0x10000 key:0x182800 00:11:55.587 [2024-05-15 11:35:26.269160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.587 [2024-05-15 11:35:26.269173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019271b80 len:0x10000 key:0x182800 00:11:55.587 [2024-05-15 11:35:26.269183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.587 [2024-05-15 11:35:26.269196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019261b00 len:0x10000 key:0x182800 00:11:55.587 [2024-05-15 11:35:26.269206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.587 [2024-05-15 11:35:26.269219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019251a80 len:0x10000 key:0x182800 00:11:55.587 [2024-05-15 11:35:26.269228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.587 [2024-05-15 11:35:26.269241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019241a00 len:0x10000 key:0x182800 00:11:55.587 [2024-05-15 11:35:26.269251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.587 [2024-05-15 11:35:26.269264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019231980 len:0x10000 key:0x182800 00:11:55.587 [2024-05-15 11:35:26.269274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.587 [2024-05-15 11:35:26.269287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019221900 len:0x10000 key:0x182800 00:11:55.587 [2024-05-15 11:35:26.269297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.587 [2024-05-15 11:35:26.269310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019211880 len:0x10000 key:0x182800 00:11:55.587 [2024-05-15 11:35:26.269322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.587 
[2024-05-15 11:35:26.269336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019201800 len:0x10000 key:0x182800 00:11:55.587 [2024-05-15 11:35:26.269345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.587 [2024-05-15 11:35:26.269358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190eff80 len:0x10000 key:0x182700 00:11:55.587 [2024-05-15 11:35:26.269369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.587 [2024-05-15 11:35:26.269383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190dff00 len:0x10000 key:0x182700 00:11:55.587 [2024-05-15 11:35:26.269393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.587 [2024-05-15 11:35:26.269406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190cfe80 len:0x10000 key:0x182700 00:11:55.587 [2024-05-15 11:35:26.269416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.587 [2024-05-15 11:35:26.269428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190bfe00 len:0x10000 key:0x182700 00:11:55.587 [2024-05-15 11:35:26.269438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.587 [2024-05-15 11:35:26.269452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190afd80 len:0x10000 key:0x182700 00:11:55.587 [2024-05-15 11:35:26.269462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.587 [2024-05-15 11:35:26.269478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001909fd00 len:0x10000 key:0x182700 00:11:55.587 [2024-05-15 11:35:26.269487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.587 [2024-05-15 11:35:26.269500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001908fc80 len:0x10000 key:0x182700 00:11:55.587 [2024-05-15 11:35:26.269512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.587 [2024-05-15 11:35:26.269525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001907fc00 len:0x10000 key:0x182700 00:11:55.587 [2024-05-15 11:35:26.269534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.587 
[2024-05-15 11:35:26.269547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001906fb80 len:0x10000 key:0x182700 00:11:55.587 [2024-05-15 11:35:26.269557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.587 [2024-05-15 11:35:26.269571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001905fb00 len:0x10000 key:0x182700 00:11:55.587 [2024-05-15 11:35:26.269583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.587 [2024-05-15 11:35:26.269596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001904fa80 len:0x10000 key:0x182700 00:11:55.587 [2024-05-15 11:35:26.269605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.587 [2024-05-15 11:35:26.269618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001903fa00 len:0x10000 key:0x182700 00:11:55.587 [2024-05-15 11:35:26.269628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.587 [2024-05-15 11:35:26.269642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001902f980 len:0x10000 key:0x182700 00:11:55.587 [2024-05-15 11:35:26.269651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.587 [2024-05-15 11:35:26.269665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d4be000 len:0x10000 key:0x182400 00:11:55.587 [2024-05-15 11:35:26.269674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.587 [2024-05-15 11:35:26.269689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d4df000 len:0x10000 key:0x182400 00:11:55.587 [2024-05-15 11:35:26.269699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.587 [2024-05-15 11:35:26.269712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce8e000 len:0x10000 key:0x182400 00:11:55.587 [2024-05-15 11:35:26.269722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.587 [2024-05-15 11:35:26.269735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ceaf000 len:0x10000 key:0x182400 00:11:55.587 [2024-05-15 11:35:26.269745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.587 [2024-05-15 
11:35:26.269759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:79488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c85e000 len:0x10000 key:0x182400 00:11:55.587 [2024-05-15 11:35:26.269768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.587 [2024-05-15 11:35:26.269782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:79616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c87f000 len:0x10000 key:0x182400 00:11:55.587 [2024-05-15 11:35:26.269791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.587 [2024-05-15 11:35:26.269804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c22e000 len:0x10000 key:0x182400 00:11:55.587 [2024-05-15 11:35:26.269815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.587 [2024-05-15 11:35:26.269828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c24f000 len:0x10000 key:0x182400 00:11:55.587 [2024-05-15 11:35:26.269839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.587 [2024-05-15 11:35:26.269854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b6f7000 len:0x10000 key:0x182400 00:11:55.587 [2024-05-15 11:35:26.269863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.587 [2024-05-15 11:35:26.269877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d8ff000 len:0x10000 key:0x182400 00:11:55.587 [2024-05-15 11:35:26.269887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.587 [2024-05-15 11:35:26.269900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d8de000 len:0x10000 key:0x182400 00:11:55.587 [2024-05-15 11:35:26.269909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.587 [2024-05-15 11:35:26.269923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d8bd000 len:0x10000 key:0x182400 00:11:55.587 [2024-05-15 11:35:26.269933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.588 [2024-05-15 11:35:26.269946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:80512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d89c000 len:0x10000 key:0x182400 00:11:55.588 [2024-05-15 11:35:26.269956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.588 [2024-05-15 11:35:26.269970] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d87b000 len:0x10000 key:0x182400 00:11:55.588 [2024-05-15 11:35:26.269979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.588 [2024-05-15 11:35:26.269993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:80768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d85a000 len:0x10000 key:0x182400 00:11:55.588 [2024-05-15 11:35:26.270003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.588 [2024-05-15 11:35:26.270016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d839000 len:0x10000 key:0x182400 00:11:55.588 [2024-05-15 11:35:26.270026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.588 [2024-05-15 11:35:26.270039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d7d6000 len:0x10000 key:0x182400 00:11:55.588 [2024-05-15 11:35:26.270049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.588 [2024-05-15 11:35:26.270067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d7b5000 len:0x10000 key:0x182400 00:11:55.588 [2024-05-15 11:35:26.270076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.588 [2024-05-15 11:35:26.270090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d794000 len:0x10000 key:0x182400 00:11:55.588 [2024-05-15 11:35:26.270104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.588 [2024-05-15 11:35:26.270117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d773000 len:0x10000 key:0x182400 00:11:55.588 [2024-05-15 11:35:26.270128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.588 [2024-05-15 11:35:26.270141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d752000 len:0x10000 key:0x182400 00:11:55.588 [2024-05-15 11:35:26.270150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.588 [2024-05-15 11:35:26.270163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d731000 len:0x10000 key:0x182400 00:11:55.588 [2024-05-15 11:35:26.270173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.588 [2024-05-15 11:35:26.270187] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:81792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d710000 len:0x10000 key:0x182400 00:11:55.588 [2024-05-15 11:35:26.270197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:5843 cdw0:192ed040 sqhd:4500 p:1 m:0 dnr:0 00:11:55.588 11:35:26 -- target/host_management.sh@91 -- # kill -9 2979220 00:11:55.588 11:35:26 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:55.588 11:35:26 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:55.588 11:35:26 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:55.588 11:35:26 -- nvmf/common.sh@521 -- # config=() 00:11:55.588 11:35:26 -- nvmf/common.sh@521 -- # local subsystem config 00:11:55.588 11:35:26 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:55.588 11:35:26 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:55.588 { 00:11:55.588 "params": { 00:11:55.588 "name": "Nvme$subsystem", 00:11:55.588 "trtype": "$TEST_TRANSPORT", 00:11:55.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:55.588 "adrfam": "ipv4", 00:11:55.588 "trsvcid": "$NVMF_PORT", 00:11:55.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:55.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:55.588 "hdgst": ${hdgst:-false}, 00:11:55.588 "ddgst": ${ddgst:-false} 00:11:55.588 }, 00:11:55.588 "method": "bdev_nvme_attach_controller" 00:11:55.588 } 00:11:55.588 EOF 00:11:55.588 )") 00:11:55.588 11:35:26 -- nvmf/common.sh@543 -- # cat 00:11:55.588 11:35:26 -- nvmf/common.sh@545 -- # jq . 00:11:55.588 11:35:26 -- nvmf/common.sh@546 -- # IFS=, 00:11:55.588 11:35:26 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:55.588 "params": { 00:11:55.588 "name": "Nvme0", 00:11:55.588 "trtype": "rdma", 00:11:55.588 "traddr": "192.168.100.8", 00:11:55.588 "adrfam": "ipv4", 00:11:55.588 "trsvcid": "4420", 00:11:55.588 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:55.588 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:55.588 "hdgst": false, 00:11:55.588 "ddgst": false 00:11:55.588 }, 00:11:55.588 "method": "bdev_nvme_attach_controller" 00:11:55.588 }' 00:11:55.588 [2024-05-15 11:35:26.316758] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:11:55.588 [2024-05-15 11:35:26.316819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2979546 ] 00:11:55.847 EAL: No free 2048 kB hugepages reported on node 1 00:11:55.848 [2024-05-15 11:35:26.394168] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.848 [2024-05-15 11:35:26.475675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.107 Running I/O for 1 seconds... 
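The recovery leg: the wedged 10-second bdevperf instance is killed outright (the shell reports it Killed below), its stale per-core lock files are removed, and a fresh 1-second verify run against the same target proves the re-admitted host can drive I/O again. In sketch form:

    kill -9 "$perfpid"
    rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 \
          /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
    "$rootdir/build/examples/bdevperf" --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 1

Its latency summary follows.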
00:11:57.044
00:11:57.044 Latency(us)
00:11:57.044 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:57.044 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:11:57.044 Verification LBA range: start 0x0 length 0x400
00:11:57.044 Nvme0n1 : 1.01 3024.87 189.05 0.00 0.00 20725.01 698.10 43082.80
00:11:57.044 ===================================================================================================================
00:11:57.044 Total : 3024.87 189.05 0.00 0.00 20725.01 698.10 43082.80
00:11:57.303 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 2979220 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}"
00:11:57.303 11:35:27 -- target/host_management.sh@102 -- # stoptarget
00:11:57.303 11:35:27 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:11:57.303 11:35:27 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:11:57.303 11:35:27 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:11:57.303 11:35:27 -- target/host_management.sh@40 -- # nvmftestfini
00:11:57.303 11:35:27 -- nvmf/common.sh@477 -- # nvmfcleanup
00:11:57.303 11:35:27 -- nvmf/common.sh@117 -- # sync
00:11:57.303 11:35:27 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:11:57.303 11:35:27 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:11:57.303 11:35:27 -- nvmf/common.sh@120 -- # set +e
00:11:57.303 11:35:27 -- nvmf/common.sh@121 -- # for i in {1..20}
00:11:57.303 11:35:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:11:57.303 rmmod nvme_rdma
00:11:57.303 rmmod nvme_fabrics
00:11:57.303 11:35:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:11:57.303 11:35:27 -- nvmf/common.sh@124 -- # set -e
00:11:57.303 11:35:27 -- nvmf/common.sh@125 -- # return 0
00:11:57.303 11:35:27 -- nvmf/common.sh@478 -- # '[' -n 2979061 ']'
00:11:57.303 11:35:27 -- nvmf/common.sh@479 -- # killprocess 2979061
00:11:57.304 11:35:27 -- common/autotest_common.sh@946 -- # '[' -z 2979061 ']'
00:11:57.304 11:35:27 -- common/autotest_common.sh@950 -- # kill -0 2979061
00:11:57.304 11:35:27 -- common/autotest_common.sh@951 -- # uname
00:11:57.304 11:35:27 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:11:57.304 11:35:27 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2979061
00:11:57.304 11:35:28 -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:11:57.304 11:35:28 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:11:57.304 11:35:28 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2979061'
00:11:57.304 killing process with pid 2979061
00:11:57.304 11:35:28 -- common/autotest_common.sh@965 -- # kill 2979061
00:11:57.304 [2024-05-15 11:35:28.031850] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:11:57.304 11:35:28 -- common/autotest_common.sh@970 -- # wait 2979061
00:11:57.562 [2024-05-15 11:35:28.113564] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048
00:11:57.562 [2024-05-15 11:35:28.322966] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:11:57.821 11:35:28 -- nvmf/common.sh@481
-- # '[' '' == iso ']' 00:11:57.821 11:35:28 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:11:57.821 11:35:28 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:57.821 00:11:57.821 real 0m10.986s 00:11:57.821 user 0m24.973s 00:11:57.821 sys 0m5.373s 00:11:57.821 11:35:28 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:57.821 11:35:28 -- common/autotest_common.sh@10 -- # set +x 00:11:57.821 ************************************ 00:11:57.821 END TEST nvmf_host_management 00:11:57.821 ************************************ 00:11:57.821 11:35:28 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:11:57.821 11:35:28 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:57.821 11:35:28 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:57.821 11:35:28 -- common/autotest_common.sh@10 -- # set +x 00:11:57.821 ************************************ 00:11:57.821 START TEST nvmf_lvol 00:11:57.821 ************************************ 00:11:57.821 11:35:28 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:11:57.821 * Looking for test storage... 00:11:57.821 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:57.821 11:35:28 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:57.821 11:35:28 -- nvmf/common.sh@7 -- # uname -s 00:11:57.821 11:35:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:57.821 11:35:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:57.821 11:35:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:57.821 11:35:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:57.821 11:35:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:57.821 11:35:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:57.821 11:35:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:57.821 11:35:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:57.821 11:35:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:57.821 11:35:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:57.821 11:35:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:11:57.821 11:35:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:11:57.821 11:35:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:57.821 11:35:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:57.821 11:35:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:57.821 11:35:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:57.821 11:35:28 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:57.821 11:35:28 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:57.821 11:35:28 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:57.821 11:35:28 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:57.821 11:35:28 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.821 11:35:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.821 11:35:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.821 11:35:28 -- paths/export.sh@5 -- # export PATH 00:11:57.822 11:35:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.822 11:35:28 -- nvmf/common.sh@47 -- # : 0 00:11:57.822 11:35:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:57.822 11:35:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:57.822 11:35:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:57.822 11:35:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:57.822 11:35:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:57.822 11:35:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:57.822 11:35:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:57.822 11:35:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:57.822 11:35:28 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:57.822 11:35:28 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:57.822 11:35:28 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:57.822 11:35:28 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:57.822 11:35:28 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:57.822 11:35:28 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:57.822 11:35:28 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:11:57.822 11:35:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:11:57.822 11:35:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:57.822 11:35:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:57.822 11:35:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:57.822 11:35:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:57.822 11:35:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:57.822 11:35:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.822 11:35:28 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:57.822 11:35:28 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:57.822 11:35:28 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:57.822 11:35:28 -- common/autotest_common.sh@10 -- # set +x 00:12:04.487 11:35:34 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:04.487 11:35:34 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:04.487 11:35:34 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:04.487 11:35:34 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:04.487 11:35:34 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:04.487 11:35:34 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:04.487 11:35:34 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:04.487 11:35:34 -- nvmf/common.sh@295 -- # net_devs=() 00:12:04.487 11:35:34 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:04.487 11:35:34 -- nvmf/common.sh@296 -- # e810=() 00:12:04.487 11:35:34 -- nvmf/common.sh@296 -- # local -ga e810 00:12:04.487 11:35:34 -- nvmf/common.sh@297 -- # x722=() 00:12:04.487 11:35:34 -- nvmf/common.sh@297 -- # local -ga x722 00:12:04.487 11:35:34 -- nvmf/common.sh@298 -- # mlx=() 00:12:04.487 11:35:34 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:04.487 11:35:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:04.487 11:35:34 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:04.487 11:35:34 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:04.488 11:35:34 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:04.488 11:35:34 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:04.488 11:35:34 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:04.488 11:35:34 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:04.488 11:35:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:04.488 11:35:34 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:04.488 11:35:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:04.488 11:35:34 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:04.488 11:35:34 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:04.488 11:35:34 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:04.488 11:35:34 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:04.488 11:35:34 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:04.488 11:35:34 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:04.488 11:35:34 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:04.488 11:35:34 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:04.488 11:35:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:04.488 11:35:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:12:04.488 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:12:04.488 11:35:34 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:04.488 11:35:34 -- 
nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:04.488 11:35:34 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:04.488 11:35:34 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:04.488 11:35:34 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:04.488 11:35:34 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:04.488 11:35:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:04.488 11:35:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:12:04.488 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:12:04.488 11:35:34 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:04.488 11:35:34 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:04.488 11:35:34 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:04.488 11:35:34 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:04.488 11:35:34 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:04.488 11:35:34 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:04.488 11:35:34 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:04.488 11:35:34 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:04.488 11:35:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:04.488 11:35:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.488 11:35:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:04.488 11:35:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.488 11:35:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:12:04.488 Found net devices under 0000:18:00.0: mlx_0_0 00:12:04.488 11:35:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.488 11:35:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:04.488 11:35:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.488 11:35:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:04.488 11:35:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.488 11:35:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:12:04.488 Found net devices under 0000:18:00.1: mlx_0_1 00:12:04.488 11:35:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.488 11:35:34 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:04.488 11:35:34 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:04.488 11:35:34 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:04.488 11:35:34 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:12:04.488 11:35:34 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:12:04.488 11:35:34 -- nvmf/common.sh@409 -- # rdma_device_init 00:12:04.488 11:35:34 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:12:04.488 11:35:34 -- nvmf/common.sh@58 -- # uname 00:12:04.488 11:35:34 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:04.488 11:35:34 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:04.488 11:35:34 -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:04.488 11:35:34 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:04.488 11:35:34 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:04.488 11:35:34 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:04.488 11:35:34 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:04.488 11:35:34 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:04.488 11:35:34 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:12:04.488 11:35:34 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:04.488 11:35:34 -- nvmf/common.sh@73 -- # get_rdma_if_list 
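
The discovery pass above matches each Mellanox PCI function against the mlx device table and resolves its netdev through sysfs. A rough standalone equivalent of that lookup (a sketch, not the harness code itself; assumes lspci is installed and 0x15b3 devices are present, as on this node):

    for pci in $(lspci -D -d 15b3: | awk '{print $1}'); do
      for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdir" ] && echo "$pci -> $(basename "$netdir")"   # e.g. 0000:18:00.0 -> mlx_0_0
      done
    done
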
00:12:04.488 11:35:34 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:04.488 11:35:34 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:04.488 11:35:34 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:04.488 11:35:34 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:04.488 11:35:34 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:04.488 11:35:34 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:04.488 11:35:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:04.488 11:35:34 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:04.488 11:35:34 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:04.488 11:35:34 -- nvmf/common.sh@105 -- # continue 2 00:12:04.488 11:35:34 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:04.488 11:35:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:04.488 11:35:34 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:04.488 11:35:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:04.488 11:35:34 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:04.488 11:35:34 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:04.488 11:35:34 -- nvmf/common.sh@105 -- # continue 2 00:12:04.488 11:35:34 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:04.488 11:35:34 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:04.488 11:35:34 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:04.488 11:35:34 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:04.488 11:35:34 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:04.488 11:35:34 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:04.488 11:35:34 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:04.488 11:35:34 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:04.488 11:35:34 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:04.488 28: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:04.488 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:12:04.488 altname enp24s0f0np0 00:12:04.488 altname ens785f0np0 00:12:04.488 inet 192.168.100.8/24 scope global mlx_0_0 00:12:04.488 valid_lft forever preferred_lft forever 00:12:04.488 11:35:34 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:04.488 11:35:34 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:04.488 11:35:34 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:04.488 11:35:34 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:04.488 11:35:34 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:04.488 11:35:34 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:04.488 11:35:34 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:04.488 11:35:34 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:04.488 11:35:34 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:04.488 29: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:04.488 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:12:04.488 altname enp24s0f1np1 00:12:04.488 altname ens785f1np1 00:12:04.488 inet 192.168.100.9/24 scope global mlx_0_1 00:12:04.488 valid_lft forever preferred_lft forever 00:12:04.488 11:35:34 -- nvmf/common.sh@411 -- # return 0 00:12:04.488 11:35:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:04.488 11:35:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:04.488 11:35:34 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:12:04.488 11:35:34 -- nvmf/common.sh@445 -- # 
get_available_rdma_ips 00:12:04.488 11:35:34 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:04.488 11:35:34 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:04.488 11:35:34 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:04.488 11:35:34 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:04.488 11:35:34 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:04.488 11:35:34 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:04.488 11:35:34 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:04.488 11:35:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:04.488 11:35:34 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:04.488 11:35:34 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:04.488 11:35:34 -- nvmf/common.sh@105 -- # continue 2 00:12:04.488 11:35:34 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:04.488 11:35:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:04.488 11:35:34 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:04.488 11:35:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:04.488 11:35:34 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:04.488 11:35:34 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:04.488 11:35:34 -- nvmf/common.sh@105 -- # continue 2 00:12:04.488 11:35:34 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:04.488 11:35:34 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:04.488 11:35:34 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:04.488 11:35:34 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:04.488 11:35:34 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:04.488 11:35:34 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:04.488 11:35:34 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:04.488 11:35:34 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:04.488 11:35:34 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:04.488 11:35:34 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:04.488 11:35:34 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:04.488 11:35:34 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:04.488 11:35:34 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:12:04.488 192.168.100.9' 00:12:04.488 11:35:34 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:12:04.488 192.168.100.9' 00:12:04.488 11:35:34 -- nvmf/common.sh@446 -- # head -n 1 00:12:04.488 11:35:34 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:04.488 11:35:34 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:12:04.488 192.168.100.9' 00:12:04.488 11:35:34 -- nvmf/common.sh@447 -- # tail -n +2 00:12:04.488 11:35:34 -- nvmf/common.sh@447 -- # head -n 1 00:12:04.488 11:35:34 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:04.488 11:35:34 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:12:04.488 11:35:34 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:04.488 11:35:34 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:12:04.488 11:35:34 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:12:04.488 11:35:34 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:12:04.489 11:35:34 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:12:04.489 11:35:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:04.489 11:35:34 -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:04.489 11:35:34 -- 
common/autotest_common.sh@10 -- # set +x 00:12:04.489 11:35:34 -- nvmf/common.sh@470 -- # nvmfpid=2982640 00:12:04.489 11:35:34 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:04.489 11:35:34 -- nvmf/common.sh@471 -- # waitforlisten 2982640 00:12:04.489 11:35:34 -- common/autotest_common.sh@827 -- # '[' -z 2982640 ']' 00:12:04.489 11:35:34 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.489 11:35:34 -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:04.489 11:35:34 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.489 11:35:34 -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:04.489 11:35:34 -- common/autotest_common.sh@10 -- # set +x 00:12:04.489 [2024-05-15 11:35:34.521853] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:12:04.489 [2024-05-15 11:35:34.521913] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:04.489 EAL: No free 2048 kB hugepages reported on node 1 00:12:04.489 [2024-05-15 11:35:34.596602] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:04.489 [2024-05-15 11:35:34.684931] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:04.489 [2024-05-15 11:35:34.684978] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:04.489 [2024-05-15 11:35:34.684988] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:04.489 [2024-05-15 11:35:34.684996] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:04.489 [2024-05-15 11:35:34.685003] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:04.489 [2024-05-15 11:35:34.685107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:04.489 [2024-05-15 11:35:34.685197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:04.489 [2024-05-15 11:35:34.685199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.748 11:35:35 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:04.748 11:35:35 -- common/autotest_common.sh@860 -- # return 0 00:12:04.748 11:35:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:04.748 11:35:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:04.748 11:35:35 -- common/autotest_common.sh@10 -- # set +x 00:12:04.748 11:35:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:04.748 11:35:35 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:05.007 [2024-05-15 11:35:35.564608] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ee9400/0x1eed8f0) succeed. 00:12:05.007 [2024-05-15 11:35:35.575160] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1eea9a0/0x1f2ef80) succeed. 
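
With both IB devices up, nvmf_lvol.sh builds its stack purely over JSON-RPC. Condensed from the trace above and below (same arguments as this run; rpc.py stands for the full /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py path):

    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc.py bdev_malloc_create 64 512                 # run twice: Malloc0, Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
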
00:12:05.007 11:35:35 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:05.265 11:35:35 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:12:05.266 11:35:35 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:05.524 11:35:36 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:12:05.524 11:35:36 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:12:05.524 11:35:36 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:12:05.783 11:35:36 -- target/nvmf_lvol.sh@29 -- # lvs=535c88ac-149f-4180-af08-311962617a27 00:12:05.784 11:35:36 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 535c88ac-149f-4180-af08-311962617a27 lvol 20 00:12:06.042 11:35:36 -- target/nvmf_lvol.sh@32 -- # lvol=b8da0287-ba65-4d01-a1a2-3566cf4ee433 00:12:06.042 11:35:36 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:06.301 11:35:36 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b8da0287-ba65-4d01-a1a2-3566cf4ee433 00:12:06.301 11:35:37 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:12:06.559 [2024-05-15 11:35:37.195283] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:06.559 [2024-05-15 11:35:37.195639] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:06.560 11:35:37 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:06.818 11:35:37 -- target/nvmf_lvol.sh@42 -- # perf_pid=2983036 00:12:06.818 11:35:37 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:12:06.818 11:35:37 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:12:06.818 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.760 11:35:38 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot b8da0287-ba65-4d01-a1a2-3566cf4ee433 MY_SNAPSHOT 00:12:08.019 11:35:38 -- target/nvmf_lvol.sh@47 -- # snapshot=2108011f-1e80-46b9-8a10-b597218dc4a7 00:12:08.019 11:35:38 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize b8da0287-ba65-4d01-a1a2-3566cf4ee433 30 00:12:08.278 11:35:38 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 2108011f-1e80-46b9-8a10-b597218dc4a7 MY_CLONE 00:12:08.278 11:35:38 -- target/nvmf_lvol.sh@49 -- # clone=a7085885-a6ab-4b45-9d48-e1da38fc4200 00:12:08.278 11:35:38 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 
a7085885-a6ab-4b45-9d48-e1da38fc4200
00:12:08.537 11:35:39 -- target/nvmf_lvol.sh@53 -- # wait 2983036
00:12:18.514 Initializing NVMe Controllers
00:12:18.514 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0
00:12:18.514 Controller IO queue size 128, less than required.
00:12:18.514 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:12:18.514 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:12:18.514 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:12:18.514 Initialization complete. Launching workers.
00:12:18.514 ========================================================
00:12:18.514 Latency(us)
00:12:18.514 Device Information : IOPS MiB/s Average min max
00:12:18.514 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 17356.30 67.80 7376.61 2134.10 38085.67
00:12:18.514 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17161.10 67.04 7460.57 3277.53 45915.13
00:12:18.514 ========================================================
00:12:18.514 Total : 34517.40 134.83 7418.36 2134.10 45915.13
00:12:18.514
00:12:18.514 11:35:48 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:12:18.514 11:35:48 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b8da0287-ba65-4d01-a1a2-3566cf4ee433
00:12:18.774 11:35:49 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 535c88ac-149f-4180-af08-311962617a27
00:12:18.774 11:35:49 -- target/nvmf_lvol.sh@60 -- # rm -f
00:12:18.774 11:35:49 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:12:18.774 11:35:49 -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:12:18.774 11:35:49 -- nvmf/common.sh@477 -- # nvmfcleanup
00:12:18.774 11:35:49 -- nvmf/common.sh@117 -- # sync
00:12:18.774 11:35:49 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:12:18.774 11:35:49 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:12:18.774 11:35:49 -- nvmf/common.sh@120 -- # set +e
00:12:18.774 11:35:49 -- nvmf/common.sh@121 -- # for i in {1..20}
00:12:18.774 11:35:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:12:18.774 rmmod nvme_rdma
00:12:18.774 rmmod nvme_fabrics
00:12:18.774 11:35:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:12:18.774 11:35:49 -- nvmf/common.sh@124 -- # set -e
00:12:18.774 11:35:49 -- nvmf/common.sh@125 -- # return 0
00:12:18.774 11:35:49 -- nvmf/common.sh@478 -- # '[' -n 2982640 ']'
00:12:18.774 11:35:49 -- nvmf/common.sh@479 -- # killprocess 2982640
00:12:18.774 11:35:49 -- common/autotest_common.sh@946 -- # '[' -z 2982640 ']'
00:12:18.774 11:35:49 -- common/autotest_common.sh@950 -- # kill -0 2982640
00:12:18.774 11:35:49 -- common/autotest_common.sh@951 -- # uname
00:12:18.774 11:35:49 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:12:18.774 11:35:49 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2982640
00:12:18.774 11:35:49 -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:12:18.774 11:35:49 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:12:18.774 11:35:49 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2982640'
00:12:18.774 killing process with pid 2982640
00:12:18.774 11:35:49
-- common/autotest_common.sh@965 -- # kill 2982640 00:12:18.774 [2024-05-15 11:35:49.499524] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:18.774 11:35:49 -- common/autotest_common.sh@970 -- # wait 2982640 00:12:19.033 [2024-05-15 11:35:49.571503] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:12:19.292 11:35:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:19.292 11:35:49 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:12:19.292 00:12:19.292 real 0m21.428s 00:12:19.292 user 1m11.528s 00:12:19.292 sys 0m5.823s 00:12:19.292 11:35:49 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:19.292 11:35:49 -- common/autotest_common.sh@10 -- # set +x 00:12:19.292 ************************************ 00:12:19.292 END TEST nvmf_lvol 00:12:19.292 ************************************ 00:12:19.292 11:35:49 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:12:19.292 11:35:49 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:19.292 11:35:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:19.292 11:35:49 -- common/autotest_common.sh@10 -- # set +x 00:12:19.292 ************************************ 00:12:19.292 START TEST nvmf_lvs_grow 00:12:19.292 ************************************ 00:12:19.292 11:35:49 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:12:19.292 * Looking for test storage... 00:12:19.552 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:19.552 11:35:50 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:19.552 11:35:50 -- nvmf/common.sh@7 -- # uname -s 00:12:19.552 11:35:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.552 11:35:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.552 11:35:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.552 11:35:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.552 11:35:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.552 11:35:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.552 11:35:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.552 11:35:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.552 11:35:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.552 11:35:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.552 11:35:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:12:19.552 11:35:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:12:19.552 11:35:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.552 11:35:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.552 11:35:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:19.552 11:35:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.552 11:35:50 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:19.552 11:35:50 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.552 11:35:50 -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.552 11:35:50 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.552 11:35:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.552 11:35:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.552 11:35:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.552 11:35:50 -- paths/export.sh@5 -- # export PATH 00:12:19.552 11:35:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.552 11:35:50 -- nvmf/common.sh@47 -- # : 0 00:12:19.552 11:35:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:19.552 11:35:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:19.552 11:35:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.552 11:35:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.552 11:35:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.552 11:35:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:19.552 11:35:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:19.552 11:35:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:19.552 11:35:50 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:19.552 11:35:50 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:19.552 11:35:50 -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:12:19.552 11:35:50 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:12:19.552 11:35:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
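
nvmftestinit repeats the same kernel-side preparation for every test script. On this phy setup it reduces to loading the RDMA core stack and, once addresses are assigned, the initiator module (a sketch of the load_ib_rdma_modules / modprobe nvme-rdma steps traced below; the -a flag is an assumption here, the harness issues one modprobe per module):

    modprobe -a ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm
    modprobe nvme-rdma
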
00:12:19.552 11:35:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:19.552 11:35:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:19.552 11:35:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:19.552 11:35:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.552 11:35:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:19.552 11:35:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.552 11:35:50 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:19.552 11:35:50 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:19.552 11:35:50 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:19.552 11:35:50 -- common/autotest_common.sh@10 -- # set +x 00:12:24.825 11:35:55 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:24.825 11:35:55 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:24.825 11:35:55 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:24.825 11:35:55 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:24.825 11:35:55 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:24.826 11:35:55 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:24.826 11:35:55 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:24.826 11:35:55 -- nvmf/common.sh@295 -- # net_devs=() 00:12:24.826 11:35:55 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:24.826 11:35:55 -- nvmf/common.sh@296 -- # e810=() 00:12:24.826 11:35:55 -- nvmf/common.sh@296 -- # local -ga e810 00:12:24.826 11:35:55 -- nvmf/common.sh@297 -- # x722=() 00:12:24.826 11:35:55 -- nvmf/common.sh@297 -- # local -ga x722 00:12:24.826 11:35:55 -- nvmf/common.sh@298 -- # mlx=() 00:12:24.826 11:35:55 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:24.826 11:35:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:24.826 11:35:55 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:24.826 11:35:55 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:24.826 11:35:55 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:24.826 11:35:55 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:24.826 11:35:55 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:24.826 11:35:55 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:24.826 11:35:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:24.826 11:35:55 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:24.826 11:35:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:24.826 11:35:55 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:24.826 11:35:55 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:24.826 11:35:55 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:24.826 11:35:55 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:24.826 11:35:55 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:24.826 11:35:55 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:24.826 11:35:55 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:24.826 11:35:55 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:24.826 11:35:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:24.826 11:35:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:12:24.826 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:12:24.826 11:35:55 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:24.826 11:35:55 -- 
nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:24.826 11:35:55 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:24.826 11:35:55 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:24.826 11:35:55 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:24.826 11:35:55 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:24.826 11:35:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:24.826 11:35:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:12:24.826 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:12:24.826 11:35:55 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:24.826 11:35:55 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:24.826 11:35:55 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:24.826 11:35:55 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:24.826 11:35:55 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:24.826 11:35:55 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:24.826 11:35:55 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:24.826 11:35:55 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:24.826 11:35:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:24.826 11:35:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:24.826 11:35:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:24.826 11:35:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:24.826 11:35:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:12:24.826 Found net devices under 0000:18:00.0: mlx_0_0 00:12:24.826 11:35:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:24.826 11:35:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:24.826 11:35:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:24.826 11:35:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:24.826 11:35:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:24.826 11:35:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:12:24.826 Found net devices under 0000:18:00.1: mlx_0_1 00:12:24.826 11:35:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:24.826 11:35:55 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:24.826 11:35:55 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:24.826 11:35:55 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:24.826 11:35:55 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:12:24.826 11:35:55 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:12:24.826 11:35:55 -- nvmf/common.sh@409 -- # rdma_device_init 00:12:24.826 11:35:55 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:12:24.826 11:35:55 -- nvmf/common.sh@58 -- # uname 00:12:24.826 11:35:55 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:24.826 11:35:55 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:24.826 11:35:55 -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:24.826 11:35:55 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:24.826 11:35:55 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:24.826 11:35:55 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:24.826 11:35:55 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:24.826 11:35:55 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:24.826 11:35:55 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:12:24.826 11:35:55 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:24.826 11:35:55 -- nvmf/common.sh@73 -- # get_rdma_if_list 
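
As in the earlier pass, each RDMA interface's address is recovered by parsing ip(8) output; the pipeline traced below is equivalent to this one-liner (interface name and address taken from this run):

    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8
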
00:12:24.826 11:35:55 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:24.826 11:35:55 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:24.826 11:35:55 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:24.826 11:35:55 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:24.826 11:35:55 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:24.826 11:35:55 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:24.826 11:35:55 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:24.826 11:35:55 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:24.826 11:35:55 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:24.826 11:35:55 -- nvmf/common.sh@105 -- # continue 2 00:12:24.826 11:35:55 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:24.826 11:35:55 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:24.826 11:35:55 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:24.826 11:35:55 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:24.826 11:35:55 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:24.826 11:35:55 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:24.826 11:35:55 -- nvmf/common.sh@105 -- # continue 2 00:12:24.826 11:35:55 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:24.826 11:35:55 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:24.826 11:35:55 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:24.826 11:35:55 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:24.826 11:35:55 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:24.826 11:35:55 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:24.826 11:35:55 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:24.826 11:35:55 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:24.826 11:35:55 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:24.826 28: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:24.826 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:12:24.826 altname enp24s0f0np0 00:12:24.826 altname ens785f0np0 00:12:24.826 inet 192.168.100.8/24 scope global mlx_0_0 00:12:24.826 valid_lft forever preferred_lft forever 00:12:24.826 11:35:55 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:25.085 11:35:55 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:25.085 11:35:55 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:25.085 11:35:55 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:25.085 11:35:55 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:25.085 11:35:55 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:25.085 11:35:55 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:25.085 11:35:55 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:25.085 11:35:55 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:25.085 29: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:25.085 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:12:25.085 altname enp24s0f1np1 00:12:25.085 altname ens785f1np1 00:12:25.085 inet 192.168.100.9/24 scope global mlx_0_1 00:12:25.085 valid_lft forever preferred_lft forever 00:12:25.085 11:35:55 -- nvmf/common.sh@411 -- # return 0 00:12:25.085 11:35:55 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:25.085 11:35:55 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:25.085 11:35:55 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:12:25.085 11:35:55 -- nvmf/common.sh@445 -- # 
get_available_rdma_ips 00:12:25.085 11:35:55 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:25.085 11:35:55 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:25.085 11:35:55 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:25.085 11:35:55 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:25.085 11:35:55 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:25.085 11:35:55 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:25.085 11:35:55 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:25.085 11:35:55 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:25.085 11:35:55 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:25.085 11:35:55 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:25.085 11:35:55 -- nvmf/common.sh@105 -- # continue 2 00:12:25.085 11:35:55 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:25.085 11:35:55 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:25.085 11:35:55 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:25.085 11:35:55 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:25.085 11:35:55 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:25.085 11:35:55 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:25.085 11:35:55 -- nvmf/common.sh@105 -- # continue 2 00:12:25.085 11:35:55 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:25.085 11:35:55 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:25.085 11:35:55 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:25.085 11:35:55 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:25.085 11:35:55 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:25.085 11:35:55 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:25.085 11:35:55 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:25.085 11:35:55 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:25.085 11:35:55 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:25.085 11:35:55 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:25.085 11:35:55 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:25.085 11:35:55 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:25.085 11:35:55 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:12:25.085 192.168.100.9' 00:12:25.085 11:35:55 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:12:25.085 192.168.100.9' 00:12:25.085 11:35:55 -- nvmf/common.sh@446 -- # head -n 1 00:12:25.085 11:35:55 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:25.085 11:35:55 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:12:25.085 192.168.100.9' 00:12:25.085 11:35:55 -- nvmf/common.sh@447 -- # tail -n +2 00:12:25.085 11:35:55 -- nvmf/common.sh@447 -- # head -n 1 00:12:25.085 11:35:55 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:25.085 11:35:55 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:12:25.085 11:35:55 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:25.085 11:35:55 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:12:25.085 11:35:55 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:12:25.085 11:35:55 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:12:25.085 11:35:55 -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:12:25.085 11:35:55 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:25.085 11:35:55 -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:25.085 11:35:55 -- 
common/autotest_common.sh@10 -- # set +x 00:12:25.085 11:35:55 -- nvmf/common.sh@470 -- # nvmfpid=2987397 00:12:25.085 11:35:55 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:25.085 11:35:55 -- nvmf/common.sh@471 -- # waitforlisten 2987397 00:12:25.085 11:35:55 -- common/autotest_common.sh@827 -- # '[' -z 2987397 ']' 00:12:25.085 11:35:55 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.085 11:35:55 -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:25.085 11:35:55 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.085 11:35:55 -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:25.085 11:35:55 -- common/autotest_common.sh@10 -- # set +x 00:12:25.085 [2024-05-15 11:35:55.775701] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:12:25.085 [2024-05-15 11:35:55.775762] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.085 EAL: No free 2048 kB hugepages reported on node 1 00:12:25.085 [2024-05-15 11:35:55.847995] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.344 [2024-05-15 11:35:55.939190] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:25.344 [2024-05-15 11:35:55.939233] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:25.344 [2024-05-15 11:35:55.939244] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:25.344 [2024-05-15 11:35:55.939255] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:25.344 [2024-05-15 11:35:55.939264] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:25.344 [2024-05-15 11:35:55.939286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.910 11:35:56 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:25.910 11:35:56 -- common/autotest_common.sh@860 -- # return 0 00:12:25.910 11:35:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:25.910 11:35:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:25.910 11:35:56 -- common/autotest_common.sh@10 -- # set +x 00:12:25.910 11:35:56 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:25.910 11:35:56 -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:26.169 [2024-05-15 11:35:56.809022] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x22afdb0/0x22b42a0) succeed. 00:12:26.169 [2024-05-15 11:35:56.818589] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x22b12b0/0x22f5930) succeed. 
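
lvs_grow_clean verifies that an lvstore sees new capacity after its backing AIO file is enlarged and rescanned. Stripped of the harness plumbing, the flow traced below is approximately the following (file name illustrative; sizes, cluster size and jq filter are the ones used here):

    truncate -s 200M aio_bdev_file
    rpc.py bdev_aio_create aio_bdev_file aio_bdev 4096
    lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    rpc.py bdev_lvol_create -u "$lvs" lvol 150
    truncate -s 400M aio_bdev_file
    rpc.py bdev_aio_rescan aio_bdev
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # still 49: a rescan alone does not grow the lvstore
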
00:12:26.169 11:35:56 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:12:26.169 11:35:56 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:26.169 11:35:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:26.169 11:35:56 -- common/autotest_common.sh@10 -- # set +x 00:12:26.169 ************************************ 00:12:26.169 START TEST lvs_grow_clean 00:12:26.169 ************************************ 00:12:26.169 11:35:56 -- common/autotest_common.sh@1121 -- # lvs_grow 00:12:26.169 11:35:56 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:26.169 11:35:56 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:26.169 11:35:56 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:26.428 11:35:56 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:26.428 11:35:56 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:26.428 11:35:56 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:26.428 11:35:56 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:26.428 11:35:56 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:26.428 11:35:56 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:26.428 11:35:57 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:26.428 11:35:57 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:26.687 11:35:57 -- target/nvmf_lvs_grow.sh@28 -- # lvs=afc26e8d-e166-4182-94dc-80114b115d2c 00:12:26.687 11:35:57 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u afc26e8d-e166-4182-94dc-80114b115d2c 00:12:26.687 11:35:57 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:26.946 11:35:57 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:26.946 11:35:57 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:26.946 11:35:57 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u afc26e8d-e166-4182-94dc-80114b115d2c lvol 150 00:12:26.946 11:35:57 -- target/nvmf_lvs_grow.sh@33 -- # lvol=e4458cc3-19e7-4317-8ba0-2e6256faeb20 00:12:26.946 11:35:57 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:26.946 11:35:57 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:27.205 [2024-05-15 11:35:57.830694] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:27.205 [2024-05-15 11:35:57.830755] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:27.205 true 00:12:27.205 11:35:57 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u afc26e8d-e166-4182-94dc-80114b115d2c 00:12:27.205 11:35:57 -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:12:27.464 11:35:58 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:27.464 11:35:58 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:27.464 11:35:58 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e4458cc3-19e7-4317-8ba0-2e6256faeb20 00:12:27.722 11:35:58 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:12:27.982 [2024-05-15 11:35:58.520613] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:27.982 [2024-05-15 11:35:58.520954] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:27.982 11:35:58 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:27.982 11:35:58 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2987808 00:12:27.982 11:35:58 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:27.982 11:35:58 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2987808 /var/tmp/bdevperf.sock 00:12:27.982 11:35:58 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:27.982 11:35:58 -- common/autotest_common.sh@827 -- # '[' -z 2987808 ']' 00:12:27.982 11:35:58 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:27.982 11:35:58 -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:27.982 11:35:58 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:27.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:27.982 11:35:58 -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:27.982 11:35:58 -- common/autotest_common.sh@10 -- # set +x 00:12:27.982 [2024-05-15 11:35:58.731692] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
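
Condensed out of the xtrace above, the grow fixture is a short sequence: back an lvstore with a 200M AIO file, carve a 150M lvol, grow the file to 400M, and rescan so the lvstore can claim the new blocks. A sketch under those sizes (the backing-file path and lvstore UUID are placeholders; each run generates its own):

    # Sizes match the locals at the top of lvs_grow: 200M initial, 400M final, 150M lvol.
    truncate -s 200M /tmp/aio_file
    scripts/rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096
    scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs
    scripts/rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150
    truncate -s 400M /tmp/aio_file
    scripts/rpc.py bdev_aio_rescan aio_bdev
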
00:12:27.982 [2024-05-15 11:35:58.731751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2987808 ] 00:12:28.240 EAL: No free 2048 kB hugepages reported on node 1 00:12:28.240 [2024-05-15 11:35:58.804828] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.240 [2024-05-15 11:35:58.896248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.808 11:35:59 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:28.808 11:35:59 -- common/autotest_common.sh@860 -- # return 0 00:12:28.808 11:35:59 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:29.067 Nvme0n1 00:12:29.067 11:35:59 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:29.326 [ 00:12:29.326 { 00:12:29.326 "name": "Nvme0n1", 00:12:29.326 "aliases": [ 00:12:29.326 "e4458cc3-19e7-4317-8ba0-2e6256faeb20" 00:12:29.326 ], 00:12:29.326 "product_name": "NVMe disk", 00:12:29.326 "block_size": 4096, 00:12:29.326 "num_blocks": 38912, 00:12:29.326 "uuid": "e4458cc3-19e7-4317-8ba0-2e6256faeb20", 00:12:29.326 "assigned_rate_limits": { 00:12:29.326 "rw_ios_per_sec": 0, 00:12:29.326 "rw_mbytes_per_sec": 0, 00:12:29.326 "r_mbytes_per_sec": 0, 00:12:29.326 "w_mbytes_per_sec": 0 00:12:29.326 }, 00:12:29.326 "claimed": false, 00:12:29.326 "zoned": false, 00:12:29.326 "supported_io_types": { 00:12:29.326 "read": true, 00:12:29.326 "write": true, 00:12:29.326 "unmap": true, 00:12:29.326 "write_zeroes": true, 00:12:29.326 "flush": true, 00:12:29.326 "reset": true, 00:12:29.326 "compare": true, 00:12:29.326 "compare_and_write": true, 00:12:29.326 "abort": true, 00:12:29.326 "nvme_admin": true, 00:12:29.326 "nvme_io": true 00:12:29.326 }, 00:12:29.326 "memory_domains": [ 00:12:29.326 { 00:12:29.326 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:12:29.326 "dma_device_type": 0 00:12:29.326 } 00:12:29.326 ], 00:12:29.326 "driver_specific": { 00:12:29.326 "nvme": [ 00:12:29.326 { 00:12:29.326 "trid": { 00:12:29.326 "trtype": "RDMA", 00:12:29.326 "adrfam": "IPv4", 00:12:29.326 "traddr": "192.168.100.8", 00:12:29.326 "trsvcid": "4420", 00:12:29.326 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:29.326 }, 00:12:29.326 "ctrlr_data": { 00:12:29.326 "cntlid": 1, 00:12:29.326 "vendor_id": "0x8086", 00:12:29.326 "model_number": "SPDK bdev Controller", 00:12:29.326 "serial_number": "SPDK0", 00:12:29.326 "firmware_revision": "24.05", 00:12:29.326 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:29.326 "oacs": { 00:12:29.326 "security": 0, 00:12:29.326 "format": 0, 00:12:29.326 "firmware": 0, 00:12:29.326 "ns_manage": 0 00:12:29.326 }, 00:12:29.326 "multi_ctrlr": true, 00:12:29.326 "ana_reporting": false 00:12:29.326 }, 00:12:29.326 "vs": { 00:12:29.326 "nvme_version": "1.3" 00:12:29.326 }, 00:12:29.326 "ns_data": { 00:12:29.326 "id": 1, 00:12:29.326 "can_share": true 00:12:29.326 } 00:12:29.326 } 00:12:29.326 ], 00:12:29.326 "mp_policy": "active_passive" 00:12:29.326 } 00:12:29.326 } 00:12:29.326 ] 00:12:29.326 11:35:59 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2987993 00:12:29.326 11:35:59 -- target/nvmf_lvs_grow.sh@55 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:29.326 11:35:59 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:29.326 Running I/O for 10 seconds... 00:12:30.704 Latency(us) 00:12:30.704 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:30.704 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:30.704 Nvme0n1 : 1.00 35520.00 138.75 0.00 0.00 0.00 0.00 0.00 00:12:30.704 =================================================================================================================== 00:12:30.704 Total : 35520.00 138.75 0.00 0.00 0.00 0.00 0.00 00:12:30.704 00:12:31.273 11:36:01 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u afc26e8d-e166-4182-94dc-80114b115d2c 00:12:31.530 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:31.530 Nvme0n1 : 2.00 35806.50 139.87 0.00 0.00 0.00 0.00 0.00 00:12:31.530 =================================================================================================================== 00:12:31.530 Total : 35806.50 139.87 0.00 0.00 0.00 0.00 0.00 00:12:31.530 00:12:31.530 true 00:12:31.530 11:36:02 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u afc26e8d-e166-4182-94dc-80114b115d2c 00:12:31.530 11:36:02 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:31.789 11:36:02 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:31.789 11:36:02 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:31.789 11:36:02 -- target/nvmf_lvs_grow.sh@65 -- # wait 2987993 00:12:32.355 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:32.355 Nvme0n1 : 3.00 36051.33 140.83 0.00 0.00 0.00 0.00 0.00 00:12:32.355 =================================================================================================================== 00:12:32.355 Total : 36051.33 140.83 0.00 0.00 0.00 0.00 0.00 00:12:32.355 00:12:33.730 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:33.730 Nvme0n1 : 4.00 36225.75 141.51 0.00 0.00 0.00 0.00 0.00 00:12:33.730 =================================================================================================================== 00:12:33.730 Total : 36225.75 141.51 0.00 0.00 0.00 0.00 0.00 00:12:33.730 00:12:34.667 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:34.667 Nvme0n1 : 5.00 36155.20 141.23 0.00 0.00 0.00 0.00 0.00 00:12:34.667 =================================================================================================================== 00:12:34.667 Total : 36155.20 141.23 0.00 0.00 0.00 0.00 0.00 00:12:34.667 00:12:35.600 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:35.600 Nvme0n1 : 6.00 36203.83 141.42 0.00 0.00 0.00 0.00 0.00 00:12:35.600 =================================================================================================================== 00:12:35.600 Total : 36203.83 141.42 0.00 0.00 0.00 0.00 0.00 00:12:35.600 00:12:36.536 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:36.537 Nvme0n1 : 7.00 36318.86 141.87 0.00 0.00 0.00 0.00 0.00 00:12:36.537 =================================================================================================================== 00:12:36.537 Total : 36318.86 141.87 0.00 0.00 0.00 
0.00 0.00 00:12:36.537 00:12:37.542 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:37.542 Nvme0n1 : 8.00 36392.88 142.16 0.00 0.00 0.00 0.00 0.00 00:12:37.542 =================================================================================================================== 00:12:37.542 Total : 36392.88 142.16 0.00 0.00 0.00 0.00 0.00 00:12:37.542 00:12:38.476 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:38.476 Nvme0n1 : 9.00 36450.78 142.39 0.00 0.00 0.00 0.00 0.00 00:12:38.476 =================================================================================================================== 00:12:38.476 Total : 36450.78 142.39 0.00 0.00 0.00 0.00 0.00 00:12:38.476 00:12:39.410 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:39.410 Nvme0n1 : 10.00 36475.90 142.48 0.00 0.00 0.00 0.00 0.00 00:12:39.410 =================================================================================================================== 00:12:39.410 Total : 36475.90 142.48 0.00 0.00 0.00 0.00 0.00 00:12:39.410 00:12:39.410 00:12:39.410 Latency(us) 00:12:39.410 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:39.410 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:39.410 Nvme0n1 : 10.00 36475.53 142.48 0.00 0.00 3506.18 2478.97 14075.99 00:12:39.410 =================================================================================================================== 00:12:39.410 Total : 36475.53 142.48 0.00 0.00 3506.18 2478.97 14075.99 00:12:39.410 0 00:12:39.410 11:36:10 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2987808 00:12:39.410 11:36:10 -- common/autotest_common.sh@946 -- # '[' -z 2987808 ']' 00:12:39.410 11:36:10 -- common/autotest_common.sh@950 -- # kill -0 2987808 00:12:39.410 11:36:10 -- common/autotest_common.sh@951 -- # uname 00:12:39.410 11:36:10 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:39.410 11:36:10 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2987808 00:12:39.410 11:36:10 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:12:39.410 11:36:10 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:12:39.410 11:36:10 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2987808' 00:12:39.410 killing process with pid 2987808 00:12:39.410 11:36:10 -- common/autotest_common.sh@965 -- # kill 2987808 00:12:39.410 Received shutdown signal, test time was about 10.000000 seconds 00:12:39.410 00:12:39.410 Latency(us) 00:12:39.410 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:39.410 =================================================================================================================== 00:12:39.410 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:39.410 11:36:10 -- common/autotest_common.sh@970 -- # wait 2987808 00:12:39.668 11:36:10 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:39.926 11:36:10 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:40.186 11:36:10 -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u afc26e8d-e166-4182-94dc-80114b115d2c 00:12:40.186 11:36:10 -- target/nvmf_lvs_grow.sh@70 -- # jq -r 
'.[0].free_clusters' 00:12:40.186 11:36:10 -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:40.186 11:36:10 -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:12:40.186 11:36:10 -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:40.445 [2024-05-15 11:36:11.060663] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:40.445 11:36:11 -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u afc26e8d-e166-4182-94dc-80114b115d2c 00:12:40.445 11:36:11 -- common/autotest_common.sh@648 -- # local es=0 00:12:40.445 11:36:11 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u afc26e8d-e166-4182-94dc-80114b115d2c 00:12:40.445 11:36:11 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:40.445 11:36:11 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:40.445 11:36:11 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:40.445 11:36:11 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:40.445 11:36:11 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:40.445 11:36:11 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:40.445 11:36:11 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:40.445 11:36:11 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:12:40.445 11:36:11 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u afc26e8d-e166-4182-94dc-80114b115d2c 00:12:40.704 request: 00:12:40.704 { 00:12:40.704 "uuid": "afc26e8d-e166-4182-94dc-80114b115d2c", 00:12:40.704 "method": "bdev_lvol_get_lvstores", 00:12:40.704 "req_id": 1 00:12:40.704 } 00:12:40.704 Got JSON-RPC error response 00:12:40.704 response: 00:12:40.704 { 00:12:40.704 "code": -19, 00:12:40.704 "message": "No such device" 00:12:40.704 } 00:12:40.704 11:36:11 -- common/autotest_common.sh@651 -- # es=1 00:12:40.704 11:36:11 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:40.704 11:36:11 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:40.704 11:36:11 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:40.704 11:36:11 -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:40.704 aio_bdev 00:12:40.704 11:36:11 -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e4458cc3-19e7-4317-8ba0-2e6256faeb20 00:12:40.704 11:36:11 -- common/autotest_common.sh@895 -- # local bdev_name=e4458cc3-19e7-4317-8ba0-2e6256faeb20 00:12:40.704 11:36:11 -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:40.704 11:36:11 -- common/autotest_common.sh@897 -- # local i 00:12:40.704 11:36:11 -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:40.704 11:36:11 -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:40.704 11:36:11 -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 
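
The NOT wrapper above is the negative assertion of the clean path: deleting the backing aio_bdev hot-removes the lvstore, so a follow-up query must fail. Reduced to a sketch with this run's UUID:

    # Expected to fail with JSON-RPC error -19 ("No such device") after the hot-remove.
    scripts/rpc.py bdev_aio_delete aio_bdev
    scripts/rpc.py bdev_lvol_get_lvstores -u afc26e8d-e166-4182-94dc-80114b115d2c \
        || echo 'lvstore gone, as the test asserts'
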
00:12:40.963 11:36:11 -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e4458cc3-19e7-4317-8ba0-2e6256faeb20 -t 2000 00:12:41.221 [ 00:12:41.221 { 00:12:41.221 "name": "e4458cc3-19e7-4317-8ba0-2e6256faeb20", 00:12:41.221 "aliases": [ 00:12:41.221 "lvs/lvol" 00:12:41.221 ], 00:12:41.221 "product_name": "Logical Volume", 00:12:41.221 "block_size": 4096, 00:12:41.221 "num_blocks": 38912, 00:12:41.221 "uuid": "e4458cc3-19e7-4317-8ba0-2e6256faeb20", 00:12:41.221 "assigned_rate_limits": { 00:12:41.221 "rw_ios_per_sec": 0, 00:12:41.221 "rw_mbytes_per_sec": 0, 00:12:41.221 "r_mbytes_per_sec": 0, 00:12:41.221 "w_mbytes_per_sec": 0 00:12:41.221 }, 00:12:41.221 "claimed": false, 00:12:41.221 "zoned": false, 00:12:41.221 "supported_io_types": { 00:12:41.221 "read": true, 00:12:41.221 "write": true, 00:12:41.221 "unmap": true, 00:12:41.221 "write_zeroes": true, 00:12:41.221 "flush": false, 00:12:41.221 "reset": true, 00:12:41.221 "compare": false, 00:12:41.221 "compare_and_write": false, 00:12:41.221 "abort": false, 00:12:41.221 "nvme_admin": false, 00:12:41.221 "nvme_io": false 00:12:41.221 }, 00:12:41.221 "driver_specific": { 00:12:41.221 "lvol": { 00:12:41.221 "lvol_store_uuid": "afc26e8d-e166-4182-94dc-80114b115d2c", 00:12:41.221 "base_bdev": "aio_bdev", 00:12:41.221 "thin_provision": false, 00:12:41.221 "snapshot": false, 00:12:41.221 "clone": false, 00:12:41.221 "esnap_clone": false 00:12:41.221 } 00:12:41.221 } 00:12:41.221 } 00:12:41.221 ] 00:12:41.221 11:36:11 -- common/autotest_common.sh@903 -- # return 0 00:12:41.221 11:36:11 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u afc26e8d-e166-4182-94dc-80114b115d2c 00:12:41.221 11:36:11 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:41.221 11:36:11 -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:41.221 11:36:11 -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:41.221 11:36:11 -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u afc26e8d-e166-4182-94dc-80114b115d2c 00:12:41.480 11:36:12 -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:41.480 11:36:12 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e4458cc3-19e7-4317-8ba0-2e6256faeb20 00:12:41.739 11:36:12 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u afc26e8d-e166-4182-94dc-80114b115d2c 00:12:41.739 11:36:12 -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:41.998 11:36:12 -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:41.998 00:12:41.998 real 0m15.741s 00:12:41.998 user 0m15.553s 00:12:41.998 sys 0m1.278s 00:12:41.998 11:36:12 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:41.998 11:36:12 -- common/autotest_common.sh@10 -- # set +x 00:12:41.998 ************************************ 00:12:41.998 END TEST lvs_grow_clean 00:12:41.998 ************************************ 00:12:41.998 11:36:12 -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:41.998 11:36:12 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:41.998 11:36:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 
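
The cluster counts asserted through the clean test all follow from the sizes chosen at setup; as a worked check (numbers from the trace, with the one-cluster deficit going to lvstore metadata under the md-pages ratio used here):

    # cluster size     = 4 MiB (--cluster-sz 4194304)
    # 200 MiB backing  -> 49 data clusters before the grow
    # 400 MiB backing  -> 99 data clusters after bdev_lvol_grow_lvstore
    # 150 MiB lvol     -> ceil(150 / 4) = 38 clusters allocated (38912 4K blocks)
    # free after grow  -> 99 - 38 = 61, matching free_clusters=61 above
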
00:12:41.998 11:36:12 -- common/autotest_common.sh@10 -- # set +x 00:12:41.998 ************************************ 00:12:41.998 START TEST lvs_grow_dirty 00:12:41.998 ************************************ 00:12:41.998 11:36:12 -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:12:41.998 11:36:12 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:41.998 11:36:12 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:41.998 11:36:12 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:41.998 11:36:12 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:41.998 11:36:12 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:41.998 11:36:12 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:41.998 11:36:12 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:41.998 11:36:12 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:41.999 11:36:12 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:42.257 11:36:12 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:42.257 11:36:12 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:42.516 11:36:13 -- target/nvmf_lvs_grow.sh@28 -- # lvs=b66bd3c9-8813-481f-82a3-b885f2fed0b3 00:12:42.516 11:36:13 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b66bd3c9-8813-481f-82a3-b885f2fed0b3 00:12:42.516 11:36:13 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:42.775 11:36:13 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:42.775 11:36:13 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:42.775 11:36:13 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b66bd3c9-8813-481f-82a3-b885f2fed0b3 lvol 150 00:12:42.775 11:36:13 -- target/nvmf_lvs_grow.sh@33 -- # lvol=e5869bc9-b87a-490a-aa62-9176dcace708 00:12:42.775 11:36:13 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:42.775 11:36:13 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:43.033 [2024-05-15 11:36:13.642663] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:43.033 [2024-05-15 11:36:13.642721] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:43.033 true 00:12:43.033 11:36:13 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b66bd3c9-8813-481f-82a3-b885f2fed0b3 00:12:43.033 11:36:13 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:43.291 11:36:13 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:43.291 11:36:13 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:43.291 11:36:13 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e5869bc9-b87a-490a-aa62-9176dcace708 00:12:43.549 11:36:14 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:12:43.808 [2024-05-15 11:36:14.336927] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:43.808 11:36:14 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:43.808 11:36:14 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:43.808 11:36:14 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2990546 00:12:43.808 11:36:14 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:43.808 11:36:14 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2990546 /var/tmp/bdevperf.sock 00:12:43.808 11:36:14 -- common/autotest_common.sh@827 -- # '[' -z 2990546 ']' 00:12:43.808 11:36:14 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:43.808 11:36:14 -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:43.808 11:36:14 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:43.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:43.808 11:36:14 -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:43.808 11:36:14 -- common/autotest_common.sh@10 -- # set +x 00:12:43.808 [2024-05-15 11:36:14.551629] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
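
Before bdevperf can attach, the lvol is exported over NVMe-oF; the wiring traced above reduces to three calls (NQN, address, and port taken from the log; the namespace UUID is this run's lvol):

    # Export the lvol as a namespace of cnode0 and listen on RDMA 192.168.100.8:4420.
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 \
        e5869bc9-b87a-490a-aa62-9176dcace708
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t rdma -a 192.168.100.8 -s 4420
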
00:12:43.808 [2024-05-15 11:36:14.551687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2990546 ] 00:12:44.067 EAL: No free 2048 kB hugepages reported on node 1 00:12:44.067 [2024-05-15 11:36:14.622913] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.067 [2024-05-15 11:36:14.704274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:44.635 11:36:15 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:44.635 11:36:15 -- common/autotest_common.sh@860 -- # return 0 00:12:44.635 11:36:15 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:44.895 Nvme0n1 00:12:44.895 11:36:15 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:45.155 [ 00:12:45.155 { 00:12:45.155 "name": "Nvme0n1", 00:12:45.155 "aliases": [ 00:12:45.155 "e5869bc9-b87a-490a-aa62-9176dcace708" 00:12:45.155 ], 00:12:45.155 "product_name": "NVMe disk", 00:12:45.155 "block_size": 4096, 00:12:45.155 "num_blocks": 38912, 00:12:45.155 "uuid": "e5869bc9-b87a-490a-aa62-9176dcace708", 00:12:45.155 "assigned_rate_limits": { 00:12:45.155 "rw_ios_per_sec": 0, 00:12:45.155 "rw_mbytes_per_sec": 0, 00:12:45.155 "r_mbytes_per_sec": 0, 00:12:45.155 "w_mbytes_per_sec": 0 00:12:45.155 }, 00:12:45.155 "claimed": false, 00:12:45.155 "zoned": false, 00:12:45.155 "supported_io_types": { 00:12:45.155 "read": true, 00:12:45.155 "write": true, 00:12:45.155 "unmap": true, 00:12:45.155 "write_zeroes": true, 00:12:45.155 "flush": true, 00:12:45.155 "reset": true, 00:12:45.155 "compare": true, 00:12:45.155 "compare_and_write": true, 00:12:45.155 "abort": true, 00:12:45.155 "nvme_admin": true, 00:12:45.155 "nvme_io": true 00:12:45.155 }, 00:12:45.155 "memory_domains": [ 00:12:45.155 { 00:12:45.155 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:12:45.155 "dma_device_type": 0 00:12:45.155 } 00:12:45.155 ], 00:12:45.155 "driver_specific": { 00:12:45.155 "nvme": [ 00:12:45.155 { 00:12:45.155 "trid": { 00:12:45.155 "trtype": "RDMA", 00:12:45.155 "adrfam": "IPv4", 00:12:45.155 "traddr": "192.168.100.8", 00:12:45.155 "trsvcid": "4420", 00:12:45.155 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:45.155 }, 00:12:45.155 "ctrlr_data": { 00:12:45.155 "cntlid": 1, 00:12:45.155 "vendor_id": "0x8086", 00:12:45.155 "model_number": "SPDK bdev Controller", 00:12:45.155 "serial_number": "SPDK0", 00:12:45.155 "firmware_revision": "24.05", 00:12:45.155 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:45.155 "oacs": { 00:12:45.155 "security": 0, 00:12:45.155 "format": 0, 00:12:45.155 "firmware": 0, 00:12:45.155 "ns_manage": 0 00:12:45.155 }, 00:12:45.155 "multi_ctrlr": true, 00:12:45.155 "ana_reporting": false 00:12:45.155 }, 00:12:45.155 "vs": { 00:12:45.155 "nvme_version": "1.3" 00:12:45.155 }, 00:12:45.155 "ns_data": { 00:12:45.155 "id": 1, 00:12:45.155 "can_share": true 00:12:45.155 } 00:12:45.155 } 00:12:45.155 ], 00:12:45.155 "mp_policy": "active_passive" 00:12:45.155 } 00:12:45.155 } 00:12:45.155 ] 00:12:45.155 11:36:15 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2990733 00:12:45.155 11:36:15 -- target/nvmf_lvs_grow.sh@55 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:45.155 11:36:15 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:45.155 Running I/O for 10 seconds... 00:12:46.532 Latency(us) 00:12:46.532 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:46.532 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:46.532 Nvme0n1 : 1.00 35845.00 140.02 0.00 0.00 0.00 0.00 0.00 00:12:46.532 =================================================================================================================== 00:12:46.532 Total : 35845.00 140.02 0.00 0.00 0.00 0.00 0.00 00:12:46.532 00:12:47.100 11:36:17 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b66bd3c9-8813-481f-82a3-b885f2fed0b3 00:12:47.359 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:47.359 Nvme0n1 : 2.00 36157.00 141.24 0.00 0.00 0.00 0.00 0.00 00:12:47.359 =================================================================================================================== 00:12:47.359 Total : 36157.00 141.24 0.00 0.00 0.00 0.00 0.00 00:12:47.359 00:12:47.359 true 00:12:47.359 11:36:17 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b66bd3c9-8813-481f-82a3-b885f2fed0b3 00:12:47.359 11:36:17 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:47.618 11:36:18 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:47.618 11:36:18 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:47.618 11:36:18 -- target/nvmf_lvs_grow.sh@65 -- # wait 2990733 00:12:48.186 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:48.186 Nvme0n1 : 3.00 36265.33 141.66 0.00 0.00 0.00 0.00 0.00 00:12:48.186 =================================================================================================================== 00:12:48.186 Total : 36265.33 141.66 0.00 0.00 0.00 0.00 0.00 00:12:48.186 00:12:49.564 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:49.564 Nvme0n1 : 4.00 36406.75 142.21 0.00 0.00 0.00 0.00 0.00 00:12:49.564 =================================================================================================================== 00:12:49.564 Total : 36406.75 142.21 0.00 0.00 0.00 0.00 0.00 00:12:49.564 00:12:50.500 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:50.500 Nvme0n1 : 5.00 36370.20 142.07 0.00 0.00 0.00 0.00 0.00 00:12:50.500 =================================================================================================================== 00:12:50.500 Total : 36370.20 142.07 0.00 0.00 0.00 0.00 0.00 00:12:50.500 00:12:51.437 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:51.437 Nvme0n1 : 6.00 36395.33 142.17 0.00 0.00 0.00 0.00 0.00 00:12:51.437 =================================================================================================================== 00:12:51.437 Total : 36395.33 142.17 0.00 0.00 0.00 0.00 0.00 00:12:51.437 00:12:52.373 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:52.373 Nvme0n1 : 7.00 36442.71 142.35 0.00 0.00 0.00 0.00 0.00 00:12:52.373 =================================================================================================================== 00:12:52.373 Total : 36442.71 142.35 0.00 0.00 0.00 
0.00 0.00 00:12:52.373 00:12:53.309 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:53.309 Nvme0n1 : 8.00 36436.25 142.33 0.00 0.00 0.00 0.00 0.00 00:12:53.309 =================================================================================================================== 00:12:53.309 Total : 36436.25 142.33 0.00 0.00 0.00 0.00 0.00 00:12:53.309 00:12:54.246 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:54.246 Nvme0n1 : 9.00 36490.67 142.54 0.00 0.00 0.00 0.00 0.00 00:12:54.246 =================================================================================================================== 00:12:54.246 Total : 36490.67 142.54 0.00 0.00 0.00 0.00 0.00 00:12:54.246 00:12:55.181 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:55.181 Nvme0n1 : 10.00 36511.50 142.62 0.00 0.00 0.00 0.00 0.00 00:12:55.181 =================================================================================================================== 00:12:55.182 Total : 36511.50 142.62 0.00 0.00 0.00 0.00 0.00 00:12:55.182 00:12:55.182 00:12:55.182 Latency(us) 00:12:55.182 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.182 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:55.182 Nvme0n1 : 10.00 36511.37 142.62 0.00 0.00 3502.68 2251.02 14019.01 00:12:55.182 =================================================================================================================== 00:12:55.182 Total : 36511.37 142.62 0.00 0.00 3502.68 2251.02 14019.01 00:12:55.182 0 00:12:55.182 11:36:25 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2990546 00:12:55.182 11:36:25 -- common/autotest_common.sh@946 -- # '[' -z 2990546 ']' 00:12:55.182 11:36:25 -- common/autotest_common.sh@950 -- # kill -0 2990546 00:12:55.182 11:36:25 -- common/autotest_common.sh@951 -- # uname 00:12:55.182 11:36:25 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:55.447 11:36:25 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2990546 00:12:55.447 11:36:25 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:12:55.447 11:36:25 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:12:55.447 11:36:25 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2990546' 00:12:55.447 killing process with pid 2990546 00:12:55.447 11:36:25 -- common/autotest_common.sh@965 -- # kill 2990546 00:12:55.447 Received shutdown signal, test time was about 10.000000 seconds 00:12:55.447 00:12:55.447 Latency(us) 00:12:55.447 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.447 =================================================================================================================== 00:12:55.447 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:55.447 11:36:25 -- common/autotest_common.sh@970 -- # wait 2990546 00:12:55.705 11:36:26 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:55.705 11:36:26 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:55.965 11:36:26 -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b66bd3c9-8813-481f-82a3-b885f2fed0b3 00:12:55.965 11:36:26 -- target/nvmf_lvs_grow.sh@70 -- # jq -r 
'.[0].free_clusters' 00:12:56.224 11:36:26 -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:56.224 11:36:26 -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:56.224 11:36:26 -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2987397 00:12:56.224 11:36:26 -- target/nvmf_lvs_grow.sh@75 -- # wait 2987397 00:12:56.224 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2987397 Killed "${NVMF_APP[@]}" "$@" 00:12:56.224 11:36:26 -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:56.224 11:36:26 -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:56.224 11:36:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:56.224 11:36:26 -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:56.224 11:36:26 -- common/autotest_common.sh@10 -- # set +x 00:12:56.224 11:36:26 -- nvmf/common.sh@470 -- # nvmfpid=2992197 00:12:56.224 11:36:26 -- nvmf/common.sh@471 -- # waitforlisten 2992197 00:12:56.224 11:36:26 -- common/autotest_common.sh@827 -- # '[' -z 2992197 ']' 00:12:56.224 11:36:26 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.224 11:36:26 -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:56.224 11:36:26 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.224 11:36:26 -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:56.224 11:36:26 -- common/autotest_common.sh@10 -- # set +x 00:12:56.224 11:36:26 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:56.224 [2024-05-15 11:36:26.891920] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:12:56.224 [2024-05-15 11:36:26.891978] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:56.224 EAL: No free 2048 kB hugepages reported on node 1 00:12:56.224 [2024-05-15 11:36:26.962881] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.483 [2024-05-15 11:36:27.050547] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:56.483 [2024-05-15 11:36:27.050588] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:56.483 [2024-05-15 11:36:27.050598] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:56.483 [2024-05-15 11:36:27.050606] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:56.483 [2024-05-15 11:36:27.050614] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
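
This is the step that makes the dirty variant dirty: the original target (pid 2987397) is killed with SIGKILL while the lvstore is still open, a fresh nvmf_tgt is started, and re-creating the aio_bdev forces blobstore recovery rather than a clean load, which is exactly the "Performing recovery on blobstore" notice that follows. In outline (pid and flags from this run; the backing-file path is abbreviated):

    kill -9 2987397                              # dirty shutdown, lvstore left open
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # restart the target app
    scripts/rpc.py bdev_aio_create <aio-file> aio_bdev 4096
    # examine replays the lvstore metadata: blobs 0x0 and 0x1 are recovered
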
00:12:56.483 [2024-05-15 11:36:27.050634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.052 11:36:27 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:57.052 11:36:27 -- common/autotest_common.sh@860 -- # return 0 00:12:57.052 11:36:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:57.052 11:36:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:57.052 11:36:27 -- common/autotest_common.sh@10 -- # set +x 00:12:57.052 11:36:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:57.052 11:36:27 -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:57.311 [2024-05-15 11:36:27.893609] blobstore.c:4789:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:57.311 [2024-05-15 11:36:27.893698] blobstore.c:4736:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:57.311 [2024-05-15 11:36:27.893731] blobstore.c:4736:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:57.311 11:36:27 -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:57.311 11:36:27 -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e5869bc9-b87a-490a-aa62-9176dcace708 00:12:57.311 11:36:27 -- common/autotest_common.sh@895 -- # local bdev_name=e5869bc9-b87a-490a-aa62-9176dcace708 00:12:57.311 11:36:27 -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:57.311 11:36:27 -- common/autotest_common.sh@897 -- # local i 00:12:57.311 11:36:27 -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:57.311 11:36:27 -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:57.311 11:36:27 -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:57.571 11:36:28 -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e5869bc9-b87a-490a-aa62-9176dcace708 -t 2000 00:12:57.571 [ 00:12:57.571 { 00:12:57.571 "name": "e5869bc9-b87a-490a-aa62-9176dcace708", 00:12:57.571 "aliases": [ 00:12:57.571 "lvs/lvol" 00:12:57.571 ], 00:12:57.571 "product_name": "Logical Volume", 00:12:57.571 "block_size": 4096, 00:12:57.571 "num_blocks": 38912, 00:12:57.571 "uuid": "e5869bc9-b87a-490a-aa62-9176dcace708", 00:12:57.571 "assigned_rate_limits": { 00:12:57.571 "rw_ios_per_sec": 0, 00:12:57.571 "rw_mbytes_per_sec": 0, 00:12:57.571 "r_mbytes_per_sec": 0, 00:12:57.571 "w_mbytes_per_sec": 0 00:12:57.571 }, 00:12:57.571 "claimed": false, 00:12:57.571 "zoned": false, 00:12:57.571 "supported_io_types": { 00:12:57.571 "read": true, 00:12:57.571 "write": true, 00:12:57.571 "unmap": true, 00:12:57.571 "write_zeroes": true, 00:12:57.571 "flush": false, 00:12:57.571 "reset": true, 00:12:57.571 "compare": false, 00:12:57.571 "compare_and_write": false, 00:12:57.571 "abort": false, 00:12:57.571 "nvme_admin": false, 00:12:57.571 "nvme_io": false 00:12:57.571 }, 00:12:57.571 "driver_specific": { 00:12:57.571 "lvol": { 00:12:57.571 "lvol_store_uuid": "b66bd3c9-8813-481f-82a3-b885f2fed0b3", 00:12:57.571 "base_bdev": "aio_bdev", 00:12:57.571 "thin_provision": false, 00:12:57.571 "snapshot": false, 00:12:57.571 "clone": false, 00:12:57.571 "esnap_clone": false 00:12:57.571 } 00:12:57.571 } 00:12:57.571 } 00:12:57.571 ] 00:12:57.571 11:36:28 -- common/autotest_common.sh@903 -- # return 0 00:12:57.571 11:36:28 -- target/nvmf_lvs_grow.sh@79 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b66bd3c9-8813-481f-82a3-b885f2fed0b3 00:12:57.571 11:36:28 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:57.830 11:36:28 -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:57.830 11:36:28 -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:57.830 11:36:28 -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b66bd3c9-8813-481f-82a3-b885f2fed0b3 00:12:58.090 11:36:28 -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:58.090 11:36:28 -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:58.090 [2024-05-15 11:36:28.777693] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:58.090 11:36:28 -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b66bd3c9-8813-481f-82a3-b885f2fed0b3 00:12:58.090 11:36:28 -- common/autotest_common.sh@648 -- # local es=0 00:12:58.090 11:36:28 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b66bd3c9-8813-481f-82a3-b885f2fed0b3 00:12:58.090 11:36:28 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:58.090 11:36:28 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:58.090 11:36:28 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:58.090 11:36:28 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:58.090 11:36:28 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:58.090 11:36:28 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:58.090 11:36:28 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:58.090 11:36:28 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:12:58.090 11:36:28 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b66bd3c9-8813-481f-82a3-b885f2fed0b3 00:12:58.349 request: 00:12:58.349 { 00:12:58.349 "uuid": "b66bd3c9-8813-481f-82a3-b885f2fed0b3", 00:12:58.349 "method": "bdev_lvol_get_lvstores", 00:12:58.349 "req_id": 1 00:12:58.349 } 00:12:58.349 Got JSON-RPC error response 00:12:58.349 response: 00:12:58.349 { 00:12:58.349 "code": -19, 00:12:58.349 "message": "No such device" 00:12:58.349 } 00:12:58.349 11:36:28 -- common/autotest_common.sh@651 -- # es=1 00:12:58.349 11:36:28 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:58.349 11:36:28 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:58.349 11:36:28 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:58.349 11:36:28 -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:58.608 aio_bdev 00:12:58.609 11:36:29 -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e5869bc9-b87a-490a-aa62-9176dcace708 00:12:58.609 11:36:29 -- common/autotest_common.sh@895 -- # local 
bdev_name=e5869bc9-b87a-490a-aa62-9176dcace708 00:12:58.609 11:36:29 -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:58.609 11:36:29 -- common/autotest_common.sh@897 -- # local i 00:12:58.609 11:36:29 -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:58.609 11:36:29 -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:58.609 11:36:29 -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:58.609 11:36:29 -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e5869bc9-b87a-490a-aa62-9176dcace708 -t 2000 00:12:58.868 [ 00:12:58.868 { 00:12:58.868 "name": "e5869bc9-b87a-490a-aa62-9176dcace708", 00:12:58.868 "aliases": [ 00:12:58.868 "lvs/lvol" 00:12:58.868 ], 00:12:58.868 "product_name": "Logical Volume", 00:12:58.868 "block_size": 4096, 00:12:58.868 "num_blocks": 38912, 00:12:58.868 "uuid": "e5869bc9-b87a-490a-aa62-9176dcace708", 00:12:58.868 "assigned_rate_limits": { 00:12:58.868 "rw_ios_per_sec": 0, 00:12:58.868 "rw_mbytes_per_sec": 0, 00:12:58.868 "r_mbytes_per_sec": 0, 00:12:58.868 "w_mbytes_per_sec": 0 00:12:58.868 }, 00:12:58.868 "claimed": false, 00:12:58.868 "zoned": false, 00:12:58.868 "supported_io_types": { 00:12:58.868 "read": true, 00:12:58.868 "write": true, 00:12:58.868 "unmap": true, 00:12:58.868 "write_zeroes": true, 00:12:58.868 "flush": false, 00:12:58.868 "reset": true, 00:12:58.868 "compare": false, 00:12:58.868 "compare_and_write": false, 00:12:58.868 "abort": false, 00:12:58.868 "nvme_admin": false, 00:12:58.868 "nvme_io": false 00:12:58.868 }, 00:12:58.868 "driver_specific": { 00:12:58.868 "lvol": { 00:12:58.868 "lvol_store_uuid": "b66bd3c9-8813-481f-82a3-b885f2fed0b3", 00:12:58.868 "base_bdev": "aio_bdev", 00:12:58.868 "thin_provision": false, 00:12:58.868 "snapshot": false, 00:12:58.868 "clone": false, 00:12:58.868 "esnap_clone": false 00:12:58.868 } 00:12:58.868 } 00:12:58.868 } 00:12:58.868 ] 00:12:58.868 11:36:29 -- common/autotest_common.sh@903 -- # return 0 00:12:58.868 11:36:29 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b66bd3c9-8813-481f-82a3-b885f2fed0b3 00:12:58.868 11:36:29 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:59.126 11:36:29 -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:59.126 11:36:29 -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b66bd3c9-8813-481f-82a3-b885f2fed0b3 00:12:59.126 11:36:29 -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:59.126 11:36:29 -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:59.126 11:36:29 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e5869bc9-b87a-490a-aa62-9176dcace708 00:12:59.385 11:36:30 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b66bd3c9-8813-481f-82a3-b885f2fed0b3 00:12:59.643 11:36:30 -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:59.902 11:36:30 -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:59.902 00:12:59.902 real 0m17.716s 00:12:59.902 user 0m45.414s 00:12:59.902 sys 0m3.648s 00:12:59.902 11:36:30 -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:12:59.902 11:36:30 -- common/autotest_common.sh@10 -- # set +x 00:12:59.902 ************************************ 00:12:59.902 END TEST lvs_grow_dirty 00:12:59.902 ************************************ 00:12:59.902 11:36:30 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:12:59.903 11:36:30 -- common/autotest_common.sh@804 -- # type=--id 00:12:59.903 11:36:30 -- common/autotest_common.sh@805 -- # id=0 00:12:59.903 11:36:30 -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:12:59.903 11:36:30 -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:59.903 11:36:30 -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:12:59.903 11:36:30 -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:12:59.903 11:36:30 -- common/autotest_common.sh@816 -- # for n in $shm_files 00:12:59.903 11:36:30 -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:59.903 nvmf_trace.0 00:12:59.903 11:36:30 -- common/autotest_common.sh@819 -- # return 0 00:12:59.903 11:36:30 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:59.903 11:36:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:59.903 11:36:30 -- nvmf/common.sh@117 -- # sync 00:12:59.903 11:36:30 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:59.903 11:36:30 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:59.903 11:36:30 -- nvmf/common.sh@120 -- # set +e 00:12:59.903 11:36:30 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:59.903 11:36:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:59.903 rmmod nvme_rdma 00:12:59.903 rmmod nvme_fabrics 00:12:59.903 11:36:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:59.903 11:36:30 -- nvmf/common.sh@124 -- # set -e 00:12:59.903 11:36:30 -- nvmf/common.sh@125 -- # return 0 00:12:59.903 11:36:30 -- nvmf/common.sh@478 -- # '[' -n 2992197 ']' 00:12:59.903 11:36:30 -- nvmf/common.sh@479 -- # killprocess 2992197 00:12:59.903 11:36:30 -- common/autotest_common.sh@946 -- # '[' -z 2992197 ']' 00:12:59.903 11:36:30 -- common/autotest_common.sh@950 -- # kill -0 2992197 00:12:59.903 11:36:30 -- common/autotest_common.sh@951 -- # uname 00:12:59.903 11:36:30 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:59.903 11:36:30 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2992197 00:12:59.903 11:36:30 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:59.903 11:36:30 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:59.903 11:36:30 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2992197' 00:12:59.903 killing process with pid 2992197 00:12:59.903 11:36:30 -- common/autotest_common.sh@965 -- # kill 2992197 00:12:59.903 11:36:30 -- common/autotest_common.sh@970 -- # wait 2992197 00:13:00.162 11:36:30 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:00.162 11:36:30 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:13:00.162 00:13:00.162 real 0m40.933s 00:13:00.162 user 1m6.942s 00:13:00.162 sys 0m9.759s 00:13:00.162 11:36:30 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:00.162 11:36:30 -- common/autotest_common.sh@10 -- # set +x 00:13:00.162 ************************************ 00:13:00.162 END TEST nvmf_lvs_grow 00:13:00.162 ************************************ 00:13:00.162 11:36:30 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:13:00.162 11:36:30 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:00.162 11:36:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:00.162 11:36:30 -- common/autotest_common.sh@10 -- # set +x 00:13:00.421 ************************************ 00:13:00.421 START TEST nvmf_bdev_io_wait 00:13:00.422 ************************************ 00:13:00.422 11:36:30 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:13:00.422 * Looking for test storage... 00:13:00.422 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:00.422 11:36:31 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:00.422 11:36:31 -- nvmf/common.sh@7 -- # uname -s 00:13:00.422 11:36:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:00.422 11:36:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:00.422 11:36:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:00.422 11:36:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:00.422 11:36:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:00.422 11:36:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:00.422 11:36:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:00.422 11:36:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:00.422 11:36:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:00.422 11:36:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:00.422 11:36:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:13:00.422 11:36:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:13:00.422 11:36:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.422 11:36:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:00.422 11:36:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:00.422 11:36:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:00.422 11:36:31 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:00.422 11:36:31 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.422 11:36:31 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.422 11:36:31 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.422 11:36:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.422 11:36:31 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.422 11:36:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.422 11:36:31 -- paths/export.sh@5 -- # export PATH 00:13:00.422 11:36:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.422 11:36:31 -- nvmf/common.sh@47 -- # : 0 00:13:00.422 11:36:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:00.422 11:36:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:00.422 11:36:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:00.422 11:36:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:00.422 11:36:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:00.422 11:36:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:00.422 11:36:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:00.422 11:36:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:00.422 11:36:31 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:00.422 11:36:31 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:00.422 11:36:31 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:13:00.422 11:36:31 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:13:00.422 11:36:31 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:00.422 11:36:31 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:00.422 11:36:31 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:00.422 11:36:31 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:00.422 11:36:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.422 11:36:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:00.422 11:36:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.422 11:36:31 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:00.422 11:36:31 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:00.422 11:36:31 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:00.422 11:36:31 -- common/autotest_common.sh@10 -- # set +x 00:13:07.071 11:36:36 -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci 00:13:07.071 11:36:36 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:07.071 11:36:36 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:07.071 11:36:36 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:07.071 11:36:36 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:07.071 11:36:36 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:07.071 11:36:36 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:07.071 11:36:36 -- nvmf/common.sh@295 -- # net_devs=() 00:13:07.071 11:36:36 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:07.071 11:36:36 -- nvmf/common.sh@296 -- # e810=() 00:13:07.071 11:36:36 -- nvmf/common.sh@296 -- # local -ga e810 00:13:07.071 11:36:36 -- nvmf/common.sh@297 -- # x722=() 00:13:07.071 11:36:36 -- nvmf/common.sh@297 -- # local -ga x722 00:13:07.071 11:36:36 -- nvmf/common.sh@298 -- # mlx=() 00:13:07.071 11:36:36 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:07.071 11:36:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:07.071 11:36:36 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:07.071 11:36:36 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:07.071 11:36:36 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:07.071 11:36:36 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:07.071 11:36:36 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:07.071 11:36:36 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:07.071 11:36:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:07.071 11:36:36 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:07.071 11:36:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:07.071 11:36:36 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:07.071 11:36:36 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:07.071 11:36:36 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:07.071 11:36:36 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:07.071 11:36:36 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:07.071 11:36:36 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:07.071 11:36:36 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:07.071 11:36:36 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:07.071 11:36:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:07.071 11:36:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:13:07.071 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:13:07.071 11:36:36 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:07.071 11:36:36 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:07.071 11:36:36 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:07.071 11:36:36 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:07.071 11:36:36 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:07.071 11:36:36 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:07.071 11:36:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:07.071 11:36:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:13:07.071 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:13:07.071 11:36:36 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:07.071 11:36:36 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:07.071 11:36:36 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 
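The trace above is nvmf/common.sh matching PCI functions against a whitelist of NVMe-oF-capable NICs: Intel E810 (0x1592, 0x159b), X722 (0x37d2) and the Mellanox ConnectX family, then narrowing pci_devs to the mlx array because the run is configured for mlx5 (the [[ mlx5 == mlx5 ]] check above). The two hits are ConnectX-4 Lx ports (vendor 0x15b3, device 0x1015) at 0000:18:00.0 and 0000:18:00.1. A minimal way to reproduce the same enumeration by hand, using lspci instead of the suite's pci_bus_cache helpers (the lspci pipeline is an illustration, not what the script runs):

  # List Mellanox (vendor 0x15b3) PCI functions and the netdevs bound to them.
  for pci in $(lspci -Dn -d 15b3: | awk '{print $1}'); do
      echo "Found $pci ($(lspci -n -s "$pci" | awk '{print $3}'))"
      ls "/sys/bus/pci/devices/$pci/net/" 2>/dev/null   # e.g. mlx_0_0, mlx_0_1
  done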
00:13:07.071 11:36:36 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:07.071 11:36:36 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:07.071 11:36:36 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:07.071 11:36:36 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:07.071 11:36:36 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:07.071 11:36:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:07.071 11:36:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.071 11:36:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:07.071 11:36:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.071 11:36:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:13:07.071 Found net devices under 0000:18:00.0: mlx_0_0 00:13:07.071 11:36:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:07.072 11:36:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:07.072 11:36:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.072 11:36:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:07.072 11:36:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.072 11:36:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:13:07.072 Found net devices under 0000:18:00.1: mlx_0_1 00:13:07.072 11:36:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:07.072 11:36:36 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:07.072 11:36:36 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:07.072 11:36:36 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:07.072 11:36:36 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:13:07.072 11:36:36 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:13:07.072 11:36:36 -- nvmf/common.sh@409 -- # rdma_device_init 00:13:07.072 11:36:36 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:13:07.072 11:36:36 -- nvmf/common.sh@58 -- # uname 00:13:07.072 11:36:36 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:07.072 11:36:36 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:07.072 11:36:36 -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:07.072 11:36:36 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:07.072 11:36:36 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:07.072 11:36:36 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:07.072 11:36:36 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:07.072 11:36:36 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:07.072 11:36:36 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:13:07.072 11:36:36 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:07.072 11:36:36 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:07.072 11:36:36 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:07.072 11:36:36 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:07.072 11:36:36 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:07.072 11:36:36 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:07.072 11:36:36 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:07.072 11:36:36 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:07.072 11:36:36 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:07.072 11:36:36 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:07.072 11:36:36 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:07.072 11:36:36 -- nvmf/common.sh@105 -- # continue 2 00:13:07.072 11:36:36 
-- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:07.072 11:36:36 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:07.072 11:36:36 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:07.072 11:36:36 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:07.072 11:36:36 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:07.072 11:36:36 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:07.072 11:36:36 -- nvmf/common.sh@105 -- # continue 2 00:13:07.072 11:36:36 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:07.072 11:36:36 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:07.072 11:36:36 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:07.072 11:36:36 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:07.072 11:36:36 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:07.072 11:36:36 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:07.072 11:36:36 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:07.072 11:36:36 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:07.072 11:36:36 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:07.072 28: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:07.072 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:13:07.072 altname enp24s0f0np0 00:13:07.072 altname ens785f0np0 00:13:07.072 inet 192.168.100.8/24 scope global mlx_0_0 00:13:07.072 valid_lft forever preferred_lft forever 00:13:07.072 11:36:36 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:07.072 11:36:36 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:07.072 11:36:36 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:07.072 11:36:36 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:07.072 11:36:36 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:07.072 11:36:36 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:07.072 11:36:36 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:07.072 11:36:36 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:07.072 11:36:36 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:07.072 29: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:07.072 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:13:07.072 altname enp24s0f1np1 00:13:07.072 altname ens785f1np1 00:13:07.072 inet 192.168.100.9/24 scope global mlx_0_1 00:13:07.072 valid_lft forever preferred_lft forever 00:13:07.072 11:36:36 -- nvmf/common.sh@411 -- # return 0 00:13:07.072 11:36:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:07.072 11:36:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:07.072 11:36:36 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:13:07.072 11:36:36 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:13:07.072 11:36:36 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:07.072 11:36:36 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:07.072 11:36:36 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:07.072 11:36:36 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:07.072 11:36:36 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:07.072 11:36:36 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:07.072 11:36:36 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:07.072 11:36:36 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:07.072 11:36:36 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:07.072 11:36:36 -- nvmf/common.sh@104 -- # echo 
mlx_0_0 00:13:07.072 11:36:36 -- nvmf/common.sh@105 -- # continue 2 00:13:07.072 11:36:36 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:07.072 11:36:36 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:07.072 11:36:36 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:07.072 11:36:36 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:07.072 11:36:36 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:07.072 11:36:36 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:07.072 11:36:36 -- nvmf/common.sh@105 -- # continue 2 00:13:07.072 11:36:36 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:07.072 11:36:36 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:07.072 11:36:36 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:07.072 11:36:36 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:07.072 11:36:36 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:07.072 11:36:36 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:07.072 11:36:36 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:07.072 11:36:36 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:07.072 11:36:36 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:07.072 11:36:36 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:07.072 11:36:36 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:07.072 11:36:36 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:07.072 11:36:36 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:13:07.072 192.168.100.9' 00:13:07.072 11:36:36 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:13:07.072 192.168.100.9' 00:13:07.072 11:36:36 -- nvmf/common.sh@446 -- # head -n 1 00:13:07.072 11:36:36 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:07.072 11:36:36 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:13:07.072 192.168.100.9' 00:13:07.072 11:36:36 -- nvmf/common.sh@447 -- # tail -n +2 00:13:07.072 11:36:36 -- nvmf/common.sh@447 -- # head -n 1 00:13:07.072 11:36:36 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:07.072 11:36:36 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:13:07.072 11:36:36 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:07.072 11:36:36 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:13:07.072 11:36:36 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:13:07.072 11:36:36 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:13:07.072 11:36:36 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:13:07.072 11:36:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:07.072 11:36:36 -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:07.072 11:36:36 -- common/autotest_common.sh@10 -- # set +x 00:13:07.072 11:36:36 -- nvmf/common.sh@470 -- # nvmfpid=2995527 00:13:07.072 11:36:36 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:13:07.072 11:36:36 -- nvmf/common.sh@471 -- # waitforlisten 2995527 00:13:07.072 11:36:36 -- common/autotest_common.sh@827 -- # '[' -z 2995527 ']' 00:13:07.072 11:36:36 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.072 11:36:36 -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:07.072 11:36:36 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
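allocate_nic_ips has just assigned 192.168.100.8/24 and 192.168.100.9/24 to the two ConnectX ports, and get_available_rdma_ips re-reads them into RDMA_IP_LIST, from which NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP are split off with head/tail. Condensed, the harvesting idiom the trace keeps repeating is the following sketch (same helper name as in nvmf/common.sh; it assumes the addresses are already configured):

  get_ip_address() {
      # `ip -o -4` prints one line per address; field 4 is the CIDR, e.g. 192.168.100.8/24.
      ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
  }
  NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
  NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run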
00:13:07.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.072 11:36:36 -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:07.072 11:36:36 -- common/autotest_common.sh@10 -- # set +x 00:13:07.072 [2024-05-15 11:36:37.021627] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:13:07.072 [2024-05-15 11:36:37.021683] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:07.072 EAL: No free 2048 kB hugepages reported on node 1 00:13:07.072 [2024-05-15 11:36:37.094042] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:07.072 [2024-05-15 11:36:37.187567] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:07.072 [2024-05-15 11:36:37.187611] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:07.072 [2024-05-15 11:36:37.187620] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:07.072 [2024-05-15 11:36:37.187629] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:07.072 [2024-05-15 11:36:37.187636] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:07.072 [2024-05-15 11:36:37.187684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:07.072 [2024-05-15 11:36:37.187774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:07.072 [2024-05-15 11:36:37.187857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:07.072 [2024-05-15 11:36:37.187858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.331 11:36:37 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:07.331 11:36:37 -- common/autotest_common.sh@860 -- # return 0 00:13:07.331 11:36:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:07.331 11:36:37 -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:07.331 11:36:37 -- common/autotest_common.sh@10 -- # set +x 00:13:07.331 11:36:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:07.331 11:36:37 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:13:07.331 11:36:37 -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.331 11:36:37 -- common/autotest_common.sh@10 -- # set +x 00:13:07.331 11:36:37 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.331 11:36:37 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:13:07.331 11:36:37 -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.331 11:36:37 -- common/autotest_common.sh@10 -- # set +x 00:13:07.331 11:36:37 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.331 11:36:37 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:07.331 11:36:37 -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.331 11:36:37 -- common/autotest_common.sh@10 -- # set +x 00:13:07.331 [2024-05-15 11:36:37.989600] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1875f00/0x187a3f0) succeed. 00:13:07.331 [2024-05-15 11:36:37.999893] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1877540/0x18bba80) succeed. 
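This is the setup that gives nvmf_bdev_io_wait its teeth: the target comes up with --wait-for-rpc, bdev_set_options -p 5 -c 1 shrinks the global bdev_io pool to five entries (cache of one) before framework_start_init completes startup, and only then is the rdma transport created. Once 128-deep queues arrive, the tiny pool is guaranteed to run dry, so submissions have to wait on the bdev layer's io_wait retry machinery, which is what this test exercises. The same bring-up as a sketch with rpc.py (flags as traced above, paths relative to the spdk checkout):

  build/bin/nvmf_tgt -m 0xF --wait-for-rpc &
  scripts/rpc.py bdev_set_options -p 5 -c 1        # deliberately tiny bdev_io pool
  scripts/rpc.py framework_start_init              # finish the deferred startup
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192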
00:13:07.591 11:36:38 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.591 11:36:38 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:07.591 11:36:38 -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.591 11:36:38 -- common/autotest_common.sh@10 -- # set +x 00:13:07.591 Malloc0 00:13:07.591 11:36:38 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.591 11:36:38 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:07.591 11:36:38 -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.591 11:36:38 -- common/autotest_common.sh@10 -- # set +x 00:13:07.591 11:36:38 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.591 11:36:38 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:07.591 11:36:38 -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.591 11:36:38 -- common/autotest_common.sh@10 -- # set +x 00:13:07.591 11:36:38 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.591 11:36:38 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:07.591 11:36:38 -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.591 11:36:38 -- common/autotest_common.sh@10 -- # set +x 00:13:07.591 [2024-05-15 11:36:38.193436] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:07.591 [2024-05-15 11:36:38.193814] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:07.591 11:36:38 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.591 11:36:38 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2995731 00:13:07.591 11:36:38 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:13:07.591 11:36:38 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:13:07.591 11:36:38 -- target/bdev_io_wait.sh@30 -- # READ_PID=2995733 00:13:07.591 11:36:38 -- nvmf/common.sh@521 -- # config=() 00:13:07.591 11:36:38 -- nvmf/common.sh@521 -- # local subsystem config 00:13:07.591 11:36:38 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:07.591 11:36:38 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:07.591 { 00:13:07.591 "params": { 00:13:07.591 "name": "Nvme$subsystem", 00:13:07.591 "trtype": "$TEST_TRANSPORT", 00:13:07.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:07.591 "adrfam": "ipv4", 00:13:07.591 "trsvcid": "$NVMF_PORT", 00:13:07.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:07.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:07.591 "hdgst": ${hdgst:-false}, 00:13:07.591 "ddgst": ${ddgst:-false} 00:13:07.591 }, 00:13:07.591 "method": "bdev_nvme_attach_controller" 00:13:07.591 } 00:13:07.591 EOF 00:13:07.591 )") 00:13:07.591 11:36:38 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:13:07.591 11:36:38 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:13:07.592 11:36:38 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2995735 00:13:07.592 11:36:38 -- nvmf/common.sh@521 -- # config=() 00:13:07.592 11:36:38 -- nvmf/common.sh@521 -- # local subsystem 
config 00:13:07.592 11:36:38 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:07.592 11:36:38 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:07.592 { 00:13:07.592 "params": { 00:13:07.592 "name": "Nvme$subsystem", 00:13:07.592 "trtype": "$TEST_TRANSPORT", 00:13:07.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:07.592 "adrfam": "ipv4", 00:13:07.592 "trsvcid": "$NVMF_PORT", 00:13:07.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:07.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:07.592 "hdgst": ${hdgst:-false}, 00:13:07.592 "ddgst": ${ddgst:-false} 00:13:07.592 }, 00:13:07.592 "method": "bdev_nvme_attach_controller" 00:13:07.592 } 00:13:07.592 EOF 00:13:07.592 )") 00:13:07.592 11:36:38 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:13:07.592 11:36:38 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:13:07.592 11:36:38 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2995738 00:13:07.592 11:36:38 -- nvmf/common.sh@543 -- # cat 00:13:07.592 11:36:38 -- target/bdev_io_wait.sh@35 -- # sync 00:13:07.592 11:36:38 -- nvmf/common.sh@521 -- # config=() 00:13:07.592 11:36:38 -- nvmf/common.sh@521 -- # local subsystem config 00:13:07.592 11:36:38 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:07.592 11:36:38 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:07.592 { 00:13:07.592 "params": { 00:13:07.592 "name": "Nvme$subsystem", 00:13:07.592 "trtype": "$TEST_TRANSPORT", 00:13:07.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:07.592 "adrfam": "ipv4", 00:13:07.592 "trsvcid": "$NVMF_PORT", 00:13:07.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:07.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:07.592 "hdgst": ${hdgst:-false}, 00:13:07.592 "ddgst": ${ddgst:-false} 00:13:07.592 }, 00:13:07.592 "method": "bdev_nvme_attach_controller" 00:13:07.592 } 00:13:07.592 EOF 00:13:07.592 )") 00:13:07.592 11:36:38 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:13:07.592 11:36:38 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:13:07.592 11:36:38 -- nvmf/common.sh@543 -- # cat 00:13:07.592 11:36:38 -- nvmf/common.sh@521 -- # config=() 00:13:07.592 11:36:38 -- nvmf/common.sh@521 -- # local subsystem config 00:13:07.592 11:36:38 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:07.592 11:36:38 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:07.592 { 00:13:07.592 "params": { 00:13:07.592 "name": "Nvme$subsystem", 00:13:07.592 "trtype": "$TEST_TRANSPORT", 00:13:07.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:07.592 "adrfam": "ipv4", 00:13:07.592 "trsvcid": "$NVMF_PORT", 00:13:07.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:07.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:07.592 "hdgst": ${hdgst:-false}, 00:13:07.592 "ddgst": ${ddgst:-false} 00:13:07.592 }, 00:13:07.592 "method": "bdev_nvme_attach_controller" 00:13:07.592 } 00:13:07.592 EOF 00:13:07.592 )") 00:13:07.592 11:36:38 -- nvmf/common.sh@543 -- # cat 00:13:07.592 11:36:38 -- target/bdev_io_wait.sh@37 -- # wait 2995731 00:13:07.592 11:36:38 -- nvmf/common.sh@543 -- # cat 00:13:07.592 11:36:38 -- nvmf/common.sh@545 -- # jq . 00:13:07.592 11:36:38 -- nvmf/common.sh@545 -- # jq . 00:13:07.592 11:36:38 -- nvmf/common.sh@545 -- # jq . 
00:13:07.592 11:36:38 -- nvmf/common.sh@546 -- # IFS=, 00:13:07.592 11:36:38 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:07.592 "params": { 00:13:07.592 "name": "Nvme1", 00:13:07.592 "trtype": "rdma", 00:13:07.592 "traddr": "192.168.100.8", 00:13:07.592 "adrfam": "ipv4", 00:13:07.592 "trsvcid": "4420", 00:13:07.592 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:07.592 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:07.592 "hdgst": false, 00:13:07.592 "ddgst": false 00:13:07.592 }, 00:13:07.592 "method": "bdev_nvme_attach_controller" 00:13:07.592 }' 00:13:07.592 11:36:38 -- nvmf/common.sh@545 -- # jq . 00:13:07.592 11:36:38 -- nvmf/common.sh@546 -- # IFS=, 00:13:07.592 11:36:38 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:07.592 "params": { 00:13:07.592 "name": "Nvme1", 00:13:07.592 "trtype": "rdma", 00:13:07.592 "traddr": "192.168.100.8", 00:13:07.592 "adrfam": "ipv4", 00:13:07.592 "trsvcid": "4420", 00:13:07.592 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:07.592 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:07.592 "hdgst": false, 00:13:07.592 "ddgst": false 00:13:07.592 }, 00:13:07.592 "method": "bdev_nvme_attach_controller" 00:13:07.592 }' 00:13:07.592 11:36:38 -- nvmf/common.sh@546 -- # IFS=, 00:13:07.592 11:36:38 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:07.592 "params": { 00:13:07.592 "name": "Nvme1", 00:13:07.592 "trtype": "rdma", 00:13:07.592 "traddr": "192.168.100.8", 00:13:07.592 "adrfam": "ipv4", 00:13:07.592 "trsvcid": "4420", 00:13:07.592 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:07.592 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:07.592 "hdgst": false, 00:13:07.592 "ddgst": false 00:13:07.592 }, 00:13:07.592 "method": "bdev_nvme_attach_controller" 00:13:07.592 }' 00:13:07.592 11:36:38 -- nvmf/common.sh@546 -- # IFS=, 00:13:07.592 11:36:38 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:07.592 "params": { 00:13:07.592 "name": "Nvme1", 00:13:07.592 "trtype": "rdma", 00:13:07.592 "traddr": "192.168.100.8", 00:13:07.592 "adrfam": "ipv4", 00:13:07.592 "trsvcid": "4420", 00:13:07.592 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:07.592 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:07.592 "hdgst": false, 00:13:07.592 "ddgst": false 00:13:07.592 }, 00:13:07.592 "method": "bdev_nvme_attach_controller" 00:13:07.592 }' 00:13:07.592 [2024-05-15 11:36:38.248259] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:13:07.592 [2024-05-15 11:36:38.248260] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:13:07.592 [2024-05-15 11:36:38.248334] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-05-15 11:36:38.248335] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:13:07.592 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:13:07.592 [2024-05-15 11:36:38.252839] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:13:07.592 [2024-05-15 11:36:38.252894] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:13:07.592 [2024-05-15 11:36:38.259729] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
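The four resolved JSON blocks above are gen_nvmf_target_json output: one bdev_nvme_attach_controller stanza pointing at nqn.2016-06.io.spdk:cnode1 on 192.168.100.8:4420, handed to each bdevperf as --json /dev/fd/63 through process substitution. The overlapping "Starting SPDK" / DPDK EAL banners are expected here: four bdevperf instances launch concurrently, one per workload, on disjoint core masks, and their stdout interleaves. Condensed from the invocations traced earlier:

  BPERF=build/examples/bdevperf
  $BPERF -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
  $BPERF -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 &
  $BPERF -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
  $BPERF -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
  wait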
00:13:07.592 [2024-05-15 11:36:38.259809] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:13:07.592 EAL: No free 2048 kB hugepages reported on node 1 00:13:07.851 EAL: No free 2048 kB hugepages reported on node 1 00:13:07.851 [2024-05-15 11:36:38.449453] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.851 EAL: No free 2048 kB hugepages reported on node 1 00:13:07.851 [2024-05-15 11:36:38.532048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:13:07.851 [2024-05-15 11:36:38.553837] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.108 EAL: No free 2048 kB hugepages reported on node 1 00:13:08.108 [2024-05-15 11:36:38.635707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:08.109 [2024-05-15 11:36:38.659969] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.109 [2024-05-15 11:36:38.722106] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.109 [2024-05-15 11:36:38.747034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:13:08.109 [2024-05-15 11:36:38.802978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:13:08.109 Running I/O for 1 seconds... 00:13:08.109 Running I/O for 1 seconds... 00:13:08.367 Running I/O for 1 seconds... 00:13:08.367 Running I/O for 1 seconds... 00:13:09.304 00:13:09.304 Latency(us) 00:13:09.304 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:09.304 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:13:09.304 Nvme1n1 : 1.00 263305.60 1028.54 0.00 0.00 483.96 199.46 1880.60 00:13:09.304 =================================================================================================================== 00:13:09.304 Total : 263305.60 1028.54 0.00 0.00 483.96 199.46 1880.60 00:13:09.304 00:13:09.304 Latency(us) 00:13:09.304 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:09.304 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:13:09.304 Nvme1n1 : 1.01 17603.46 68.76 0.00 0.00 7248.15 4331.07 14019.01 00:13:09.304 =================================================================================================================== 00:13:09.304 Total : 17603.46 68.76 0.00 0.00 7248.15 4331.07 14019.01 00:13:09.304 00:13:09.304 Latency(us) 00:13:09.304 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:09.304 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:13:09.304 Nvme1n1 : 1.01 15063.95 58.84 0.00 0.00 8471.08 5185.89 18578.03 00:13:09.304 =================================================================================================================== 00:13:09.304 Total : 15063.95 58.84 0.00 0.00 8471.08 5185.89 18578.03 00:13:09.304 00:13:09.304 Latency(us) 00:13:09.304 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:09.304 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:13:09.304 Nvme1n1 : 1.00 18111.46 70.75 0.00 0.00 7052.52 3405.02 18464.06 00:13:09.304 =================================================================================================================== 00:13:09.304 Total : 18111.46 70.75 0.00 0.00 7052.52 3405.02 18464.06 00:13:09.561 11:36:40 -- target/bdev_io_wait.sh@38 -- # wait 2995733 00:13:09.561 
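Reading the four tables above: write (17.6K IOPS), read (15.1K) and unmap (18.1K) are in the range one would expect from a single 128-deep queue against this ConnectX-4 Lx setup, while flush reports ~263K IOPS because flushing a Malloc (RAM-backed) bdev has no media to touch and completes immediately. The MiB/s column is simply IOPS scaled by the 4 KiB I/O size, which makes a quick consistency check:

  # MiB/s = IOPS * io_size / 2^20; for the write job above:
  echo '17603.46 * 4096 / 1048576' | bc -l   # -> 68.76..., matching the table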
11:36:40 -- target/bdev_io_wait.sh@39 -- # wait 2995735 00:13:09.561 11:36:40 -- target/bdev_io_wait.sh@40 -- # wait 2995738 00:13:09.561 11:36:40 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:09.561 11:36:40 -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.561 11:36:40 -- common/autotest_common.sh@10 -- # set +x 00:13:09.561 11:36:40 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.561 11:36:40 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:13:09.561 11:36:40 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:13:09.561 11:36:40 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:09.561 11:36:40 -- nvmf/common.sh@117 -- # sync 00:13:09.561 11:36:40 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:09.561 11:36:40 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:09.561 11:36:40 -- nvmf/common.sh@120 -- # set +e 00:13:09.561 11:36:40 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:09.561 11:36:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:09.561 rmmod nvme_rdma 00:13:09.561 rmmod nvme_fabrics 00:13:09.820 11:36:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:09.820 11:36:40 -- nvmf/common.sh@124 -- # set -e 00:13:09.820 11:36:40 -- nvmf/common.sh@125 -- # return 0 00:13:09.820 11:36:40 -- nvmf/common.sh@478 -- # '[' -n 2995527 ']' 00:13:09.820 11:36:40 -- nvmf/common.sh@479 -- # killprocess 2995527 00:13:09.820 11:36:40 -- common/autotest_common.sh@946 -- # '[' -z 2995527 ']' 00:13:09.820 11:36:40 -- common/autotest_common.sh@950 -- # kill -0 2995527 00:13:09.820 11:36:40 -- common/autotest_common.sh@951 -- # uname 00:13:09.820 11:36:40 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:09.820 11:36:40 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2995527 00:13:09.820 11:36:40 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:09.820 11:36:40 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:09.820 11:36:40 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2995527' 00:13:09.820 killing process with pid 2995527 00:13:09.820 11:36:40 -- common/autotest_common.sh@965 -- # kill 2995527 00:13:09.820 [2024-05-15 11:36:40.399634] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:09.820 11:36:40 -- common/autotest_common.sh@970 -- # wait 2995527 00:13:09.820 [2024-05-15 11:36:40.481126] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:13:10.079 11:36:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:10.079 11:36:40 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:13:10.079 00:13:10.079 real 0m9.718s 00:13:10.079 user 0m21.427s 00:13:10.079 sys 0m5.911s 00:13:10.079 11:36:40 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:10.079 11:36:40 -- common/autotest_common.sh@10 -- # set +x 00:13:10.079 ************************************ 00:13:10.079 END TEST nvmf_bdev_io_wait 00:13:10.079 ************************************ 00:13:10.079 11:36:40 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:13:10.079 11:36:40 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:10.079 11:36:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:10.079 11:36:40 -- common/autotest_common.sh@10 -- # 
set +x 00:13:10.079 ************************************ 00:13:10.079 START TEST nvmf_queue_depth 00:13:10.079 ************************************ 00:13:10.079 11:36:40 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:13:10.338 * Looking for test storage... 00:13:10.338 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:10.338 11:36:40 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:10.338 11:36:40 -- nvmf/common.sh@7 -- # uname -s 00:13:10.338 11:36:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:10.338 11:36:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:10.338 11:36:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:10.338 11:36:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:10.338 11:36:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:10.338 11:36:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:10.338 11:36:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:10.338 11:36:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:10.338 11:36:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:10.338 11:36:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:10.338 11:36:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:13:10.338 11:36:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:13:10.338 11:36:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:10.338 11:36:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:10.338 11:36:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:10.339 11:36:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:10.339 11:36:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:10.339 11:36:40 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:10.339 11:36:40 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:10.339 11:36:40 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:10.339 11:36:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.339 11:36:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.339 11:36:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.339 11:36:40 -- paths/export.sh@5 -- # export PATH 00:13:10.339 11:36:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.339 11:36:40 -- nvmf/common.sh@47 -- # : 0 00:13:10.339 11:36:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:10.339 11:36:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:10.339 11:36:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:10.339 11:36:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:10.339 11:36:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:10.339 11:36:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:10.339 11:36:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:10.339 11:36:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:10.339 11:36:40 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:13:10.339 11:36:40 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:13:10.339 11:36:40 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:10.339 11:36:40 -- target/queue_depth.sh@19 -- # nvmftestinit 00:13:10.339 11:36:40 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:13:10.339 11:36:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:10.339 11:36:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:10.339 11:36:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:10.339 11:36:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:10.339 11:36:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:10.339 11:36:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:10.339 11:36:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.339 11:36:40 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:10.339 11:36:40 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:10.339 11:36:40 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:10.339 11:36:40 -- common/autotest_common.sh@10 -- # set +x 00:13:16.908 11:36:46 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:16.908 11:36:46 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:16.908 11:36:46 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:16.908 11:36:46 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:16.908 11:36:46 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:16.908 11:36:46 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:16.908 11:36:46 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:16.908 11:36:46 -- nvmf/common.sh@295 -- # net_devs=() 
00:13:16.908 11:36:46 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:16.908 11:36:46 -- nvmf/common.sh@296 -- # e810=() 00:13:16.908 11:36:46 -- nvmf/common.sh@296 -- # local -ga e810 00:13:16.908 11:36:46 -- nvmf/common.sh@297 -- # x722=() 00:13:16.908 11:36:46 -- nvmf/common.sh@297 -- # local -ga x722 00:13:16.908 11:36:46 -- nvmf/common.sh@298 -- # mlx=() 00:13:16.908 11:36:46 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:16.908 11:36:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:16.908 11:36:46 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:16.908 11:36:46 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:16.908 11:36:46 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:16.908 11:36:46 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:16.908 11:36:46 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:16.908 11:36:46 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:16.908 11:36:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:16.908 11:36:46 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:16.908 11:36:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:16.908 11:36:46 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:16.908 11:36:46 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:16.908 11:36:46 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:16.908 11:36:46 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:16.909 11:36:46 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:16.909 11:36:46 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:16.909 11:36:46 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:16.909 11:36:46 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:16.909 11:36:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:16.909 11:36:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:13:16.909 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:13:16.909 11:36:46 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:16.909 11:36:46 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:16.909 11:36:46 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:16.909 11:36:46 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:16.909 11:36:46 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:16.909 11:36:46 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:16.909 11:36:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:16.909 11:36:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:13:16.909 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:13:16.909 11:36:46 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:16.909 11:36:46 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:16.909 11:36:46 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:16.909 11:36:46 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:16.909 11:36:46 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:16.909 11:36:46 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:16.909 11:36:46 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:16.909 11:36:46 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:16.909 11:36:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:16.909 11:36:46 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.909 11:36:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:16.909 11:36:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.909 11:36:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:13:16.909 Found net devices under 0000:18:00.0: mlx_0_0 00:13:16.909 11:36:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.909 11:36:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:16.909 11:36:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.909 11:36:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:16.909 11:36:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.909 11:36:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:13:16.909 Found net devices under 0000:18:00.1: mlx_0_1 00:13:16.909 11:36:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.909 11:36:46 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:16.909 11:36:46 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:16.909 11:36:46 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:16.909 11:36:46 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:13:16.909 11:36:46 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:13:16.909 11:36:46 -- nvmf/common.sh@409 -- # rdma_device_init 00:13:16.909 11:36:46 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:13:16.909 11:36:46 -- nvmf/common.sh@58 -- # uname 00:13:16.909 11:36:46 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:16.909 11:36:46 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:16.909 11:36:46 -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:16.909 11:36:46 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:16.909 11:36:46 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:16.909 11:36:46 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:16.909 11:36:46 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:16.909 11:36:46 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:16.909 11:36:46 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:13:16.909 11:36:46 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:16.909 11:36:46 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:16.909 11:36:46 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:16.909 11:36:46 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:16.909 11:36:46 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:16.909 11:36:46 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:16.909 11:36:46 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:16.909 11:36:46 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:16.909 11:36:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:16.909 11:36:46 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:16.909 11:36:46 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:16.909 11:36:46 -- nvmf/common.sh@105 -- # continue 2 00:13:16.909 11:36:46 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:16.909 11:36:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:16.909 11:36:46 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:16.909 11:36:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:16.909 11:36:46 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:16.909 11:36:46 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:16.909 11:36:46 -- 
nvmf/common.sh@105 -- # continue 2 00:13:16.909 11:36:46 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:16.909 11:36:46 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:16.909 11:36:46 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:16.909 11:36:46 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:16.909 11:36:46 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:16.909 11:36:46 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:16.909 11:36:46 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:16.909 11:36:46 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:16.909 11:36:46 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:16.909 28: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:16.909 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:13:16.909 altname enp24s0f0np0 00:13:16.909 altname ens785f0np0 00:13:16.909 inet 192.168.100.8/24 scope global mlx_0_0 00:13:16.909 valid_lft forever preferred_lft forever 00:13:16.909 11:36:46 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:16.909 11:36:46 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:16.909 11:36:46 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:16.909 11:36:46 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:16.909 11:36:46 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:16.909 11:36:46 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:16.909 11:36:46 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:16.909 11:36:46 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:16.909 11:36:46 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:16.909 29: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:16.909 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:13:16.909 altname enp24s0f1np1 00:13:16.909 altname ens785f1np1 00:13:16.909 inet 192.168.100.9/24 scope global mlx_0_1 00:13:16.909 valid_lft forever preferred_lft forever 00:13:16.909 11:36:46 -- nvmf/common.sh@411 -- # return 0 00:13:16.909 11:36:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:16.909 11:36:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:16.909 11:36:46 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:13:16.909 11:36:46 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:13:16.909 11:36:46 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:16.909 11:36:46 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:16.909 11:36:46 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:16.909 11:36:46 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:16.909 11:36:46 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:16.909 11:36:46 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:16.909 11:36:46 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:16.909 11:36:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:16.909 11:36:46 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:16.909 11:36:46 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:16.909 11:36:46 -- nvmf/common.sh@105 -- # continue 2 00:13:16.909 11:36:46 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:16.909 11:36:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:16.909 11:36:46 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:16.909 11:36:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:16.909 11:36:46 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
00:13:16.909 11:36:46 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:16.909 11:36:46 -- nvmf/common.sh@105 -- # continue 2 00:13:16.909 11:36:46 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:16.909 11:36:46 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:16.909 11:36:46 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:16.909 11:36:46 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:16.909 11:36:46 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:16.909 11:36:46 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:16.909 11:36:46 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:16.909 11:36:46 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:16.909 11:36:46 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:16.909 11:36:46 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:16.909 11:36:46 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:16.909 11:36:46 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:16.909 11:36:46 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:13:16.909 192.168.100.9' 00:13:16.909 11:36:46 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:13:16.909 192.168.100.9' 00:13:16.909 11:36:46 -- nvmf/common.sh@446 -- # head -n 1 00:13:16.909 11:36:46 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:16.909 11:36:46 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:13:16.909 192.168.100.9' 00:13:16.909 11:36:46 -- nvmf/common.sh@447 -- # tail -n +2 00:13:16.909 11:36:46 -- nvmf/common.sh@447 -- # head -n 1 00:13:16.909 11:36:46 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:16.909 11:36:46 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:13:16.909 11:36:46 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:16.909 11:36:46 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:13:16.909 11:36:46 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:13:16.909 11:36:46 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:13:16.909 11:36:46 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:13:16.909 11:36:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:16.909 11:36:46 -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:16.909 11:36:46 -- common/autotest_common.sh@10 -- # set +x 00:13:16.910 11:36:46 -- nvmf/common.sh@470 -- # nvmfpid=2998846 00:13:16.910 11:36:46 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:16.910 11:36:46 -- nvmf/common.sh@471 -- # waitforlisten 2998846 00:13:16.910 11:36:46 -- common/autotest_common.sh@827 -- # '[' -z 2998846 ']' 00:13:16.910 11:36:46 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.910 11:36:46 -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:16.910 11:36:46 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.910 11:36:46 -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:16.910 11:36:46 -- common/autotest_common.sh@10 -- # set +x 00:13:16.910 [2024-05-15 11:36:46.854323] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
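Note the core mask: this target runs with -m 0x2 (a single reactor, started on core 1 below), where the bdev_io_wait target used -m 0xF (reactors on cores 0-3). Core masks are plain CPU bitmaps, so the queue_depth test funnels the whole target through one reactor before throwing a 1024-deep workload at it later in the run; a sketch of the contrast:

  build/bin/nvmf_tgt -m 0x2 ...   # 0b0010: core 1 only, as in this test
  build/bin/nvmf_tgt -m 0xF ...   # 0b1111: cores 0-3, as in the previous test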
00:13:16.910 [2024-05-15 11:36:46.854387] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:16.910 EAL: No free 2048 kB hugepages reported on node 1 00:13:16.910 [2024-05-15 11:36:46.925359] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.910 [2024-05-15 11:36:47.012677] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:16.910 [2024-05-15 11:36:47.012729] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:16.910 [2024-05-15 11:36:47.012739] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:16.910 [2024-05-15 11:36:47.012747] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:16.910 [2024-05-15 11:36:47.012756] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:16.910 [2024-05-15 11:36:47.012783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.910 11:36:47 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:16.910 11:36:47 -- common/autotest_common.sh@860 -- # return 0 00:13:16.910 11:36:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:16.910 11:36:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:16.910 11:36:47 -- common/autotest_common.sh@10 -- # set +x 00:13:17.170 11:36:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:17.170 11:36:47 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:17.170 11:36:47 -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.170 11:36:47 -- common/autotest_common.sh@10 -- # set +x 00:13:17.170 [2024-05-15 11:36:47.728220] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x16b40b0/0x16b85a0) succeed. 00:13:17.170 [2024-05-15 11:36:47.737397] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x16b55b0/0x16f9c30) succeed. 
00:13:17.170 11:36:47 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.170 11:36:47 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:17.170 11:36:47 -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.170 11:36:47 -- common/autotest_common.sh@10 -- # set +x 00:13:17.170 Malloc0 00:13:17.170 11:36:47 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.170 11:36:47 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:17.170 11:36:47 -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.170 11:36:47 -- common/autotest_common.sh@10 -- # set +x 00:13:17.170 11:36:47 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.170 11:36:47 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:17.170 11:36:47 -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.170 11:36:47 -- common/autotest_common.sh@10 -- # set +x 00:13:17.170 11:36:47 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.170 11:36:47 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:17.170 11:36:47 -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.170 11:36:47 -- common/autotest_common.sh@10 -- # set +x 00:13:17.170 [2024-05-15 11:36:47.822045] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:17.170 [2024-05-15 11:36:47.822420] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:17.170 11:36:47 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.170 11:36:47 -- target/queue_depth.sh@30 -- # bdevperf_pid=2999044 00:13:17.170 11:36:47 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:13:17.170 11:36:47 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:17.170 11:36:47 -- target/queue_depth.sh@33 -- # waitforlisten 2999044 /var/tmp/bdevperf.sock 00:13:17.170 11:36:47 -- common/autotest_common.sh@827 -- # '[' -z 2999044 ']' 00:13:17.170 11:36:47 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:17.170 11:36:47 -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:17.170 11:36:47 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:17.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:17.170 11:36:47 -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:17.170 11:36:47 -- common/autotest_common.sh@10 -- # set +x 00:13:17.170 [2024-05-15 11:36:47.872146] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
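The measurement itself comes from bdevperf, started above in wait mode (-z) at queue depth 1024, attached to the target, and then kicked via its perform_tests helper; the results table follows below. In sketch, with the flags and paths as traced:

  spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  # -z: start idle and wait for RPC; -q 1024 -o 4096 -w verify -t 10: the traced workload.
  $spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # perform_tests launches the configured 10 s verify run and prints the latency table.
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests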
00:13:17.170 [2024-05-15 11:36:47.872211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2999044 ] 00:13:17.170 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.429 [2024-05-15 11:36:47.944349] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.429 [2024-05-15 11:36:48.041204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.998 11:36:48 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:17.998 11:36:48 -- common/autotest_common.sh@860 -- # return 0 00:13:17.998 11:36:48 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:17.998 11:36:48 -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.998 11:36:48 -- common/autotest_common.sh@10 -- # set +x 00:13:18.257 NVMe0n1 00:13:18.257 11:36:48 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.257 11:36:48 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:18.257 Running I/O for 10 seconds... 00:13:28.238 00:13:28.238 Latency(us) 00:13:28.238 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:28.238 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:13:28.238 Verification LBA range: start 0x0 length 0x4000 00:13:28.238 NVMe0n1 : 10.04 17810.63 69.57 0.00 0.00 57324.65 14930.81 39663.53 00:13:28.238 =================================================================================================================== 00:13:28.238 Total : 17810.63 69.57 0.00 0.00 57324.65 14930.81 39663.53 00:13:28.238 0 00:13:28.238 11:36:58 -- target/queue_depth.sh@39 -- # killprocess 2999044 00:13:28.238 11:36:58 -- common/autotest_common.sh@946 -- # '[' -z 2999044 ']' 00:13:28.238 11:36:58 -- common/autotest_common.sh@950 -- # kill -0 2999044 00:13:28.238 11:36:58 -- common/autotest_common.sh@951 -- # uname 00:13:28.238 11:36:58 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:28.238 11:36:58 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2999044 00:13:28.497 11:36:59 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:28.497 11:36:59 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:28.497 11:36:59 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2999044' 00:13:28.497 killing process with pid 2999044 00:13:28.497 11:36:59 -- common/autotest_common.sh@965 -- # kill 2999044 00:13:28.497 Received shutdown signal, test time was about 10.000000 seconds 00:13:28.497 00:13:28.497 Latency(us) 00:13:28.497 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:28.497 =================================================================================================================== 00:13:28.497 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:28.497 11:36:59 -- common/autotest_common.sh@970 -- # wait 2999044 00:13:28.497 11:36:59 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:28.497 11:36:59 -- target/queue_depth.sh@43 -- # nvmftestfini 00:13:28.497 11:36:59 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:28.497 11:36:59 -- nvmf/common.sh@117 -- # sync 00:13:28.497 11:36:59 -- nvmf/common.sh@119 -- # '[' rdma == tcp 
']' 00:13:28.497 11:36:59 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:28.497 11:36:59 -- nvmf/common.sh@120 -- # set +e 00:13:28.497 11:36:59 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:28.497 11:36:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:28.756 rmmod nvme_rdma 00:13:28.756 rmmod nvme_fabrics 00:13:28.756 11:36:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:28.756 11:36:59 -- nvmf/common.sh@124 -- # set -e 00:13:28.756 11:36:59 -- nvmf/common.sh@125 -- # return 0 00:13:28.756 11:36:59 -- nvmf/common.sh@478 -- # '[' -n 2998846 ']' 00:13:28.756 11:36:59 -- nvmf/common.sh@479 -- # killprocess 2998846 00:13:28.756 11:36:59 -- common/autotest_common.sh@946 -- # '[' -z 2998846 ']' 00:13:28.756 11:36:59 -- common/autotest_common.sh@950 -- # kill -0 2998846 00:13:28.756 11:36:59 -- common/autotest_common.sh@951 -- # uname 00:13:28.756 11:36:59 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:28.756 11:36:59 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2998846 00:13:28.756 11:36:59 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:28.756 11:36:59 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:28.757 11:36:59 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2998846' 00:13:28.757 killing process with pid 2998846 00:13:28.757 11:36:59 -- common/autotest_common.sh@965 -- # kill 2998846 00:13:28.757 [2024-05-15 11:36:59.355167] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:28.757 11:36:59 -- common/autotest_common.sh@970 -- # wait 2998846 00:13:28.757 [2024-05-15 11:36:59.399150] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:13:29.016 11:36:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:29.016 11:36:59 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:13:29.016 00:13:29.016 real 0m18.880s 00:13:29.016 user 0m26.132s 00:13:29.016 sys 0m5.251s 00:13:29.016 11:36:59 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:29.016 11:36:59 -- common/autotest_common.sh@10 -- # set +x 00:13:29.016 ************************************ 00:13:29.016 END TEST nvmf_queue_depth 00:13:29.016 ************************************ 00:13:29.016 11:36:59 -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:13:29.016 11:36:59 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:29.016 11:36:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:29.016 11:36:59 -- common/autotest_common.sh@10 -- # set +x 00:13:29.016 ************************************ 00:13:29.016 START TEST nvmf_target_multipath 00:13:29.016 ************************************ 00:13:29.016 11:36:59 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:13:29.276 * Looking for test storage... 
00:13:29.276 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:29.276 11:36:59 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:29.276 11:36:59 -- nvmf/common.sh@7 -- # uname -s 00:13:29.276 11:36:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:29.276 11:36:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:29.276 11:36:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:29.276 11:36:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:29.276 11:36:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:29.276 11:36:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:29.276 11:36:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:29.276 11:36:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:29.276 11:36:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:29.276 11:36:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:29.276 11:36:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:13:29.276 11:36:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:13:29.276 11:36:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:29.276 11:36:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:29.276 11:36:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:29.276 11:36:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:29.276 11:36:59 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:29.276 11:36:59 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:29.276 11:36:59 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:29.276 11:36:59 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:29.276 11:36:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.276 11:36:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.276 11:36:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.276 11:36:59 -- paths/export.sh@5 -- # export PATH 00:13:29.276 11:36:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.276 11:36:59 -- nvmf/common.sh@47 -- # : 0 00:13:29.276 11:36:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:29.276 11:36:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:29.276 11:36:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:29.276 11:36:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:29.276 11:36:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:29.276 11:36:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:29.276 11:36:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:29.276 11:36:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:29.276 11:36:59 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:29.276 11:36:59 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:29.276 11:36:59 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:13:29.276 11:36:59 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:29.276 11:36:59 -- target/multipath.sh@43 -- # nvmftestinit 00:13:29.276 11:36:59 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:13:29.276 11:36:59 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:29.276 11:36:59 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:29.276 11:36:59 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:29.276 11:36:59 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:29.276 11:36:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.276 11:36:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:29.276 11:36:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.276 11:36:59 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:29.276 11:36:59 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:29.276 11:36:59 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:29.276 11:36:59 -- common/autotest_common.sh@10 -- # set +x 00:13:35.848 11:37:05 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:35.848 11:37:05 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:35.848 11:37:05 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:35.848 11:37:05 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:35.848 11:37:05 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:35.848 11:37:05 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:35.848 11:37:05 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:13:35.848 11:37:05 -- nvmf/common.sh@295 -- # net_devs=() 00:13:35.848 11:37:05 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:35.848 11:37:05 -- nvmf/common.sh@296 -- # e810=() 00:13:35.848 11:37:05 -- nvmf/common.sh@296 -- # local -ga e810 00:13:35.848 11:37:05 -- nvmf/common.sh@297 -- # x722=() 00:13:35.848 11:37:05 -- nvmf/common.sh@297 -- # local -ga x722 00:13:35.848 11:37:05 -- nvmf/common.sh@298 -- # mlx=() 00:13:35.848 11:37:05 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:35.848 11:37:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:35.848 11:37:05 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:35.848 11:37:05 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:35.848 11:37:05 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:35.848 11:37:05 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:35.848 11:37:05 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:35.848 11:37:05 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:35.848 11:37:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:35.848 11:37:05 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:35.848 11:37:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:35.848 11:37:05 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:35.848 11:37:05 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:35.848 11:37:05 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:35.848 11:37:05 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:35.848 11:37:05 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:35.848 11:37:05 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:35.848 11:37:05 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:35.848 11:37:05 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:35.848 11:37:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:35.848 11:37:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:13:35.848 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:13:35.848 11:37:05 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:35.848 11:37:05 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:35.848 11:37:05 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:35.848 11:37:05 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:35.848 11:37:05 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:35.848 11:37:05 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:35.848 11:37:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:35.848 11:37:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:13:35.848 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:13:35.849 11:37:05 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:35.849 11:37:05 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:35.849 11:37:05 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:35.849 11:37:05 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:35.849 11:37:05 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:35.849 11:37:05 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:35.849 11:37:05 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:35.849 11:37:05 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:35.849 11:37:05 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:35.849 11:37:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:35.849 11:37:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:35.849 11:37:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:35.849 11:37:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:13:35.849 Found net devices under 0000:18:00.0: mlx_0_0 00:13:35.849 11:37:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:35.849 11:37:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:35.849 11:37:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:35.849 11:37:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:35.849 11:37:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:35.849 11:37:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:13:35.849 Found net devices under 0000:18:00.1: mlx_0_1 00:13:35.849 11:37:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:35.849 11:37:05 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:35.849 11:37:05 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:35.849 11:37:05 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:35.849 11:37:05 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:13:35.849 11:37:05 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:13:35.849 11:37:05 -- nvmf/common.sh@409 -- # rdma_device_init 00:13:35.849 11:37:05 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:13:35.849 11:37:05 -- nvmf/common.sh@58 -- # uname 00:13:35.849 11:37:05 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:35.849 11:37:05 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:35.849 11:37:05 -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:35.849 11:37:05 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:35.849 11:37:05 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:35.849 11:37:05 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:35.849 11:37:05 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:35.849 11:37:05 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:35.849 11:37:05 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:13:35.849 11:37:05 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:35.849 11:37:05 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:35.849 11:37:05 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:35.849 11:37:05 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:35.849 11:37:05 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:35.849 11:37:05 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:35.849 11:37:05 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:35.849 11:37:05 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:35.849 11:37:05 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:35.849 11:37:05 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:35.849 11:37:05 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:35.849 11:37:05 -- nvmf/common.sh@105 -- # continue 2 00:13:35.849 11:37:05 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:35.849 11:37:05 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:35.849 11:37:05 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:35.849 11:37:05 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:35.849 11:37:05 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\1 ]] 00:13:35.849 11:37:05 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:35.849 11:37:05 -- nvmf/common.sh@105 -- # continue 2 00:13:35.849 11:37:05 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:35.849 11:37:05 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:35.849 11:37:05 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:35.849 11:37:05 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:35.849 11:37:05 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:35.849 11:37:05 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:35.849 11:37:05 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:35.849 11:37:05 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:35.849 11:37:05 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:35.849 28: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:35.849 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:13:35.849 altname enp24s0f0np0 00:13:35.849 altname ens785f0np0 00:13:35.849 inet 192.168.100.8/24 scope global mlx_0_0 00:13:35.849 valid_lft forever preferred_lft forever 00:13:35.849 11:37:05 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:35.849 11:37:05 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:35.849 11:37:05 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:35.849 11:37:05 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:35.849 11:37:05 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:35.849 11:37:05 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:35.849 11:37:05 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:35.849 11:37:05 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:35.849 11:37:05 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:35.849 29: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:35.849 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:13:35.849 altname enp24s0f1np1 00:13:35.849 altname ens785f1np1 00:13:35.849 inet 192.168.100.9/24 scope global mlx_0_1 00:13:35.849 valid_lft forever preferred_lft forever 00:13:35.849 11:37:05 -- nvmf/common.sh@411 -- # return 0 00:13:35.849 11:37:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:35.849 11:37:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:35.849 11:37:05 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:13:35.849 11:37:05 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:13:35.849 11:37:05 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:35.849 11:37:05 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:35.849 11:37:05 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:35.849 11:37:05 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:35.849 11:37:05 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:35.849 11:37:05 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:35.849 11:37:05 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:35.849 11:37:05 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:35.849 11:37:05 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:35.849 11:37:05 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:35.849 11:37:05 -- nvmf/common.sh@105 -- # continue 2 00:13:35.849 11:37:05 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:35.849 11:37:05 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:35.849 11:37:05 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:35.849 11:37:05 -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:13:35.849 11:37:05 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:35.849 11:37:05 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:35.849 11:37:05 -- nvmf/common.sh@105 -- # continue 2 00:13:35.849 11:37:05 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:35.849 11:37:05 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:35.849 11:37:05 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:35.849 11:37:05 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:35.849 11:37:05 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:35.849 11:37:05 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:35.849 11:37:05 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:35.849 11:37:05 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:35.849 11:37:05 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:35.849 11:37:05 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:35.849 11:37:05 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:35.849 11:37:05 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:35.849 11:37:05 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:13:35.849 192.168.100.9' 00:13:35.849 11:37:05 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:13:35.849 192.168.100.9' 00:13:35.849 11:37:05 -- nvmf/common.sh@446 -- # head -n 1 00:13:35.849 11:37:05 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:35.849 11:37:05 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:13:35.849 192.168.100.9' 00:13:35.849 11:37:05 -- nvmf/common.sh@447 -- # tail -n +2 00:13:35.849 11:37:05 -- nvmf/common.sh@447 -- # head -n 1 00:13:35.849 11:37:05 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:35.849 11:37:05 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:13:35.849 11:37:05 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:35.849 11:37:05 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:13:35.849 11:37:05 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:13:35.849 11:37:05 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:13:35.849 11:37:05 -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:13:35.849 11:37:05 -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:13:35.849 11:37:05 -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:13:35.849 run this test only with TCP transport for now 00:13:35.849 11:37:05 -- target/multipath.sh@53 -- # nvmftestfini 00:13:35.849 11:37:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:35.849 11:37:05 -- nvmf/common.sh@117 -- # sync 00:13:35.849 11:37:05 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:35.849 11:37:05 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:35.849 11:37:05 -- nvmf/common.sh@120 -- # set +e 00:13:35.849 11:37:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:35.849 11:37:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:35.849 rmmod nvme_rdma 00:13:35.849 rmmod nvme_fabrics 00:13:35.849 11:37:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:35.849 11:37:05 -- nvmf/common.sh@124 -- # set -e 00:13:35.849 11:37:05 -- nvmf/common.sh@125 -- # return 0 00:13:35.849 11:37:05 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:13:35.849 11:37:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:35.849 11:37:05 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:13:35.849 11:37:05 -- target/multipath.sh@54 -- # exit 0 00:13:35.849 11:37:05 -- target/multipath.sh@1 -- # nvmftestfini 00:13:35.849 11:37:05 -- 
nvmf/common.sh@477 -- # nvmfcleanup 00:13:35.849 11:37:05 -- nvmf/common.sh@117 -- # sync 00:13:35.849 11:37:05 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:35.849 11:37:05 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:35.849 11:37:05 -- nvmf/common.sh@120 -- # set +e 00:13:35.849 11:37:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:35.850 11:37:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:35.850 11:37:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:35.850 11:37:05 -- nvmf/common.sh@124 -- # set -e 00:13:35.850 11:37:05 -- nvmf/common.sh@125 -- # return 0 00:13:35.850 11:37:05 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:13:35.850 11:37:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:35.850 11:37:05 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:13:35.850 00:13:35.850 real 0m6.242s 00:13:35.850 user 0m1.730s 00:13:35.850 sys 0m4.691s 00:13:35.850 11:37:05 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:35.850 11:37:05 -- common/autotest_common.sh@10 -- # set +x 00:13:35.850 ************************************ 00:13:35.850 END TEST nvmf_target_multipath 00:13:35.850 ************************************ 00:13:35.850 11:37:06 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:13:35.850 11:37:06 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:35.850 11:37:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:35.850 11:37:06 -- common/autotest_common.sh@10 -- # set +x 00:13:35.850 ************************************ 00:13:35.850 START TEST nvmf_zcopy 00:13:35.850 ************************************ 00:13:35.850 11:37:06 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:13:35.850 * Looking for test storage... 
00:13:35.850 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:35.850 11:37:06 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:35.850 11:37:06 -- nvmf/common.sh@7 -- # uname -s 00:13:35.850 11:37:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:35.850 11:37:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:35.850 11:37:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:35.850 11:37:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:35.850 11:37:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:35.850 11:37:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:35.850 11:37:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:35.850 11:37:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:35.850 11:37:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:35.850 11:37:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:35.850 11:37:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:13:35.850 11:37:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:13:35.850 11:37:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:35.850 11:37:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:35.850 11:37:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:35.850 11:37:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:35.850 11:37:06 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:35.850 11:37:06 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.850 11:37:06 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:35.850 11:37:06 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.850 11:37:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.850 11:37:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.850 11:37:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.850 11:37:06 -- paths/export.sh@5 -- # export PATH 00:13:35.850 11:37:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.850 11:37:06 -- nvmf/common.sh@47 -- # : 0 00:13:35.850 11:37:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:35.850 11:37:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:35.850 11:37:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:35.850 11:37:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:35.850 11:37:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:35.850 11:37:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:35.850 11:37:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:35.850 11:37:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:35.850 11:37:06 -- target/zcopy.sh@12 -- # nvmftestinit 00:13:35.850 11:37:06 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:13:35.850 11:37:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:35.850 11:37:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:35.850 11:37:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:35.850 11:37:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:35.850 11:37:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.850 11:37:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:35.850 11:37:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.850 11:37:06 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:35.850 11:37:06 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:35.850 11:37:06 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:35.850 11:37:06 -- common/autotest_common.sh@10 -- # set +x 00:13:42.424 11:37:12 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:42.424 11:37:12 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:42.424 11:37:12 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:42.424 11:37:12 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:42.424 11:37:12 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:42.424 11:37:12 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:42.424 11:37:12 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:42.424 11:37:12 -- nvmf/common.sh@295 -- # net_devs=() 00:13:42.424 11:37:12 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:42.424 11:37:12 -- nvmf/common.sh@296 -- # e810=() 00:13:42.424 11:37:12 -- nvmf/common.sh@296 -- # local -ga e810 00:13:42.424 11:37:12 -- nvmf/common.sh@297 -- # x722=() 
00:13:42.424 11:37:12 -- nvmf/common.sh@297 -- # local -ga x722 00:13:42.424 11:37:12 -- nvmf/common.sh@298 -- # mlx=() 00:13:42.424 11:37:12 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:42.424 11:37:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:42.424 11:37:12 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:42.424 11:37:12 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:42.424 11:37:12 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:42.424 11:37:12 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:42.424 11:37:12 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:42.424 11:37:12 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:42.424 11:37:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:42.424 11:37:12 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:42.424 11:37:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:42.424 11:37:12 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:42.424 11:37:12 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:42.424 11:37:12 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:42.424 11:37:12 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:42.424 11:37:12 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:42.424 11:37:12 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:42.424 11:37:12 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:42.424 11:37:12 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:42.424 11:37:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:42.424 11:37:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:13:42.424 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:13:42.424 11:37:12 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:42.424 11:37:12 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:42.424 11:37:12 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:42.424 11:37:12 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:42.424 11:37:12 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:42.424 11:37:12 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:42.424 11:37:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:42.424 11:37:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:13:42.424 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:13:42.424 11:37:12 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:42.424 11:37:12 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:42.424 11:37:12 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:42.424 11:37:12 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:42.424 11:37:12 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:42.424 11:37:12 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:42.424 11:37:12 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:42.424 11:37:12 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:42.424 11:37:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:42.424 11:37:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:42.424 11:37:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:42.424 11:37:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:42.424 11:37:12 -- nvmf/common.sh@389 -- # 
echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:13:42.424 Found net devices under 0000:18:00.0: mlx_0_0 00:13:42.424 11:37:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:42.424 11:37:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:42.424 11:37:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:42.424 11:37:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:42.424 11:37:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:42.424 11:37:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:13:42.424 Found net devices under 0000:18:00.1: mlx_0_1 00:13:42.424 11:37:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:42.424 11:37:12 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:42.424 11:37:12 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:42.424 11:37:12 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:42.424 11:37:12 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:13:42.424 11:37:12 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:13:42.424 11:37:12 -- nvmf/common.sh@409 -- # rdma_device_init 00:13:42.424 11:37:12 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:13:42.424 11:37:12 -- nvmf/common.sh@58 -- # uname 00:13:42.424 11:37:12 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:42.424 11:37:12 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:42.424 11:37:12 -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:42.424 11:37:12 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:42.424 11:37:12 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:42.424 11:37:12 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:42.424 11:37:12 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:42.424 11:37:12 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:42.424 11:37:12 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:13:42.424 11:37:12 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:42.424 11:37:12 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:42.425 11:37:12 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:42.425 11:37:12 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:42.425 11:37:12 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:42.425 11:37:12 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:42.425 11:37:12 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:42.425 11:37:12 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:42.425 11:37:12 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:42.425 11:37:12 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:42.425 11:37:12 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:42.425 11:37:12 -- nvmf/common.sh@105 -- # continue 2 00:13:42.425 11:37:12 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:42.425 11:37:12 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:42.425 11:37:12 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:42.425 11:37:12 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:42.425 11:37:12 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:42.425 11:37:12 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:42.425 11:37:12 -- nvmf/common.sh@105 -- # continue 2 00:13:42.425 11:37:12 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:42.425 11:37:12 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:42.425 11:37:12 -- nvmf/common.sh@112 -- # 
interface=mlx_0_0 00:13:42.425 11:37:12 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:42.425 11:37:12 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:42.425 11:37:12 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:42.425 11:37:12 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:42.425 11:37:12 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:42.425 11:37:12 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:42.425 28: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:42.425 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:13:42.425 altname enp24s0f0np0 00:13:42.425 altname ens785f0np0 00:13:42.425 inet 192.168.100.8/24 scope global mlx_0_0 00:13:42.425 valid_lft forever preferred_lft forever 00:13:42.425 11:37:12 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:42.425 11:37:12 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:42.425 11:37:12 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:42.425 11:37:12 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:42.425 11:37:12 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:42.425 11:37:12 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:42.425 11:37:12 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:42.425 11:37:12 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:42.425 11:37:12 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:42.425 29: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:42.425 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:13:42.425 altname enp24s0f1np1 00:13:42.425 altname ens785f1np1 00:13:42.425 inet 192.168.100.9/24 scope global mlx_0_1 00:13:42.425 valid_lft forever preferred_lft forever 00:13:42.425 11:37:12 -- nvmf/common.sh@411 -- # return 0 00:13:42.425 11:37:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:42.425 11:37:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:42.425 11:37:12 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:13:42.425 11:37:12 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:13:42.425 11:37:12 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:42.425 11:37:12 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:42.425 11:37:12 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:42.425 11:37:12 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:42.425 11:37:12 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:42.425 11:37:12 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:42.425 11:37:12 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:42.425 11:37:12 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:42.425 11:37:12 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:42.425 11:37:12 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:42.425 11:37:12 -- nvmf/common.sh@105 -- # continue 2 00:13:42.425 11:37:12 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:42.425 11:37:12 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:42.425 11:37:12 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:42.425 11:37:12 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:42.425 11:37:12 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:42.425 11:37:12 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:42.425 11:37:12 -- nvmf/common.sh@105 -- # continue 2 00:13:42.425 11:37:12 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:42.425 11:37:12 -- 
nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:42.425 11:37:12 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:42.425 11:37:12 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:42.425 11:37:12 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:42.425 11:37:12 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:42.425 11:37:12 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:42.425 11:37:12 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:42.425 11:37:12 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:42.425 11:37:12 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:42.425 11:37:12 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:42.425 11:37:12 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:42.425 11:37:12 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:13:42.425 192.168.100.9' 00:13:42.425 11:37:12 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:13:42.425 192.168.100.9' 00:13:42.425 11:37:12 -- nvmf/common.sh@446 -- # head -n 1 00:13:42.425 11:37:12 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:42.425 11:37:12 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:13:42.425 192.168.100.9' 00:13:42.425 11:37:12 -- nvmf/common.sh@447 -- # tail -n +2 00:13:42.425 11:37:12 -- nvmf/common.sh@447 -- # head -n 1 00:13:42.425 11:37:12 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:42.425 11:37:12 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:13:42.425 11:37:12 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:42.425 11:37:12 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:13:42.425 11:37:12 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:13:42.425 11:37:12 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:13:42.425 11:37:12 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:13:42.425 11:37:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:42.425 11:37:12 -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:42.425 11:37:12 -- common/autotest_common.sh@10 -- # set +x 00:13:42.425 11:37:12 -- nvmf/common.sh@470 -- # nvmfpid=3006280 00:13:42.425 11:37:12 -- nvmf/common.sh@471 -- # waitforlisten 3006280 00:13:42.425 11:37:12 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:42.425 11:37:12 -- common/autotest_common.sh@827 -- # '[' -z 3006280 ']' 00:13:42.425 11:37:12 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.425 11:37:12 -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:42.425 11:37:12 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.425 11:37:12 -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:42.425 11:37:12 -- common/autotest_common.sh@10 -- # set +x 00:13:42.425 [2024-05-15 11:37:12.556294] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
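Each target test section re-runs nvmftestinit from scratch; on this phy setup that reduces to loading the IB/RDMA module stack (the load_ib_rdma_modules trace above), re-deriving the same two 192.168.100.x addresses, and finally loading nvme-rdma. The module sequence, condensed from the trace:

  # Order as traced in load_ib_rdma_modules (nvmf/common.sh@62-68).
  for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$m"
  done
  modprobe nvme-rdma   # common.sh@463, once the RDMA addresses check out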
00:13:42.425 [2024-05-15 11:37:12.556354] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.425 EAL: No free 2048 kB hugepages reported on node 1 00:13:42.425 [2024-05-15 11:37:12.629244] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.425 [2024-05-15 11:37:12.710307] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:42.425 [2024-05-15 11:37:12.710355] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:42.425 [2024-05-15 11:37:12.710364] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:42.425 [2024-05-15 11:37:12.710373] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:42.425 [2024-05-15 11:37:12.710380] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:42.425 [2024-05-15 11:37:12.710401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:42.684 11:37:13 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:42.684 11:37:13 -- common/autotest_common.sh@860 -- # return 0 00:13:42.684 11:37:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:42.684 11:37:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:42.684 11:37:13 -- common/autotest_common.sh@10 -- # set +x 00:13:42.684 11:37:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:42.684 11:37:13 -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:13:42.684 11:37:13 -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:13:42.684 Unsupported transport: rdma 00:13:42.684 11:37:13 -- target/zcopy.sh@17 -- # exit 0 00:13:42.684 11:37:13 -- target/zcopy.sh@1 -- # process_shm --id 0 00:13:42.684 11:37:13 -- common/autotest_common.sh@804 -- # type=--id 00:13:42.684 11:37:13 -- common/autotest_common.sh@805 -- # id=0 00:13:42.684 11:37:13 -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:13:42.684 11:37:13 -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:42.684 11:37:13 -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:13:42.684 11:37:13 -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:13:42.684 11:37:13 -- common/autotest_common.sh@816 -- # for n in $shm_files 00:13:42.684 11:37:13 -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:42.684 nvmf_trace.0 00:13:42.942 11:37:13 -- common/autotest_common.sh@819 -- # return 0 00:13:42.942 11:37:13 -- target/zcopy.sh@1 -- # nvmftestfini 00:13:42.942 11:37:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:42.942 11:37:13 -- nvmf/common.sh@117 -- # sync 00:13:42.942 11:37:13 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:42.942 11:37:13 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:42.942 11:37:13 -- nvmf/common.sh@120 -- # set +e 00:13:42.942 11:37:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:42.942 11:37:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:42.942 rmmod nvme_rdma 00:13:42.942 rmmod nvme_fabrics 00:13:42.942 11:37:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:42.942 11:37:13 -- nvmf/common.sh@124 -- # set -e 
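nvmftestfini's cleanup, traced just above for the zcopy run, syncs, suspends errexit, and retries unloading nvme-rdma and nvme-fabrics for up to 20 attempts before restoring set -e and returning. A sketch of that loop; the break-on-success structure is inferred, since this run unloads cleanly on the first pass:

  sync
  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-rdma &&
          modprobe -v -r nvme-fabrics && break   # assumption: loop exits on first clean unload
  done
  set -e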
00:13:42.942 11:37:13 -- nvmf/common.sh@125 -- # return 0 00:13:42.942 11:37:13 -- nvmf/common.sh@478 -- # '[' -n 3006280 ']' 00:13:42.942 11:37:13 -- nvmf/common.sh@479 -- # killprocess 3006280 00:13:42.942 11:37:13 -- common/autotest_common.sh@946 -- # '[' -z 3006280 ']' 00:13:42.942 11:37:13 -- common/autotest_common.sh@950 -- # kill -0 3006280 00:13:42.942 11:37:13 -- common/autotest_common.sh@951 -- # uname 00:13:42.942 11:37:13 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:42.942 11:37:13 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3006280 00:13:42.942 11:37:13 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:42.942 11:37:13 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:42.942 11:37:13 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3006280' 00:13:42.942 killing process with pid 3006280 00:13:42.942 11:37:13 -- common/autotest_common.sh@965 -- # kill 3006280 00:13:42.942 11:37:13 -- common/autotest_common.sh@970 -- # wait 3006280 00:13:43.202 11:37:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:43.202 11:37:13 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:13:43.202 00:13:43.202 real 0m7.655s 00:13:43.202 user 0m3.243s 00:13:43.202 sys 0m5.131s 00:13:43.202 11:37:13 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:43.202 11:37:13 -- common/autotest_common.sh@10 -- # set +x 00:13:43.202 ************************************ 00:13:43.202 END TEST nvmf_zcopy 00:13:43.202 ************************************ 00:13:43.202 11:37:13 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:13:43.202 11:37:13 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:43.202 11:37:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:43.202 11:37:13 -- common/autotest_common.sh@10 -- # set +x 00:13:43.202 ************************************ 00:13:43.202 START TEST nvmf_nmic 00:13:43.202 ************************************ 00:13:43.202 11:37:13 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:13:43.202 * Looking for test storage... 
00:13:43.202 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:43.202 11:37:13 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:43.202 11:37:13 -- nvmf/common.sh@7 -- # uname -s 00:13:43.202 11:37:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.202 11:37:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.202 11:37:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.202 11:37:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.202 11:37:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.202 11:37:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.202 11:37:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.202 11:37:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.202 11:37:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.202 11:37:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:43.202 11:37:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:13:43.202 11:37:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:13:43.202 11:37:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:43.202 11:37:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:43.202 11:37:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:43.202 11:37:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:43.202 11:37:13 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:43.202 11:37:13 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.202 11:37:13 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.202 11:37:13 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.202 11:37:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.202 11:37:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.202 11:37:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.202 11:37:13 -- paths/export.sh@5 -- # export PATH 00:13:43.202 11:37:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.202 11:37:13 -- nvmf/common.sh@47 -- # : 0 00:13:43.202 11:37:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:43.202 11:37:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:43.202 11:37:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:43.202 11:37:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:43.202 11:37:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:43.202 11:37:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:43.202 11:37:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:43.202 11:37:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:43.202 11:37:13 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:43.202 11:37:13 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:43.202 11:37:13 -- target/nmic.sh@14 -- # nvmftestinit 00:13:43.202 11:37:13 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:13:43.202 11:37:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:43.202 11:37:13 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:43.202 11:37:13 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:43.202 11:37:13 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:43.202 11:37:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.202 11:37:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:43.202 11:37:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.202 11:37:13 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:43.202 11:37:13 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:43.202 11:37:13 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:43.202 11:37:13 -- common/autotest_common.sh@10 -- # set +x 00:13:49.770 11:37:19 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:49.770 11:37:19 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:49.770 11:37:19 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:49.770 11:37:19 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:49.770 11:37:19 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:49.770 11:37:19 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:49.770 11:37:19 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:49.770 11:37:19 -- nvmf/common.sh@295 -- # net_devs=() 00:13:49.770 11:37:19 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:49.770 11:37:19 -- nvmf/common.sh@296 -- # 
e810=() 00:13:49.770 11:37:19 -- nvmf/common.sh@296 -- # local -ga e810 00:13:49.770 11:37:19 -- nvmf/common.sh@297 -- # x722=() 00:13:49.770 11:37:19 -- nvmf/common.sh@297 -- # local -ga x722 00:13:49.770 11:37:19 -- nvmf/common.sh@298 -- # mlx=() 00:13:49.770 11:37:19 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:49.770 11:37:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:49.770 11:37:19 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:49.770 11:37:19 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:49.770 11:37:19 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:49.770 11:37:19 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:49.770 11:37:19 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:49.770 11:37:19 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:49.770 11:37:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:49.770 11:37:19 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:49.770 11:37:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:49.770 11:37:19 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:49.770 11:37:19 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:49.770 11:37:19 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:49.770 11:37:19 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:49.770 11:37:19 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:49.770 11:37:19 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:49.770 11:37:19 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:49.770 11:37:19 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:49.770 11:37:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:49.770 11:37:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:13:49.770 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:13:49.770 11:37:19 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:49.770 11:37:19 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:49.770 11:37:19 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:49.770 11:37:19 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:49.770 11:37:19 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:49.770 11:37:19 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:49.770 11:37:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:49.770 11:37:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:13:49.770 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:13:49.770 11:37:19 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:49.770 11:37:19 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:49.770 11:37:19 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:49.770 11:37:19 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:49.770 11:37:19 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:49.770 11:37:19 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:49.770 11:37:19 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:49.770 11:37:19 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:49.770 11:37:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:49.770 11:37:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:49.770 11:37:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 
00:13:49.770 11:37:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:49.770 11:37:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:13:49.770 Found net devices under 0000:18:00.0: mlx_0_0 00:13:49.770 11:37:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:49.770 11:37:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:49.770 11:37:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:49.770 11:37:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:49.770 11:37:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:49.770 11:37:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:13:49.770 Found net devices under 0000:18:00.1: mlx_0_1 00:13:49.770 11:37:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:49.770 11:37:19 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:49.770 11:37:19 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:49.770 11:37:19 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:49.770 11:37:19 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:13:49.770 11:37:19 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:13:49.770 11:37:19 -- nvmf/common.sh@409 -- # rdma_device_init 00:13:49.770 11:37:19 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:13:49.770 11:37:19 -- nvmf/common.sh@58 -- # uname 00:13:49.770 11:37:19 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:49.770 11:37:19 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:49.770 11:37:19 -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:49.770 11:37:19 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:49.770 11:37:19 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:49.770 11:37:19 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:49.770 11:37:19 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:49.770 11:37:19 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:49.770 11:37:19 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:13:49.770 11:37:19 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:49.770 11:37:19 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:49.770 11:37:19 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:49.770 11:37:19 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:49.771 11:37:19 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:49.771 11:37:19 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:49.771 11:37:19 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:49.771 11:37:19 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:49.771 11:37:19 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:49.771 11:37:19 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:49.771 11:37:19 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:49.771 11:37:19 -- nvmf/common.sh@105 -- # continue 2 00:13:49.771 11:37:19 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:49.771 11:37:19 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:49.771 11:37:19 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:49.771 11:37:19 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:49.771 11:37:19 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:49.771 11:37:19 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:49.771 11:37:19 -- nvmf/common.sh@105 -- # continue 2 00:13:49.771 11:37:19 -- nvmf/common.sh@73 -- # for nic_name in 
$(get_rdma_if_list) 00:13:49.771 11:37:19 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:49.771 11:37:19 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:49.771 11:37:19 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:49.771 11:37:19 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:49.771 11:37:19 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:49.771 11:37:19 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:49.771 11:37:19 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:49.771 11:37:19 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:49.771 28: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:49.771 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:13:49.771 altname enp24s0f0np0 00:13:49.771 altname ens785f0np0 00:13:49.771 inet 192.168.100.8/24 scope global mlx_0_0 00:13:49.771 valid_lft forever preferred_lft forever 00:13:49.771 11:37:19 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:49.771 11:37:19 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:49.771 11:37:19 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:49.771 11:37:19 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:49.771 11:37:19 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:49.771 11:37:19 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:49.771 11:37:19 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:49.771 11:37:19 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:49.771 11:37:19 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:49.771 29: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:49.771 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:13:49.771 altname enp24s0f1np1 00:13:49.771 altname ens785f1np1 00:13:49.771 inet 192.168.100.9/24 scope global mlx_0_1 00:13:49.771 valid_lft forever preferred_lft forever 00:13:49.771 11:37:19 -- nvmf/common.sh@411 -- # return 0 00:13:49.771 11:37:19 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:49.771 11:37:19 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:49.771 11:37:19 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:13:49.771 11:37:19 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:13:49.771 11:37:19 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:49.771 11:37:19 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:49.771 11:37:19 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:49.771 11:37:19 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:49.771 11:37:19 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:49.771 11:37:19 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:49.771 11:37:19 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:49.771 11:37:19 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:49.771 11:37:19 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:49.771 11:37:19 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:49.771 11:37:19 -- nvmf/common.sh@105 -- # continue 2 00:13:49.771 11:37:19 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:49.771 11:37:19 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:49.771 11:37:19 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:49.771 11:37:19 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:49.771 11:37:19 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:49.771 11:37:19 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:49.771 11:37:19 -- 
nvmf/common.sh@105 -- # continue 2 00:13:49.771 11:37:19 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:49.771 11:37:19 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:49.771 11:37:19 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:49.771 11:37:19 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:49.771 11:37:19 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:49.771 11:37:19 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:49.771 11:37:19 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:49.771 11:37:19 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:49.771 11:37:19 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:49.771 11:37:19 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:49.771 11:37:19 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:49.771 11:37:19 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:49.771 11:37:19 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:13:49.771 192.168.100.9' 00:13:49.771 11:37:19 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:13:49.771 192.168.100.9' 00:13:49.771 11:37:19 -- nvmf/common.sh@446 -- # head -n 1 00:13:49.771 11:37:19 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:49.771 11:37:19 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:13:49.771 192.168.100.9' 00:13:49.771 11:37:19 -- nvmf/common.sh@447 -- # tail -n +2 00:13:49.771 11:37:19 -- nvmf/common.sh@447 -- # head -n 1 00:13:49.771 11:37:19 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:49.771 11:37:19 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:13:49.771 11:37:19 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:49.771 11:37:19 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:13:49.771 11:37:19 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:13:49.771 11:37:19 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:13:49.771 11:37:19 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:49.771 11:37:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:49.771 11:37:19 -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:49.771 11:37:19 -- common/autotest_common.sh@10 -- # set +x 00:13:49.771 11:37:19 -- nvmf/common.sh@470 -- # nvmfpid=3009207 00:13:49.771 11:37:19 -- nvmf/common.sh@471 -- # waitforlisten 3009207 00:13:49.771 11:37:19 -- common/autotest_common.sh@827 -- # '[' -z 3009207 ']' 00:13:49.771 11:37:19 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:49.771 11:37:19 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.771 11:37:19 -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:49.771 11:37:19 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.771 11:37:19 -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:49.771 11:37:19 -- common/autotest_common.sh@10 -- # set +x 00:13:49.771 [2024-05-15 11:37:19.817402] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
00:13:49.771 [2024-05-15 11:37:19.817455] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:49.771 EAL: No free 2048 kB hugepages reported on node 1 00:13:49.771 [2024-05-15 11:37:19.889818] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:49.771 [2024-05-15 11:37:19.981934] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:49.771 [2024-05-15 11:37:19.981972] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:49.771 [2024-05-15 11:37:19.981982] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:49.771 [2024-05-15 11:37:19.981990] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:49.771 [2024-05-15 11:37:19.981998] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:49.771 [2024-05-15 11:37:19.982044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:49.771 [2024-05-15 11:37:19.982067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:49.771 [2024-05-15 11:37:19.982119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:49.771 [2024-05-15 11:37:19.982121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.030 11:37:20 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:50.030 11:37:20 -- common/autotest_common.sh@860 -- # return 0 00:13:50.030 11:37:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:50.030 11:37:20 -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:50.030 11:37:20 -- common/autotest_common.sh@10 -- # set +x 00:13:50.030 11:37:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:50.030 11:37:20 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:50.030 11:37:20 -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.030 11:37:20 -- common/autotest_common.sh@10 -- # set +x 00:13:50.030 [2024-05-15 11:37:20.725787] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x91af00/0x91f3f0) succeed. 00:13:50.030 [2024-05-15 11:37:20.736388] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x91c540/0x960a80) succeed. 
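The bring-up traced above is the same for every target test in this run: nvmf_tgt is launched with a shared-memory id, a tracepoint mask and a core mask, the harness polls until the RPC socket answers, and an RDMA transport is created, which binds both mlx5 ports. A minimal standalone sketch of that sequence, assuming SPDK is built under $SPDK_DIR and the RDMA modules are already loaded as shown earlier in the log (the polling loop below is illustrative, not the harness's waitforlisten):

#!/usr/bin/env bash
# Sketch of the target bring-up traced above (not the harness itself).
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-phy-autotest/spdk}

# Start the target: shm id 0, all tracepoint groups (0xFFFF), cores 0-3 (0xF).
sudo "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Poll the RPC socket until the app is up, as waitforlisten does in the trace.
until sudo "$SPDK_DIR/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
done

# Same transport options as the trace: RDMA, 1024 shared buffers, 8192-byte IO unit.
sudo "$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

echo "nvmf_tgt up as pid $nvmfpid"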
00:13:50.288 11:37:20 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.288 11:37:20 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:50.288 11:37:20 -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.289 11:37:20 -- common/autotest_common.sh@10 -- # set +x 00:13:50.289 Malloc0 00:13:50.289 11:37:20 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.289 11:37:20 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:50.289 11:37:20 -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.289 11:37:20 -- common/autotest_common.sh@10 -- # set +x 00:13:50.289 11:37:20 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.289 11:37:20 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:50.289 11:37:20 -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.289 11:37:20 -- common/autotest_common.sh@10 -- # set +x 00:13:50.289 11:37:20 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.289 11:37:20 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:50.289 11:37:20 -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.289 11:37:20 -- common/autotest_common.sh@10 -- # set +x 00:13:50.289 [2024-05-15 11:37:20.915118] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:50.289 [2024-05-15 11:37:20.915458] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:50.289 11:37:20 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.289 11:37:20 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:50.289 test case1: single bdev can't be used in multiple subsystems 00:13:50.289 11:37:20 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:50.289 11:37:20 -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.289 11:37:20 -- common/autotest_common.sh@10 -- # set +x 00:13:50.289 11:37:20 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.289 11:37:20 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:13:50.289 11:37:20 -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.289 11:37:20 -- common/autotest_common.sh@10 -- # set +x 00:13:50.289 11:37:20 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.289 11:37:20 -- target/nmic.sh@28 -- # nmic_status=0 00:13:50.289 11:37:20 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:50.289 11:37:20 -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.289 11:37:20 -- common/autotest_common.sh@10 -- # set +x 00:13:50.289 [2024-05-15 11:37:20.939250] bdev.c:8011:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:50.289 [2024-05-15 11:37:20.939272] subsystem.c:2015:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:50.289 [2024-05-15 11:37:20.939282] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:50.289 request: 00:13:50.289 { 00:13:50.289 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:50.289 "namespace": { 
00:13:50.289 "bdev_name": "Malloc0", 00:13:50.289 "no_auto_visible": false 00:13:50.289 }, 00:13:50.289 "method": "nvmf_subsystem_add_ns", 00:13:50.289 "req_id": 1 00:13:50.289 } 00:13:50.289 Got JSON-RPC error response 00:13:50.289 response: 00:13:50.289 { 00:13:50.289 "code": -32602, 00:13:50.289 "message": "Invalid parameters" 00:13:50.289 } 00:13:50.289 11:37:20 -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:13:50.289 11:37:20 -- target/nmic.sh@29 -- # nmic_status=1 00:13:50.289 11:37:20 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:50.289 11:37:20 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:50.289 Adding namespace failed - expected result. 00:13:50.289 11:37:20 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:50.289 test case2: host connect to nvmf target in multiple paths 00:13:50.289 11:37:20 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:13:50.289 11:37:20 -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.289 11:37:20 -- common/autotest_common.sh@10 -- # set +x 00:13:50.289 [2024-05-15 11:37:20.955310] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:13:50.289 11:37:20 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.289 11:37:20 -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:51.225 11:37:21 -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:13:52.220 11:37:22 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:52.220 11:37:22 -- common/autotest_common.sh@1194 -- # local i=0 00:13:52.220 11:37:22 -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:52.220 11:37:22 -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:52.220 11:37:22 -- common/autotest_common.sh@1201 -- # sleep 2 00:13:54.756 11:37:24 -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:54.756 11:37:24 -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:54.756 11:37:24 -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:54.756 11:37:24 -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:54.756 11:37:24 -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:54.756 11:37:24 -- common/autotest_common.sh@1204 -- # return 0 00:13:54.756 11:37:24 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:54.756 [global] 00:13:54.756 thread=1 00:13:54.756 invalidate=1 00:13:54.756 rw=write 00:13:54.756 time_based=1 00:13:54.756 runtime=1 00:13:54.756 ioengine=libaio 00:13:54.756 direct=1 00:13:54.756 bs=4096 00:13:54.756 iodepth=1 00:13:54.756 norandommap=0 00:13:54.756 numjobs=1 00:13:54.756 00:13:54.756 verify_dump=1 00:13:54.756 verify_backlog=512 00:13:54.756 verify_state_save=0 00:13:54.756 do_verify=1 00:13:54.756 verify=crc32c-intel 00:13:54.756 [job0] 00:13:54.756 filename=/dev/nvme0n1 00:13:54.756 Could not set queue depth (nvme0n1) 00:13:54.756 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:54.756 fio-3.35 00:13:54.756 Starting 1 thread 00:13:55.692 00:13:55.692 job0: (groupid=0, jobs=1): err= 0: pid=3010068: Wed May 15 11:37:26 2024 00:13:55.692 read: IOPS=6649, BW=26.0MiB/s (27.2MB/s)(26.0MiB/1001msec) 00:13:55.692 slat (nsec): min=8168, max=39359, avg=8884.74, stdev=1109.73 00:13:55.692 clat (usec): min=50, max=276, avg=63.33, stdev= 5.50 00:13:55.692 lat (usec): min=58, max=285, avg=72.21, stdev= 5.61 00:13:55.692 clat percentiles (usec): 00:13:55.692 | 1.00th=[ 53], 5.00th=[ 56], 10.00th=[ 58], 20.00th=[ 60], 00:13:55.692 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 64], 60.00th=[ 65], 00:13:55.692 | 70.00th=[ 67], 80.00th=[ 68], 90.00th=[ 70], 95.00th=[ 72], 00:13:55.692 | 99.00th=[ 76], 99.50th=[ 78], 99.90th=[ 90], 99.95th=[ 103], 00:13:55.692 | 99.99th=[ 277] 00:13:55.692 write: IOPS=6827, BW=26.7MiB/s (28.0MB/s)(26.7MiB/1001msec); 0 zone resets 00:13:55.692 slat (nsec): min=10155, max=43414, avg=10816.74, stdev=1073.57 00:13:55.692 clat (usec): min=47, max=100, avg=61.22, stdev= 4.92 00:13:55.692 lat (usec): min=59, max=143, avg=72.03, stdev= 5.05 00:13:55.692 clat percentiles (nsec): 00:13:55.692 | 1.00th=[50944], 5.00th=[52992], 10.00th=[54528], 20.00th=[57088], 00:13:55.692 | 30.00th=[58624], 40.00th=[60160], 50.00th=[61184], 60.00th=[62720], 00:13:55.692 | 70.00th=[63744], 80.00th=[65280], 90.00th=[67072], 95.00th=[69120], 00:13:55.692 | 99.00th=[73216], 99.50th=[74240], 99.90th=[79360], 99.95th=[84480], 00:13:55.692 | 99.99th=[99840] 00:13:55.692 bw ( KiB/s): min=28614, max=28614, per=100.00%, avg=28614.00, stdev= 0.00, samples=1 00:13:55.692 iops : min= 7153, max= 7153, avg=7153.00, stdev= 0.00, samples=1 00:13:55.692 lat (usec) : 50=0.21%, 100=99.76%, 250=0.03%, 500=0.01% 00:13:55.692 cpu : usr=7.30%, sys=14.40%, ctx=13491, majf=0, minf=1 00:13:55.692 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:55.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.692 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.692 issued rwts: total=6656,6834,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.692 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:55.692 00:13:55.692 Run status group 0 (all jobs): 00:13:55.692 READ: bw=26.0MiB/s (27.2MB/s), 26.0MiB/s-26.0MiB/s (27.2MB/s-27.2MB/s), io=26.0MiB (27.3MB), run=1001-1001msec 00:13:55.692 WRITE: bw=26.7MiB/s (28.0MB/s), 26.7MiB/s-26.7MiB/s (28.0MB/s-28.0MB/s), io=26.7MiB (28.0MB), run=1001-1001msec 00:13:55.692 00:13:55.692 Disk stats (read/write): 00:13:55.692 nvme0n1: ios=6033/6144, merge=0/0, ticks=345/344, in_queue=689, util=90.68% 00:13:55.692 11:37:26 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:57.600 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:57.600 11:37:28 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:57.600 11:37:28 -- common/autotest_common.sh@1215 -- # local i=0 00:13:57.600 11:37:28 -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:57.600 11:37:28 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:57.600 11:37:28 -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:57.600 11:37:28 -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:57.600 11:37:28 -- common/autotest_common.sh@1227 -- # return 0 00:13:57.600 11:37:28 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM 
EXIT 00:13:57.600 11:37:28 -- target/nmic.sh@53 -- # nvmftestfini 00:13:57.600 11:37:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:57.600 11:37:28 -- nvmf/common.sh@117 -- # sync 00:13:57.600 11:37:28 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:57.600 11:37:28 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:57.600 11:37:28 -- nvmf/common.sh@120 -- # set +e 00:13:57.600 11:37:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:57.600 11:37:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:57.600 rmmod nvme_rdma 00:13:57.600 rmmod nvme_fabrics 00:13:57.600 11:37:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:57.600 11:37:28 -- nvmf/common.sh@124 -- # set -e 00:13:57.600 11:37:28 -- nvmf/common.sh@125 -- # return 0 00:13:57.600 11:37:28 -- nvmf/common.sh@478 -- # '[' -n 3009207 ']' 00:13:57.600 11:37:28 -- nvmf/common.sh@479 -- # killprocess 3009207 00:13:57.600 11:37:28 -- common/autotest_common.sh@946 -- # '[' -z 3009207 ']' 00:13:57.600 11:37:28 -- common/autotest_common.sh@950 -- # kill -0 3009207 00:13:57.600 11:37:28 -- common/autotest_common.sh@951 -- # uname 00:13:57.600 11:37:28 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:57.600 11:37:28 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3009207 00:13:57.859 11:37:28 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:57.859 11:37:28 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:57.859 11:37:28 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3009207' 00:13:57.859 killing process with pid 3009207 00:13:57.859 11:37:28 -- common/autotest_common.sh@965 -- # kill 3009207 00:13:57.859 [2024-05-15 11:37:28.395670] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:57.859 11:37:28 -- common/autotest_common.sh@970 -- # wait 3009207 00:13:57.859 [2024-05-15 11:37:28.484192] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:13:58.118 11:37:28 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:58.118 11:37:28 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:13:58.118 00:13:58.118 real 0m14.931s 00:13:58.118 user 0m38.966s 00:13:58.118 sys 0m5.501s 00:13:58.118 11:37:28 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:58.118 11:37:28 -- common/autotest_common.sh@10 -- # set +x 00:13:58.118 ************************************ 00:13:58.118 END TEST nvmf_nmic 00:13:58.118 ************************************ 00:13:58.118 11:37:28 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:13:58.118 11:37:28 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:58.118 11:37:28 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:58.118 11:37:28 -- common/autotest_common.sh@10 -- # set +x 00:13:58.118 ************************************ 00:13:58.118 START TEST nvmf_fio_target 00:13:58.118 ************************************ 00:13:58.118 11:37:28 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:13:58.378 * Looking for test storage... 
00:13:58.378 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:58.378 11:37:28 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:58.378 11:37:28 -- nvmf/common.sh@7 -- # uname -s 00:13:58.378 11:37:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:58.378 11:37:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:58.378 11:37:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:58.378 11:37:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:58.378 11:37:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:58.378 11:37:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:58.378 11:37:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:58.378 11:37:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:58.378 11:37:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:58.378 11:37:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:58.378 11:37:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:13:58.378 11:37:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:13:58.378 11:37:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:58.378 11:37:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:58.378 11:37:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:58.378 11:37:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:58.378 11:37:28 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:58.378 11:37:28 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:58.378 11:37:28 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:58.378 11:37:28 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:58.378 11:37:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.378 11:37:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.378 11:37:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.378 11:37:28 -- paths/export.sh@5 -- # export PATH 00:13:58.378 11:37:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.378 11:37:28 -- nvmf/common.sh@47 -- # : 0 00:13:58.378 11:37:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:58.378 11:37:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:58.378 11:37:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:58.378 11:37:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:58.378 11:37:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:58.378 11:37:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:58.378 11:37:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:58.378 11:37:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:58.378 11:37:28 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:58.378 11:37:28 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:58.378 11:37:28 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:58.378 11:37:28 -- target/fio.sh@16 -- # nvmftestinit 00:13:58.378 11:37:28 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:13:58.378 11:37:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:58.378 11:37:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:58.378 11:37:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:58.378 11:37:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:58.378 11:37:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.378 11:37:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:58.378 11:37:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.378 11:37:28 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:58.378 11:37:28 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:58.378 11:37:28 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:58.378 11:37:28 -- common/autotest_common.sh@10 -- # set +x 00:14:04.954 11:37:34 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:04.954 11:37:34 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:04.954 11:37:34 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:04.954 11:37:34 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:04.954 11:37:34 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:04.954 11:37:34 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:04.954 11:37:34 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:04.954 11:37:34 -- nvmf/common.sh@295 -- # net_devs=() 
00:14:04.954 11:37:34 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:04.954 11:37:34 -- nvmf/common.sh@296 -- # e810=() 00:14:04.954 11:37:34 -- nvmf/common.sh@296 -- # local -ga e810 00:14:04.954 11:37:34 -- nvmf/common.sh@297 -- # x722=() 00:14:04.954 11:37:34 -- nvmf/common.sh@297 -- # local -ga x722 00:14:04.954 11:37:34 -- nvmf/common.sh@298 -- # mlx=() 00:14:04.954 11:37:34 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:04.954 11:37:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:04.954 11:37:34 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:04.954 11:37:34 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:04.954 11:37:34 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:04.954 11:37:34 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:04.954 11:37:34 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:04.954 11:37:34 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:04.954 11:37:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:04.954 11:37:34 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:04.954 11:37:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:04.954 11:37:34 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:04.954 11:37:34 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:04.954 11:37:34 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:04.954 11:37:34 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:04.954 11:37:34 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:04.954 11:37:34 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:04.954 11:37:34 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:04.954 11:37:34 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:04.954 11:37:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:04.954 11:37:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:14:04.954 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:14:04.954 11:37:34 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:04.954 11:37:34 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:04.954 11:37:34 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:04.954 11:37:34 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:04.954 11:37:34 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:04.954 11:37:34 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:04.954 11:37:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:04.954 11:37:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:14:04.954 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:14:04.954 11:37:34 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:04.954 11:37:34 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:04.954 11:37:34 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:04.954 11:37:34 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:04.954 11:37:34 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:04.954 11:37:34 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:04.954 11:37:34 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:04.954 11:37:34 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:04.954 11:37:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:04.954 11:37:34 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.954 11:37:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:04.954 11:37:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.954 11:37:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:14:04.954 Found net devices under 0000:18:00.0: mlx_0_0 00:14:04.954 11:37:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.954 11:37:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:04.954 11:37:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.954 11:37:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:04.954 11:37:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.954 11:37:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:14:04.954 Found net devices under 0000:18:00.1: mlx_0_1 00:14:04.954 11:37:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.954 11:37:34 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:04.954 11:37:34 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:04.954 11:37:34 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:04.954 11:37:34 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:14:04.954 11:37:34 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:14:04.954 11:37:34 -- nvmf/common.sh@409 -- # rdma_device_init 00:14:04.954 11:37:34 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:14:04.954 11:37:34 -- nvmf/common.sh@58 -- # uname 00:14:04.954 11:37:34 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:04.954 11:37:34 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:04.954 11:37:34 -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:04.954 11:37:34 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:04.954 11:37:34 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:04.954 11:37:34 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:04.954 11:37:34 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:04.954 11:37:34 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:04.954 11:37:34 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:14:04.954 11:37:34 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:04.954 11:37:34 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:04.954 11:37:34 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:04.955 11:37:34 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:04.955 11:37:34 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:04.955 11:37:34 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:04.955 11:37:34 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:04.955 11:37:34 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:04.955 11:37:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:04.955 11:37:34 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:04.955 11:37:34 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:04.955 11:37:34 -- nvmf/common.sh@105 -- # continue 2 00:14:04.955 11:37:34 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:04.955 11:37:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:04.955 11:37:34 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:04.955 11:37:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:04.955 11:37:34 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:04.955 11:37:34 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:04.955 11:37:34 -- 
nvmf/common.sh@105 -- # continue 2 00:14:04.955 11:37:34 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:04.955 11:37:34 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:04.955 11:37:34 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:04.955 11:37:34 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:04.955 11:37:34 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:04.955 11:37:34 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:04.955 11:37:34 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:04.955 11:37:34 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:04.955 11:37:34 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:04.955 28: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:04.955 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:14:04.955 altname enp24s0f0np0 00:14:04.955 altname ens785f0np0 00:14:04.955 inet 192.168.100.8/24 scope global mlx_0_0 00:14:04.955 valid_lft forever preferred_lft forever 00:14:04.955 11:37:34 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:04.955 11:37:34 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:04.955 11:37:34 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:04.955 11:37:34 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:04.955 11:37:34 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:04.955 11:37:34 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:04.955 11:37:34 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:04.955 11:37:34 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:04.955 11:37:34 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:04.955 29: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:04.955 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:14:04.955 altname enp24s0f1np1 00:14:04.955 altname ens785f1np1 00:14:04.955 inet 192.168.100.9/24 scope global mlx_0_1 00:14:04.955 valid_lft forever preferred_lft forever 00:14:04.955 11:37:34 -- nvmf/common.sh@411 -- # return 0 00:14:04.955 11:37:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:04.955 11:37:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:04.955 11:37:34 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:14:04.955 11:37:34 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:14:04.955 11:37:34 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:04.955 11:37:34 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:04.955 11:37:34 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:04.955 11:37:34 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:04.955 11:37:34 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:04.955 11:37:34 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:04.955 11:37:34 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:04.955 11:37:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:04.955 11:37:34 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:04.955 11:37:34 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:04.955 11:37:34 -- nvmf/common.sh@105 -- # continue 2 00:14:04.955 11:37:34 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:04.955 11:37:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:04.955 11:37:34 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:04.955 11:37:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:04.955 11:37:34 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
00:14:04.955 11:37:34 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:04.955 11:37:34 -- nvmf/common.sh@105 -- # continue 2 00:14:04.955 11:37:34 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:04.955 11:37:34 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:04.955 11:37:34 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:04.955 11:37:34 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:04.955 11:37:34 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:04.955 11:37:34 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:04.955 11:37:34 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:04.955 11:37:34 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:04.955 11:37:34 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:04.955 11:37:34 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:04.955 11:37:34 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:04.955 11:37:34 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:04.955 11:37:34 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:14:04.955 192.168.100.9' 00:14:04.955 11:37:34 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:14:04.955 192.168.100.9' 00:14:04.955 11:37:34 -- nvmf/common.sh@446 -- # head -n 1 00:14:04.955 11:37:34 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:04.955 11:37:34 -- nvmf/common.sh@447 -- # head -n 1 00:14:04.955 11:37:34 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:14:04.955 192.168.100.9' 00:14:04.955 11:37:34 -- nvmf/common.sh@447 -- # tail -n +2 00:14:04.955 11:37:34 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:04.955 11:37:34 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:14:04.955 11:37:34 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:04.955 11:37:34 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:14:04.955 11:37:34 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:14:04.955 11:37:34 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:14:04.955 11:37:34 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:14:04.955 11:37:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:04.955 11:37:34 -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:04.955 11:37:34 -- common/autotest_common.sh@10 -- # set +x 00:14:04.955 11:37:34 -- nvmf/common.sh@470 -- # nvmfpid=3013340 00:14:04.955 11:37:34 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:04.955 11:37:34 -- nvmf/common.sh@471 -- # waitforlisten 3013340 00:14:04.955 11:37:34 -- common/autotest_common.sh@827 -- # '[' -z 3013340 ']' 00:14:04.955 11:37:34 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.955 11:37:34 -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:04.955 11:37:34 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.955 11:37:34 -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:04.955 11:37:34 -- common/autotest_common.sh@10 -- # set +x 00:14:04.955 [2024-05-15 11:37:34.971444] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
00:14:04.955 [2024-05-15 11:37:34.971505] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:04.955 EAL: No free 2048 kB hugepages reported on node 1
00:14:04.955 [2024-05-15 11:37:35.042359] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:14:04.955 [2024-05-15 11:37:35.130854] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:14:04.955 [2024-05-15 11:37:35.130892] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:14:04.955 [2024-05-15 11:37:35.130905] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:14:04.955 [2024-05-15 11:37:35.130913] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:14:04.955 [2024-05-15 11:37:35.130920] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:14:04.955 [2024-05-15 11:37:35.131024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:14:04.955 [2024-05-15 11:37:35.131124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:14:04.955 [2024-05-15 11:37:35.131171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:14:04.955 [2024-05-15 11:37:35.131178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:05.214 11:37:35 -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:14:05.214 11:37:35 -- common/autotest_common.sh@860 -- # return 0
00:14:05.214 11:37:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:14:05.214 11:37:35 -- common/autotest_common.sh@726 -- # xtrace_disable
00:14:05.214 11:37:35 -- common/autotest_common.sh@10 -- # set +x
00:14:05.214 11:37:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:14:05.214 11:37:35 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:14:05.472 [2024-05-15 11:37:36.017753] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c56f00/0x1c5b3f0) succeed.
00:14:05.472 [2024-05-15 11:37:36.028362] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c58540/0x1c9ca80) succeed.
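Note: with the transport created, the rest of the target-side setup below is plain JSON-RPC. A condensed sketch of that bring-up, using only calls that appear verbatim in this trace (the NQN, serial and 192.168.100.8 listener are this run's values):

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # RDMA transport with 1024 shared data buffers and an 8 KiB I/O unit (fio.sh@19)
    "$RPC" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    # 64 MiB malloc bdev with 512-byte blocks to back a namespace (fio.sh@21)
    "$RPC" bdev_malloc_create 64 512
    # -a allows any host, -s sets the serial number waitforserial greps for
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420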
00:14:05.472 11:37:36 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:14:05.731 11:37:36 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 '
00:14:05.731 11:37:36 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:14:05.990 11:37:36 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1
00:14:05.990 11:37:36 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:14:06.249 11:37:36 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 '
00:14:06.249 11:37:36 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:14:06.249 11:37:36 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3
00:14:06.249 11:37:36 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
00:14:06.507 11:37:37 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:14:06.766 11:37:37 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 '
00:14:06.766 11:37:37 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:14:07.025 11:37:37 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 '
00:14:07.025 11:37:37 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:14:07.025 11:37:37 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6
00:14:07.026 11:37:37 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
00:14:07.284 11:37:37 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:14:07.543 11:37:38 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:14:07.543 11:37:38 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:14:07.802 11:37:38 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:14:07.802 11:37:38 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:14:07.802 11:37:38 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:14:08.061 [2024-05-15 11:37:38.671536] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:14:08.061 [2024-05-15 11:37:38.671942] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:14:08.061 11:37:38 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
00:14:08.320 11:37:38 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
00:14:08.320 11:37:39 -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
00:14:09.697 11:37:40 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4
00:14:09.697 11:37:40 -- common/autotest_common.sh@1194 -- # local i=0
00:14:09.697 11:37:40 -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0
00:14:09.697 11:37:40 -- common/autotest_common.sh@1196 -- # [[ -n 4 ]]
00:14:09.697 11:37:40 -- common/autotest_common.sh@1197 -- # nvme_device_counter=4
00:14:09.697 11:37:40 -- common/autotest_common.sh@1201 -- # sleep 2
00:14:11.601 11:37:42 -- common/autotest_common.sh@1202 -- # (( i++ <= 15 ))
00:14:11.601 11:37:42 -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL
00:14:11.601 11:37:42 -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME
00:14:11.601 11:37:42 -- common/autotest_common.sh@1203 -- # nvme_devices=4
00:14:11.601 11:37:42 -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter ))
00:14:11.601 11:37:42 -- common/autotest_common.sh@1204 -- # return 0
00:14:11.601 11:37:42 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:14:11.601 [global]
00:14:11.601 thread=1
00:14:11.601 invalidate=1
00:14:11.601 rw=write
00:14:11.601 time_based=1
00:14:11.601 runtime=1
00:14:11.601 ioengine=libaio
00:14:11.601 direct=1
00:14:11.601 bs=4096
00:14:11.601 iodepth=1
00:14:11.601 norandommap=0
00:14:11.601 numjobs=1
00:14:11.601 
00:14:11.601 verify_dump=1
00:14:11.601 verify_backlog=512
00:14:11.601 verify_state_save=0
00:14:11.601 do_verify=1
00:14:11.601 verify=crc32c-intel
00:14:11.601 [job0]
00:14:11.601 filename=/dev/nvme0n1
00:14:11.601 [job1]
00:14:11.601 filename=/dev/nvme0n2
00:14:11.601 [job2]
00:14:11.601 filename=/dev/nvme0n3
00:14:11.601 [job3]
00:14:11.601 filename=/dev/nvme0n4
00:14:11.601 Could not set queue depth (nvme0n1)
00:14:11.601 Could not set queue depth (nvme0n2)
00:14:11.601 Could not set queue depth (nvme0n3)
00:14:11.601 Could not set queue depth (nvme0n4)
00:14:11.859 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:14:11.859 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:14:11.859 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:14:11.859 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:14:11.859 fio-3.35
00:14:11.859 Starting 4 threads
00:14:13.236 
00:14:13.236 job0: (groupid=0, jobs=1): err= 0: pid=3014578: Wed May 15 11:37:43 2024
00:14:13.236 read: IOPS=4815, BW=18.8MiB/s (19.7MB/s)(18.8MiB/1001msec)
00:14:13.236 slat (nsec): min=8143, max=37214, avg=8833.29, stdev=1073.72
00:14:13.236 clat (usec): min=67, max=204, avg=93.43, stdev=19.74
00:14:13.236 lat (usec): min=78, max=214, avg=102.26, stdev=19.83
00:14:13.236 clat percentiles (usec):
00:14:13.236 | 1.00th=[ 75], 5.00th=[ 78], 10.00th=[ 80], 20.00th=[ 82],
00:14:13.236 | 30.00th=[ 84], 40.00th=[ 85], 50.00th=[ 87], 60.00th=[ 89],
00:14:13.236 | 70.00th=[ 91], 80.00th=[ 95], 90.00th=[ 133], 95.00th=[ 141],
00:14:13.236 | 99.00th=[ 155], 99.50th=[ 165], 99.90th=[ 182], 99.95th=[ 186],
00:14:13.236 | 99.99th=[ 204]
00:14:13.236 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets
00:14:13.236 slat (nsec): min=10282, max=43606, avg=10994.54, stdev=1281.68
00:14:13.236 clat (usec): min=61, max=432, avg=84.15, stdev=15.75
00:14:13.236 lat (usec): min=72, max=443, avg=95.14, stdev=15.87
00:14:13.236 clat percentiles (usec):
00:14:13.236 | 1.00th=[ 70], 5.00th=[ 73], 10.00th=[ 75], 20.00th=[ 77],
00:14:13.236 | 30.00th=[ 79], 40.00th=[ 80], 50.00th=[ 81], 60.00th=[ 83],
00:14:13.236 | 70.00th=[ 85], 80.00th=[ 87], 90.00th=[ 92], 95.00th=[ 124],
00:14:13.236 | 99.00th=[ 149], 99.50th=[ 161], 99.90th=[ 190], 99.95th=[ 198],
00:14:13.236 | 99.99th=[ 433]
00:14:13.236 bw ( KiB/s): min=21584, max=21584, per=31.16%, avg=21584.00, stdev= 0.00, samples=1
00:14:13.236 iops : min= 5396, max= 5396, avg=5396.00, stdev= 0.00, samples=1
00:14:13.236 lat (usec) : 100=89.30%, 250=10.69%, 500=0.01%
00:14:13.236 cpu : usr=5.50%, sys=10.70%, ctx=9940, majf=0, minf=1
00:14:13.236 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:13.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:13.236 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:13.236 issued rwts: total=4820,5120,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:13.236 latency : target=0, window=0, percentile=100.00%, depth=1
00:14:13.236 job1: (groupid=0, jobs=1): err= 0: pid=3014595: Wed May 15 11:37:43 2024
00:14:13.236 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec)
00:14:13.236 slat (nsec): min=8254, max=31852, avg=9121.10, stdev=1178.00
00:14:13.236 clat (usec): min=70, max=387, avg=125.63, stdev=23.58
00:14:13.236 lat (usec): min=79, max=396, avg=134.75, stdev=23.52
00:14:13.236 clat percentiles (usec):
00:14:13.236 | 1.00th=[ 78], 5.00th=[ 82], 10.00th=[ 86], 20.00th=[ 99],
00:14:13.236 | 30.00th=[ 125], 40.00th=[ 129], 50.00th=[ 133], 60.00th=[ 135],
00:14:13.236 | 70.00th=[ 139], 80.00th=[ 141], 90.00th=[ 147], 95.00th=[ 153],
00:14:13.236 | 99.00th=[ 182], 99.50th=[ 188], 99.90th=[ 196], 99.95th=[ 210],
00:14:13.236 | 99.99th=[ 388]
00:14:13.236 write: IOPS=3742, BW=14.6MiB/s (15.3MB/s)(14.6MiB/1001msec); 0 zone resets
00:14:13.236 slat (nsec): min=10415, max=37796, avg=11350.79, stdev=1412.38
00:14:13.236 clat (usec): min=64, max=433, avg=122.60, stdev=26.69
00:14:13.236 lat (usec): min=76, max=444, avg=133.95, stdev=26.62
00:14:13.236 clat percentiles (usec):
00:14:13.236 | 1.00th=[ 72], 5.00th=[ 77], 10.00th=[ 81], 20.00th=[ 92],
00:14:13.236 | 30.00th=[ 119], 40.00th=[ 125], 50.00th=[ 128], 60.00th=[ 133],
00:14:13.236 | 70.00th=[ 135], 80.00th=[ 139], 90.00th=[ 147], 95.00th=[ 167],
00:14:13.236 | 99.00th=[ 186], 99.50th=[ 192], 99.90th=[ 249], 99.95th=[ 293],
00:14:13.236 | 99.99th=[ 433]
00:14:13.236 bw ( KiB/s): min=16384, max=16384, per=23.65%, avg=16384.00, stdev= 0.00, samples=1
00:14:13.236 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1
00:14:13.236 lat (usec) : 100=21.01%, 250=78.94%, 500=0.05%
00:14:13.236 cpu : usr=4.70%, sys=7.60%, ctx=7330, majf=0, minf=1
00:14:13.236 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:13.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:13.236 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:13.236 issued rwts: total=3584,3746,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:13.236 latency : target=0, window=0, percentile=100.00%, depth=1
00:14:13.236 job2: (groupid=0, jobs=1): err= 0: pid=3014609: Wed May 15 11:37:43 2024
00:14:13.236 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec)
00:14:13.236 slat (nsec): min=8366, max=42022, avg=9018.49, stdev=1034.15
00:14:13.236 clat (usec): min=66, max=395, avg=94.65, stdev=14.50
00:14:13.236 lat (usec): min=75, max=437, avg=103.67, stdev=14.77
00:14:13.236 clat percentiles (usec):
00:14:13.236 | 1.00th=[ 81], 5.00th=[ 84], 10.00th=[ 85], 20.00th=[ 88],
00:14:13.236 | 30.00th=[ 89], 40.00th=[ 91], 50.00th=[ 93], 60.00th=[ 94],
00:14:13.236 | 70.00th=[ 96], 80.00th=[ 98], 90.00th=[ 103], 95.00th=[ 110],
00:14:13.236 | 99.00th=[ 167], 99.50th=[ 178], 99.90th=[ 217], 99.95th=[ 260],
00:14:13.236 | 99.99th=[ 396]
00:14:13.237 write: IOPS=4880, BW=19.1MiB/s (20.0MB/s)(19.1MiB/1001msec); 0 zone resets
00:14:13.237 slat (nsec): min=10616, max=40853, avg=11293.58, stdev=1206.82
00:14:13.237 clat (usec): min=71, max=281, avg=91.57, stdev=14.52
00:14:13.237 lat (usec): min=83, max=292, avg=102.86, stdev=14.65
00:14:13.237 clat percentiles (usec):
00:14:13.237 | 1.00th=[ 77], 5.00th=[ 80], 10.00th=[ 81], 20.00th=[ 83],
00:14:13.237 | 30.00th=[ 85], 40.00th=[ 87], 50.00th=[ 88], 60.00th=[ 90],
00:14:13.237 | 70.00th=[ 92], 80.00th=[ 95], 90.00th=[ 106], 95.00th=[ 127],
00:14:13.237 | 99.00th=[ 149], 99.50th=[ 159], 99.90th=[ 174], 99.95th=[ 180],
00:14:13.237 | 99.99th=[ 281]
00:14:13.237 bw ( KiB/s): min=20480, max=20480, per=29.57%, avg=20480.00, stdev= 0.00, samples=1
00:14:13.237 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1
00:14:13.237 lat (usec) : 100=86.10%, 250=13.86%, 500=0.04%
00:14:13.237 cpu : usr=6.20%, sys=9.50%, ctx=9494, majf=0, minf=1
00:14:13.237 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:13.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:13.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:13.237 issued rwts: total=4608,4885,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:13.237 latency : target=0, window=0, percentile=100.00%, depth=1
00:14:13.237 job3: (groupid=0, jobs=1): err= 0: pid=3014610: Wed May 15 11:37:43 2024
00:14:13.237 read: IOPS=3323, BW=13.0MiB/s (13.6MB/s)(13.0MiB/1001msec)
00:14:13.237 slat (nsec): min=8386, max=33401, avg=9400.09, stdev=1377.21
00:14:13.237 clat (usec): min=78, max=307, avg=135.16, stdev=15.47
00:14:13.237 lat (usec): min=87, max=317, avg=144.56, stdev=15.51
00:14:13.237 clat percentiles (usec):
00:14:13.237 | 1.00th=[ 92], 5.00th=[ 115], 10.00th=[ 122], 20.00th=[ 127],
00:14:13.237 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 137],
00:14:13.237 | 70.00th=[ 141], 80.00th=[ 143], 90.00th=[ 149], 95.00th=[ 161],
00:14:13.237 | 99.00th=[ 182], 99.50th=[ 190], 99.90th=[ 251], 99.95th=[ 265],
00:14:13.237 | 99.99th=[ 310]
00:14:13.237 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets
00:14:13.237 slat (nsec): min=10567, max=51099, avg=11581.59, stdev=1450.06
00:14:13.237 clat (usec): min=72, max=377, avg=129.18, stdev=14.32
00:14:13.237 lat (usec): min=82, max=389, avg=140.76, stdev=14.42
00:14:13.237 clat percentiles (usec):
00:14:13.237 | 1.00th=[ 88], 5.00th=[ 110], 10.00th=[ 117], 20.00th=[ 122],
00:14:13.237 | 30.00th=[ 125], 40.00th=[ 127], 50.00th=[ 129], 60.00th=[ 133],
00:14:13.237 | 70.00th=[ 135], 80.00th=[ 139], 90.00th=[ 143], 95.00th=[ 151],
00:14:13.237 | 99.00th=[ 172], 99.50th=[ 180], 99.90th=[ 192], 99.95th=[ 227],
00:14:13.237 | 99.99th=[ 379]
00:14:13.237 bw ( KiB/s): min=15504, max=15504, per=22.38%, avg=15504.00, stdev= 0.00, samples=1
00:14:13.237 iops : min= 3876, max= 3876, avg=3876.00, stdev= 0.00, samples=1
00:14:13.237 lat (usec) : 100=3.13%, 250=96.80%, 500=0.07%
00:14:13.237 cpu : usr=4.10%, sys=7.70%, ctx=6911, majf=0, minf=1
00:14:13.237 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:13.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:13.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:13.237 issued rwts: total=3327,3584,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:13.237 latency : target=0, window=0, percentile=100.00%, depth=1
00:14:13.237 
00:14:13.237 Run status group 0 (all jobs):
00:14:13.237 READ: bw=63.8MiB/s (66.9MB/s), 13.0MiB/s-18.8MiB/s (13.6MB/s-19.7MB/s), io=63.8MiB (66.9MB), run=1001-1001msec
00:14:13.237 WRITE: bw=67.6MiB/s (70.9MB/s), 14.0MiB/s-20.0MiB/s (14.7MB/s-20.9MB/s), io=67.7MiB (71.0MB), run=1001-1001msec
00:14:13.237 
00:14:13.237 Disk stats (read/write):
00:14:13.237 nvme0n1: ios=4146/4364, merge=0/0, ticks=364/359, in_queue=723, util=85.47%
00:14:13.237 nvme0n2: ios=3039/3072, merge=0/0, ticks=370/366, in_queue=736, util=86.25%
00:14:13.237 nvme0n3: ios=3895/4096, merge=0/0, ticks=342/347, in_queue=689, util=88.88%
00:14:13.237 nvme0n4: ios=2769/3072, merge=0/0, ticks=363/373, in_queue=736, util=89.72%
00:14:13.237 11:37:43 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v
00:14:13.237 [global]
00:14:13.237 thread=1
00:14:13.237 invalidate=1
00:14:13.237 rw=randwrite
00:14:13.237 time_based=1
00:14:13.237 runtime=1
00:14:13.237 ioengine=libaio
00:14:13.237 direct=1
00:14:13.237 bs=4096
00:14:13.237 iodepth=1
00:14:13.237 norandommap=0
00:14:13.237 numjobs=1
00:14:13.237 
00:14:13.237 verify_dump=1
00:14:13.237 verify_backlog=512
00:14:13.237 verify_state_save=0
00:14:13.237 do_verify=1
00:14:13.237 verify=crc32c-intel
00:14:13.237 [job0]
00:14:13.237 filename=/dev/nvme0n1
00:14:13.237 [job1]
00:14:13.237 filename=/dev/nvme0n2
00:14:13.237 [job2]
00:14:13.237 filename=/dev/nvme0n3
00:14:13.237 [job3]
00:14:13.237 filename=/dev/nvme0n4
00:14:13.237 Could not set queue depth (nvme0n1)
00:14:13.237 Could not set queue depth (nvme0n2)
00:14:13.237 Could not set queue depth (nvme0n3)
00:14:13.237 Could not set queue depth (nvme0n4)
00:14:13.237 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:14:13.237 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:14:13.237 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:14:13.237 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:14:13.237 fio-3.35
00:14:13.237 Starting 4 threads
00:14:14.615 
00:14:14.615 job0: (groupid=0, jobs=1): err= 0: pid=3014903: Wed May 15 11:37:45 2024
00:14:14.615 read: IOPS=2861, BW=11.2MiB/s (11.7MB/s)(11.2MiB/1001msec)
00:14:14.615 slat (nsec): min=8208, max=31236, avg=9319.02, stdev=1057.70
00:14:14.615 clat (usec): min=81, max=354, avg=163.95, stdev=21.10
00:14:14.615 lat (usec): min=90, max=363, avg=173.27, stdev=21.16
00:14:14.615 clat percentiles (usec):
00:14:14.615 | 1.00th=[ 96], 5.00th=[ 141], 10.00th=[ 149], 20.00th=[ 155],
00:14:14.615 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 165],
00:14:14.615 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 182], 95.00th=[ 198],
00:14:14.615 | 99.00th=[ 227], 99.50th=[ 239], 99.90th=[ 326], 99.95th=[ 347],
00:14:14.615 | 99.99th=[ 355]
00:14:14.615 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets
00:14:14.615 slat (nsec): min=10163, max=40183, avg=11203.70, stdev=1415.78
00:14:14.615 clat (usec): min=69, max=320, avg=148.87, stdev=30.12
00:14:14.615 lat (usec): min=79, max=335, avg=160.07, stdev=30.20
00:14:14.615 clat percentiles (usec):
00:14:14.615 | 1.00th=[ 77], 5.00th=[ 84], 10.00th=[ 93], 20.00th=[ 141],
00:14:14.615 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 155],
00:14:14.615 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 174], 95.00th=[ 196],
00:14:14.615 | 99.00th=[ 223], 99.50th=[ 265], 99.90th=[ 302], 99.95th=[ 314],
00:14:14.615 | 99.99th=[ 322]
00:14:14.615 bw ( KiB/s): min=12288, max=12288, per=20.92%, avg=12288.00, stdev= 0.00, samples=1
00:14:14.615 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1
00:14:14.615 lat (usec) : 100=6.70%, 250=92.77%, 500=0.52%
00:14:14.615 cpu : usr=2.90%, sys=7.10%, ctx=5936, majf=0, minf=1
00:14:14.615 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:14.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:14.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:14.615 issued rwts: total=2864,3072,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:14.615 latency : target=0, window=0, percentile=100.00%, depth=1
00:14:14.615 job1: (groupid=0, jobs=1): err= 0: pid=3014904: Wed May 15 11:37:45 2024
00:14:14.615 read: IOPS=2725, BW=10.6MiB/s (11.2MB/s)(10.7MiB/1001msec)
00:14:14.615 slat (nsec): min=8307, max=21208, avg=9341.84, stdev=1049.53
00:14:14.615 clat (usec): min=85, max=367, avg=164.61, stdev=21.89
00:14:14.615 lat (usec): min=94, max=375, avg=173.95, stdev=21.93
00:14:14.615 clat percentiles (usec):
00:14:14.615 | 1.00th=[ 97], 5.00th=[ 143], 10.00th=[ 149], 20.00th=[ 155],
00:14:14.615 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 165],
00:14:14.615 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 182], 95.00th=[ 204],
00:14:14.615 | 99.00th=[ 231], 99.50th=[ 255], 99.90th=[ 359], 99.95th=[ 367],
00:14:14.615 | 99.99th=[ 367]
00:14:14.615 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets
00:14:14.615 slat (nsec): min=10215, max=38864, avg=11304.47, stdev=1413.62
00:14:14.615 clat (usec): min=71, max=362, avg=155.90, stdev=23.43
00:14:14.615 lat (usec): min=82, max=373, avg=167.21, stdev=23.45
00:14:14.615 clat percentiles (usec):
00:14:14.615 | 1.00th=[ 90], 5.00th=[ 126], 10.00th=[ 137], 20.00th=[ 145],
00:14:14.615 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 157],
00:14:14.615 | 70.00th=[ 161], 80.00th=[ 165], 90.00th=[ 178], 95.00th=[ 202],
00:14:14.615 | 99.00th=[ 225], 99.50th=[ 249], 99.90th=[ 326], 99.95th=[ 359],
00:14:14.615 | 99.99th=[ 363]
00:14:14.615 bw ( KiB/s): min=12288, max=12288, per=20.92%, avg=12288.00, stdev= 0.00, samples=1
00:14:14.615 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1
00:14:14.615 lat (usec) : 100=1.88%, 250=97.62%, 500=0.50%
00:14:14.615 cpu : usr=3.70%, sys=6.20%, ctx=5800, majf=0, minf=1
00:14:14.615 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:14.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:14.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:14.615 issued rwts: total=2728,3072,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:14.615 latency : target=0, window=0, percentile=100.00%, depth=1
00:14:14.616 job2: (groupid=0, jobs=1): err= 0: pid=3014905: Wed May 15 11:37:45 2024
00:14:14.616 read: IOPS=2929, BW=11.4MiB/s (12.0MB/s)(11.5MiB/1001msec)
00:14:14.616 slat (nsec): min=8484, max=28069, avg=9537.59, stdev=1122.94
00:14:14.616 clat (usec): min=78, max=225, avg=160.24, stdev=18.36
00:14:14.616 lat (usec): min=88, max=235, avg=169.77, stdev=18.37
00:14:14.616 clat percentiles (usec):
00:14:14.616 | 1.00th=[ 94], 5.00th=[ 117], 10.00th=[ 145], 20.00th=[ 153],
00:14:14.616 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 165],
00:14:14.616 | 70.00th=[ 169], 80.00th=[ 172], 90.00th=[ 178], 95.00th=[ 184],
00:14:14.616 | 99.00th=[ 198], 99.50th=[ 204], 99.90th=[ 221], 99.95th=[ 227],
00:14:14.616 | 99.99th=[ 227]
00:14:14.616 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets
00:14:14.616 slat (nsec): min=10459, max=39445, avg=11374.78, stdev=1169.18
00:14:14.616 clat (usec): min=72, max=216, avg=147.91, stdev=19.52
00:14:14.616 lat (usec): min=84, max=228, avg=159.28, stdev=19.49
00:14:14.616 clat percentiles (usec):
00:14:14.616 | 1.00th=[ 86], 5.00th=[ 103], 10.00th=[ 126], 20.00th=[ 139],
00:14:14.616 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 151], 60.00th=[ 155],
00:14:14.616 | 70.00th=[ 157], 80.00th=[ 161], 90.00th=[ 167], 95.00th=[ 176],
00:14:14.616 | 99.00th=[ 190], 99.50th=[ 196], 99.90th=[ 206], 99.95th=[ 212],
00:14:14.616 | 99.99th=[ 217]
00:14:14.616 bw ( KiB/s): min=12288, max=12288, per=20.92%, avg=12288.00, stdev= 0.00, samples=1
00:14:14.616 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1
00:14:14.616 lat (usec) : 100=3.30%, 250=96.70%
00:14:14.616 cpu : usr=3.40%, sys=6.90%, ctx=6004, majf=0, minf=1
00:14:14.616 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:14.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:14.616 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:14.616 issued rwts: total=2932,3072,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:14.616 latency : target=0, window=0, percentile=100.00%, depth=1
00:14:14.616 job3: (groupid=0, jobs=1): err= 0: pid=3014906: Wed May 15 11:37:45 2024
00:14:14.616 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec)
00:14:14.616 slat (nsec): min=8307, max=27095, avg=8904.96, stdev=902.16
00:14:14.616 clat (usec): min=68, max=341, avg=84.78, stdev= 8.03
00:14:14.616 lat (usec): min=79, max=350, avg=93.68, stdev= 8.08
00:14:14.616 clat percentiles (usec):
00:14:14.616 | 1.00th=[ 75], 5.00th=[ 77], 10.00th=[ 79], 20.00th=[ 81],
00:14:14.616 | 30.00th=[ 82], 40.00th=[ 83], 50.00th=[ 84], 60.00th=[ 86],
00:14:14.616 | 70.00th=[ 87], 80.00th=[ 89], 90.00th=[ 91], 95.00th=[ 94],
00:14:14.616 | 99.00th=[ 121], 99.50th=[ 135], 99.90th=[ 145], 99.95th=[ 151],
00:14:14.616 | 99.99th=[ 343]
00:14:14.616 write: IOPS=5480, BW=21.4MiB/s (22.4MB/s)(21.4MiB/1001msec); 0 zone resets
00:14:14.616 slat (nsec): min=6248, max=31949, avg=10862.52, stdev=1140.04
00:14:14.616 clat (usec): min=59, max=112, avg=80.15, stdev= 5.04
00:14:14.616 lat (usec): min=72, max=120, avg=91.01, stdev= 5.18
00:14:14.616 clat percentiles (usec):
00:14:14.616 | 1.00th=[ 71], 5.00th=[ 73], 10.00th=[ 75], 20.00th=[ 77],
00:14:14.616 | 30.00th=[ 78], 40.00th=[ 79], 50.00th=[ 80], 60.00th=[ 82],
00:14:14.616 | 70.00th=[ 83], 80.00th=[ 85], 90.00th=[ 87], 95.00th=[ 89],
00:14:14.616 | 99.00th=[ 94], 99.50th=[ 97], 99.90th=[ 101], 99.95th=[ 105],
00:14:14.616 | 99.99th=[ 114]
00:14:14.616 bw ( KiB/s): min=21968, max=21968, per=37.39%, avg=21968.00, stdev= 0.00, samples=1
00:14:14.616 iops : min= 5492, max= 5492, avg=5492.00, stdev= 0.00, samples=1
00:14:14.616 lat (usec) : 100=99.22%, 250=0.77%, 500=0.01%
00:14:14.616 cpu : usr=6.40%, sys=10.80%, ctx=10606, majf=0, minf=1
00:14:14.616 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:14.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:14.616 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:14.616 issued rwts: total=5120,5486,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:14.616 latency : target=0, window=0, percentile=100.00%, depth=1
00:14:14.616 
00:14:14.616 Run status group 0 (all jobs):
00:14:14.616 READ: bw=53.2MiB/s (55.8MB/s), 10.6MiB/s-20.0MiB/s (11.2MB/s-20.9MB/s), io=53.3MiB (55.9MB), run=1001-1001msec
00:14:14.616 WRITE: bw=57.4MiB/s (60.2MB/s), 12.0MiB/s-21.4MiB/s (12.6MB/s-22.4MB/s), io=57.4MiB (60.2MB), run=1001-1001msec
00:14:14.616 
00:14:14.616 Disk stats (read/write):
00:14:14.616 nvme0n1: ios=2446/2560, merge=0/0, ticks=389/351, in_queue=740, util=83.97%
00:14:14.616 nvme0n2: ios=2266/2560, merge=0/0, ticks=349/374, in_queue=723, util=84.97%
00:14:14.616 nvme0n3: ios=2433/2560, merge=0/0, ticks=384/362, in_queue=746, util=88.22%
00:14:14.616 nvme0n4: ios=4202/4608, merge=0/0, ticks=326/349, in_queue=675, util=89.45%
00:14:14.616 11:37:45 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v
00:14:14.616 [global]
00:14:14.616 thread=1
00:14:14.616 invalidate=1
00:14:14.616 rw=write
00:14:14.616 time_based=1
00:14:14.616 runtime=1
00:14:14.616 ioengine=libaio
00:14:14.616 direct=1
00:14:14.616 bs=4096
00:14:14.616 iodepth=128
00:14:14.616 norandommap=0
00:14:14.616 numjobs=1
00:14:14.616 
00:14:14.616 verify_dump=1
00:14:14.616 verify_backlog=512
00:14:14.616 verify_state_save=0
00:14:14.616 do_verify=1
00:14:14.616 verify=crc32c-intel
00:14:14.616 [job0]
00:14:14.616 filename=/dev/nvme0n1
00:14:14.616 [job1]
00:14:14.616 filename=/dev/nvme0n2
00:14:14.616 [job2]
00:14:14.616 filename=/dev/nvme0n3
00:14:14.616 [job3]
00:14:14.616 filename=/dev/nvme0n4
00:14:14.616 Could not set queue depth (nvme0n1)
00:14:14.616 Could not set queue depth (nvme0n2)
00:14:14.616 Could not set queue depth (nvme0n3)
00:14:14.616 Could not set queue depth (nvme0n4)
00:14:14.875 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:14:14.875 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:14:14.875 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:14:14.875 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:14:14.875 fio-3.35
00:14:14.875 Starting 4 threads
00:14:16.252 
00:14:16.252 job0: (groupid=0, jobs=1): err= 0: pid=3015208: Wed May 15 11:37:46 2024
00:14:16.252 read: IOPS=7160, BW=28.0MiB/s (29.3MB/s)(28.0MiB/1001msec)
00:14:16.252 slat (usec): min=2, max=5414, avg=67.36, stdev=312.52
00:14:16.252 clat (usec): min=2939, max=22325, avg=8768.80, stdev=2997.33
00:14:16.252 lat (usec): min=2942, max=22338, avg=8836.16, stdev=3014.63
00:14:16.252 clat percentiles (usec):
00:14:16.252 | 1.00th=[ 4228], 5.00th=[ 5014], 10.00th=[ 5866], 20.00th=[ 6456],
00:14:16.252 | 30.00th=[ 6849], 40.00th=[ 7111], 50.00th=[ 7767], 60.00th=[ 8979],
00:14:16.252 | 70.00th=[ 9896], 80.00th=[11076], 90.00th=[13173], 95.00th=[14615],
00:14:16.252 | 99.00th=[17695], 99.50th=[18482], 99.90th=[19268], 99.95th=[20579],
00:14:16.252 | 99.99th=[22414]
00:14:16.252 write: IOPS=7577, BW=29.6MiB/s (31.0MB/s)(29.6MiB/1001msec); 0 zone resets
00:14:16.252 slat (usec): min=2, max=5635, avg=64.37, stdev=300.81
00:14:16.252 clat (usec): min=376, max=20642, avg=8422.45, stdev=2789.84
00:14:16.252 lat (usec): min=835, max=20651, avg=8486.82, stdev=2804.44
00:14:16.252 clat percentiles (usec):
00:14:16.252 | 1.00th=[ 3556], 5.00th=[ 5407], 10.00th=[ 5932], 20.00th=[ 6325],
00:14:16.252 | 30.00th=[ 6652], 40.00th=[ 6980], 50.00th=[ 7570], 60.00th=[ 8455],
00:14:16.252 | 70.00th=[ 9372], 80.00th=[10421], 90.00th=[12518], 95.00th=[14222],
00:14:16.252 | 99.00th=[16909], 99.50th=[17695], 99.90th=[20579], 99.95th=[20579],
00:14:16.252 | 99.99th=[20579]
00:14:16.252 bw ( KiB/s): min=28672, max=28672, per=26.57%, avg=28672.00, stdev= 0.00, samples=1
00:14:16.252 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=1
00:14:16.252 lat (usec) : 500=0.01%, 1000=0.07%
00:14:16.252 lat (msec) : 2=0.09%, 4=0.89%, 10=72.71%, 20=26.10%, 50=0.12%
00:14:16.252 cpu : usr=3.40%, sys=6.20%, ctx=1378, majf=0, minf=1
00:14:16.252 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
00:14:16.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:16.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:14:16.252 issued rwts: total=7168,7585,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:16.252 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:16.252 job1: (groupid=0, jobs=1): err= 0: pid=3015209: Wed May 15 11:37:46 2024
00:14:16.252 read: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec)
00:14:16.252 slat (usec): min=2, max=6102, avg=85.90, stdev=384.33
00:14:16.252 clat (usec): min=2347, max=22386, avg=11092.56, stdev=3828.22
00:14:16.252 lat (usec): min=2354, max=23337, avg=11178.46, stdev=3847.99
00:14:16.252 clat percentiles (usec):
00:14:16.252 | 1.00th=[ 5211], 5.00th=[ 6325], 10.00th=[ 6587], 20.00th=[ 6980],
00:14:16.252 | 30.00th=[ 8029], 40.00th=[ 9634], 50.00th=[10814], 60.00th=[12125],
00:14:16.252 | 70.00th=[13435], 80.00th=[14484], 90.00th=[16188], 95.00th=[17957],
00:14:16.252 | 99.00th=[20055], 99.50th=[20317], 99.90th=[21103], 99.95th=[22414],
00:14:16.252 | 99.99th=[22414]
00:14:16.252 write: IOPS=6135, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec); 0 zone resets
00:14:16.252 slat (usec): min=2, max=5384, avg=72.98, stdev=327.70
00:14:16.252 clat (usec): min=1761, max=22378, avg=9546.55, stdev=3707.34
00:14:16.252 lat (usec): min=2293, max=22383, avg=9619.53, stdev=3727.29
00:14:16.252 clat percentiles (usec):
00:14:16.252 | 1.00th=[ 3818], 5.00th=[ 5211], 10.00th=[ 5932], 20.00th=[ 6390],
00:14:16.252 | 30.00th=[ 6652], 40.00th=[ 7308], 50.00th=[ 8717], 60.00th=[10028],
00:14:16.252 | 70.00th=[11207], 80.00th=[12780], 90.00th=[15139], 95.00th=[16581],
00:14:16.252 | 99.00th=[18482], 99.50th=[20317], 99.90th=[22414], 99.95th=[22414],
00:14:16.252 | 99.99th=[22414]
00:14:16.252 bw ( KiB/s): min=21560, max=27592, per=22.77%, avg=24576.00, stdev=4265.27, samples=2
00:14:16.252 iops : min= 5390, max= 6898, avg=6144.00, stdev=1066.32, samples=2
00:14:16.252 lat (msec) : 2=0.01%, 4=0.92%, 10=50.64%, 20=47.42%, 50=1.01%
00:14:16.252 cpu : usr=3.20%, sys=5.00%, ctx=1489, majf=0, minf=1
00:14:16.252 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5%
00:14:16.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:16.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:14:16.252 issued rwts: total=6144,6148,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:16.252 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:16.252 job2: (groupid=0, jobs=1): err= 0: pid=3015210: Wed May 15 11:37:46 2024
00:14:16.252 read: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec)
00:14:16.252 slat (usec): min=2, max=5239, avg=81.91, stdev=370.30
00:14:16.252 clat (usec): min=3712, max=24147, avg=10555.13, stdev=3167.97
00:14:16.252 lat (usec): min=3720, max=24150, avg=10637.04, stdev=3182.01
00:14:16.252 clat percentiles (usec):
00:14:16.252 | 1.00th=[ 5473], 5.00th=[ 6783], 10.00th=[ 7373], 20.00th=[ 7701],
00:14:16.252 | 30.00th=[ 8225], 40.00th=[ 8717], 50.00th=[ 9765], 60.00th=[10945],
00:14:16.252 | 70.00th=[12256], 80.00th=[13566], 90.00th=[15008], 95.00th=[16057],
00:14:16.252 | 99.00th=[19268], 99.50th=[20055], 99.90th=[22676], 99.95th=[22676],
00:14:16.252 | 99.99th=[24249]
00:14:16.252 write: IOPS=6231, BW=24.3MiB/s (25.5MB/s)(24.4MiB/1002msec); 0 zone resets
00:14:16.252 slat (usec): min=2, max=6127, avg=75.25, stdev=334.09
00:14:16.252 clat (usec): min=1742, max=22638, avg=9936.61, stdev=3160.30
00:14:16.252 lat (usec): min=3413, max=22649, avg=10011.86, stdev=3173.53
00:14:16.252 clat percentiles (usec):
00:14:16.252 | 1.00th=[ 5211], 5.00th=[ 6456], 10.00th=[ 7046], 20.00th=[ 7373],
00:14:16.252 | 30.00th=[ 7767], 40.00th=[ 8160], 50.00th=[ 8717], 60.00th=[ 9896],
00:14:16.252 | 70.00th=[11076], 80.00th=[13173], 90.00th=[14615], 95.00th=[15008],
00:14:16.252 | 99.00th=[20317], 99.50th=[21365], 99.90th=[22676], 99.95th=[22676],
00:14:16.252 | 99.99th=[22676]
00:14:16.252 bw ( KiB/s): min=24576, max=24576, per=22.77%, avg=24576.00, stdev= 0.00, samples=2
00:14:16.252 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2
00:14:16.252 lat (msec) : 2=0.01%, 4=0.15%, 10=56.09%, 20=42.95%, 50=0.81%
00:14:16.252 cpu : usr=2.20%, sys=6.59%, ctx=1133, majf=0, minf=1
00:14:16.252 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5%
00:14:16.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:16.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:14:16.252 issued rwts: total=6144,6244,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:16.252 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:16.252 job3: (groupid=0, jobs=1): err= 0: pid=3015211: Wed May 15 11:37:46 2024
00:14:16.253 read: IOPS=6649, BW=26.0MiB/s (27.2MB/s)(26.0MiB/1001msec)
00:14:16.253 slat (usec): min=2, max=5275, avg=70.21, stdev=335.80
00:14:16.253 clat (usec): min=547, max=19945, avg=9320.26, stdev=2899.05
00:14:16.253 lat (usec): min=628, max=20553, avg=9390.47, stdev=2914.43
00:14:16.253 clat percentiles (usec):
00:14:16.253 | 1.00th=[ 3064], 5.00th=[ 4883], 10.00th=[ 6194], 20.00th=[ 7046],
00:14:16.253 | 30.00th=[ 7898], 40.00th=[ 8225], 50.00th=[ 8717], 60.00th=[ 9503],
00:14:16.253 | 70.00th=[10683], 80.00th=[11731], 90.00th=[13435], 95.00th=[14615],
00:14:16.253 | 99.00th=[16450], 99.50th=[18744], 99.90th=[19268], 99.95th=[20055],
00:14:16.253 | 99.99th=[20055]
00:14:16.253 write: IOPS=7049, BW=27.5MiB/s (28.9MB/s)(27.6MiB/1001msec); 0 zone resets
00:14:16.253 slat (usec): min=2, max=4362, avg=69.10, stdev=308.87
00:14:16.253 clat (usec): min=533, max=18780, avg=9168.36, stdev=2798.09
00:14:16.253 lat (usec): min=1027, max=18796, avg=9237.46, stdev=2807.74
00:14:16.253 clat percentiles (usec):
00:14:16.253 | 1.00th=[ 2376], 5.00th=[ 4883], 10.00th=[ 5932], 20.00th=[ 7111],
00:14:16.253 | 30.00th=[ 7635], 40.00th=[ 8225], 50.00th=[ 8848], 60.00th=[ 9503],
00:14:16.253 | 70.00th=[10683], 80.00th=[11731], 90.00th=[13042], 95.00th=[13960],
00:14:16.253 | 99.00th=[15270], 99.50th=[15664], 99.90th=[16909], 99.95th=[18744],
00:14:16.253 | 99.99th=[18744]
00:14:16.253 bw ( KiB/s): min=30008, max=30008, per=27.81%, avg=30008.00, stdev= 0.00, samples=1
00:14:16.253 iops : min= 7502, max= 7502, avg=7502.00, stdev= 0.00, samples=1
00:14:16.253 lat (usec) : 750=0.01%, 1000=0.01%
00:14:16.253 lat (msec) : 2=0.51%, 4=2.48%, 10=61.53%, 20=35.46%
00:14:16.253 cpu : usr=3.50%, sys=6.40%, ctx=1245, majf=0, minf=1
00:14:16.253 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5%
00:14:16.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:16.253 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:14:16.253 issued rwts: total=6656,7057,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:16.253 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:16.253 
00:14:16.253 Run status group 0 (all jobs):
00:14:16.253 READ: bw=102MiB/s (107MB/s), 24.0MiB/s-28.0MiB/s (25.1MB/s-29.3MB/s), io=102MiB (107MB), run=1001-1002msec
00:14:16.253 WRITE: bw=105MiB/s (111MB/s), 24.0MiB/s-29.6MiB/s (25.1MB/s-31.0MB/s), io=106MiB (111MB), run=1001-1002msec
00:14:16.253 
00:14:16.253 Disk stats (read/write):
00:14:16.253 nvme0n1: ios=5793/6144, merge=0/0, ticks=15838/16325, in_queue=32163, util=85.27%
00:14:16.253 nvme0n2: ios=5120/5369, merge=0/0, ticks=17730/16354, in_queue=34084, util=85.95%
00:14:16.253 nvme0n3: ios=5120/5377, merge=0/0, ticks=15144/14443, in_queue=29587, util=88.61%
00:14:16.253 nvme0n4: ios=5633/6144, merge=0/0, ticks=25208/24330, in_queue=49538, util=88.94%
00:14:16.253 11:37:46 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v
00:14:16.253 [global]
00:14:16.253 thread=1
00:14:16.253 invalidate=1
00:14:16.253 rw=randwrite
00:14:16.253 time_based=1
00:14:16.253 runtime=1
00:14:16.253 ioengine=libaio
00:14:16.253 direct=1
00:14:16.253 bs=4096
00:14:16.253 iodepth=128
00:14:16.253 norandommap=0
00:14:16.253 numjobs=1
00:14:16.253 
00:14:16.253 verify_dump=1
00:14:16.253 verify_backlog=512
00:14:16.253 verify_state_save=0
00:14:16.253 do_verify=1
00:14:16.253 verify=crc32c-intel
00:14:16.253 [job0]
00:14:16.253 filename=/dev/nvme0n1
00:14:16.253 [job1]
00:14:16.253 filename=/dev/nvme0n2
00:14:16.253 [job2]
00:14:16.253 filename=/dev/nvme0n3
00:14:16.253 [job3]
00:14:16.253 filename=/dev/nvme0n4
00:14:16.253 Could not set queue depth (nvme0n1)
00:14:16.253 Could not set queue depth (nvme0n2)
00:14:16.253 Could not set queue depth (nvme0n3)
00:14:16.253 Could not set queue depth (nvme0n4)
00:14:16.511 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:14:16.511 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:14:16.511 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:14:16.511 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:14:16.511 fio-3.35
00:14:16.511 Starting 4 threads
00:14:17.894 
00:14:17.894 job0: (groupid=0, jobs=1): err= 0: pid=3015508: Wed May 15 11:37:48 2024
00:14:17.894 read: IOPS=6161, BW=24.1MiB/s (25.2MB/s)(24.1MiB/1003msec)
00:14:17.894 slat (usec): min=2, max=4360, avg=74.26, stdev=338.12
00:14:17.894 clat (usec): min=1728, max=19436, avg=9704.32, stdev=3251.73
00:14:17.894 lat (usec): min=2777, max=19447, avg=9778.58, stdev=3264.61
00:14:17.894 clat percentiles (usec):
00:14:17.894 | 1.00th=[ 4424], 5.00th=[ 5080], 10.00th=[ 5669], 20.00th=[ 6849],
00:14:17.894 | 30.00th=[ 7570], 40.00th=[ 8455], 50.00th=[ 9241], 60.00th=[10290],
00:14:17.894 | 70.00th=[11207], 80.00th=[12518], 90.00th=[14484], 95.00th=[15533],
00:14:17.894 | 99.00th=[18482], 99.50th=[19006], 99.90th=[19268], 99.95th=[19530],
00:14:17.894 | 99.99th=[19530]
00:14:17.894 write: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec); 0 zone resets
00:14:17.894 slat (usec): min=2, max=4450, avg=77.01, stdev=344.18
00:14:17.894 clat (usec): min=2495, max=16595, avg=10062.41, stdev=3216.57
00:14:17.894 lat (usec): min=2498, max=16810, avg=10139.42, stdev=3227.81
00:14:17.894 clat percentiles (usec):
00:14:17.894 | 1.00th=[ 3884], 5.00th=[ 4948], 10.00th=[ 5473], 20.00th=[ 6718],
00:14:17.894 | 30.00th=[ 8225], 40.00th=[ 9372], 50.00th=[10290], 60.00th=[11207],
00:14:17.894 | 70.00th=[11994], 80.00th=[13042], 90.00th=[14353], 95.00th=[14877],
00:14:17.894 | 99.00th=[16188], 99.50th=[16450], 99.90th=[16581], 99.95th=[16581],
00:14:17.894 | 99.99th=[16581]
00:14:17.894 bw ( KiB/s): min=23848, max=28672, per=25.64%, avg=26260.00, stdev=3411.08, samples=2
00:14:17.894 iops : min= 5962, max= 7168, avg=6565.00, stdev=852.77, samples=2
00:14:17.894 lat (msec) : 2=0.01%, 4=0.71%, 10=51.15%, 20=48.14%
00:14:17.894 cpu : usr=2.79%, sys=6.19%, ctx=1181, majf=0, minf=1
00:14:17.894 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5%
00:14:17.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:17.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:14:17.894 issued rwts: total=6180,6656,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:17.894 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:17.894 job1: (groupid=0, jobs=1): err= 0: pid=3015509: Wed May 15 11:37:48 2024
00:14:17.894 read: IOPS=6204, BW=24.2MiB/s (25.4MB/s)(24.3MiB/1002msec)
00:14:17.894 slat (usec): min=2, max=5070, avg=77.55, stdev=346.77
00:14:17.894 clat (usec): min=487, max=18443, avg=10205.70, stdev=3421.18
00:14:17.894 lat (usec): min=1153, max=21446, avg=10283.25, stdev=3436.69
00:14:17.894 clat percentiles (usec):
00:14:17.894 | 1.00th=[ 3687], 5.00th=[ 5604], 10.00th=[ 6194], 20.00th=[ 6783],
00:14:17.894 | 30.00th=[ 7570], 40.00th=[ 8717], 50.00th=[10159], 60.00th=[11338],
00:14:17.894 | 70.00th=[12256], 80.00th=[13566], 90.00th=[15008], 95.00th=[15926],
00:14:17.894 | 99.00th=[17171], 99.50th=[17695], 99.90th=[18220], 99.95th=[18220],
00:14:17.894 | 99.99th=[18482]
00:14:17.894 write: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec); 0 zone resets
00:14:17.894 slat (usec): min=2, max=5268, avg=73.23, stdev=328.09
00:14:17.894 clat (usec): min=3188, max=16598, avg=9510.27, stdev=3268.98
00:14:17.894 lat (usec): min=3198, max=18245, avg=9583.50, stdev=3286.10
00:14:17.894 clat percentiles (usec):
00:14:17.894 | 1.00th=[ 4490], 5.00th=[ 5473], 10.00th=[ 5866], 20.00th=[ 6325],
00:14:17.894 | 30.00th=[ 6783], 40.00th=[ 7898], 50.00th=[ 8979], 60.00th=[10421],
00:14:17.894 | 70.00th=[11469], 80.00th=[12649], 90.00th=[14353], 95.00th=[15270],
00:14:17.894 | 99.00th=[16450], 99.50th=[16450], 99.90th=[16450], 99.95th=[16581],
00:14:17.894 | 99.99th=[16581]
00:14:17.894 bw ( KiB/s): min=24576, max=28672, per=25.99%, avg=26624.00, stdev=2896.31, samples=2
00:14:17.894 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2
00:14:17.894 lat (usec) : 500=0.01%
00:14:17.894 lat (msec) : 2=0.17%, 4=0.61%, 10=52.44%, 20=46.77%
00:14:17.894 cpu : usr=4.40%, sys=5.29%, ctx=1253, majf=0, minf=1
00:14:17.894 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5%
00:14:17.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:17.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:14:17.894 issued rwts: total=6217,6656,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:17.894 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:17.894 job2: (groupid=0, jobs=1): err= 0: pid=3015510: Wed May 15 11:37:48 2024
00:14:17.894 read: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec)
00:14:17.894 slat (usec): min=2, max=5898, avg=81.23, stdev=387.35
00:14:17.894 clat (usec): min=3750, max=20791, avg=10498.91, stdev=3022.09
00:14:17.894 lat (usec): min=4332, max=20794, avg=10580.14, stdev=3032.68
00:14:17.894 clat percentiles (usec):
00:14:17.894 | 1.00th=[ 5604], 5.00th=[ 6325], 10.00th=[ 6783], 20.00th=[ 8160],
00:14:17.894 | 30.00th=[ 8717], 40.00th=[ 9503], 50.00th=[10159], 60.00th=[10814],
00:14:17.894 | 70.00th=[11600], 80.00th=[12518], 90.00th=[14877], 95.00th=[16319],
00:14:17.894 | 99.00th=[19268], 99.50th=[20055], 99.90th=[20841], 99.95th=[20841],
00:14:17.894 | 99.99th=[20841]
00:14:17.894 write: IOPS=6210, BW=24.3MiB/s (25.4MB/s)(24.3MiB/1003msec); 0 zone resets
00:14:17.894 slat (usec): min=2, max=5609, avg=76.23, stdev=351.79
00:14:17.894 clat (usec): min=620, max=22243, avg=10015.94, stdev=3074.18
00:14:17.894 lat (usec): min=2679, max=22253, avg=10092.16, stdev=3088.37
00:14:17.894 clat percentiles (usec):
00:14:17.894 | 1.00th=[ 4752], 5.00th=[ 6390], 10.00th=[ 6980], 20.00th=[ 7767],
00:14:17.894 | 30.00th=[ 8094], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[ 9896],
00:14:17.894 | 70.00th=[11076], 80.00th=[12387], 90.00th=[14877], 95.00th=[16319],
00:14:17.894 | 99.00th=[17957], 99.50th=[18482], 99.90th=[22152], 99.95th=[22152],
00:14:17.894 | 99.99th=[22152]
00:14:17.894 bw ( KiB/s): min=21408, max=27744, per=23.99%, avg=24576.00, stdev=4480.23, samples=2
00:14:17.894 iops : min= 5352, max= 6936, avg=6144.00, stdev=1120.06, samples=2
00:14:17.894 lat (usec) : 750=0.01%
00:14:17.894 lat (msec) : 4=0.15%, 10=54.43%, 20=44.93%, 50=0.48%
00:14:17.894 cpu : usr=3.69%, sys=5.39%, ctx=1074, majf=0, minf=1
00:14:17.894 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5%
00:14:17.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:17.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:14:17.894 issued rwts: total=6144,6229,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:17.894 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:17.894 job3: (groupid=0, jobs=1): err= 0: pid=3015511: Wed May 15 11:37:48 2024
00:14:17.894 read: IOPS=6100, BW=23.8MiB/s (25.0MB/s)(23.9MiB/1003msec)
00:14:17.894 slat (usec): min=2, max=4159, avg=79.69, stdev=359.42
00:14:17.894 clat (usec): min=1459, max=18682, avg=10412.12, stdev=3013.94
00:14:17.894 lat (usec): min=4421, max=19219, avg=10491.81, stdev=3027.37
00:14:17.894 clat percentiles (usec):
00:14:17.894 | 1.00th=[ 5342], 5.00th=[ 6259], 10.00th=[ 6718], 20.00th=[ 7570],
00:14:17.894 | 30.00th=[ 8455], 40.00th=[ 9110], 50.00th=[ 9896], 60.00th=[10814],
00:14:17.894 | 70.00th=[12256], 80.00th=[13304], 90.00th=[14615], 95.00th=[15795],
00:14:17.894 | 99.00th=[17433], 99.50th=[17433], 99.90th=[18482], 99.95th=[18482],
00:14:17.894 | 99.99th=[18744]
00:14:17.894 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets
00:14:17.894 slat (usec): min=2, max=4866, avg=78.98, stdev=366.65
00:14:17.894 clat (usec): min=3930, max=16710, avg=10275.05, stdev=2807.96
00:14:17.894 lat (usec): min=3933, max=16946, avg=10354.03, stdev=2817.18
00:14:17.894 clat percentiles (usec):
00:14:17.894 | 1.00th=[ 5407], 5.00th=[ 5932], 10.00th=[ 6456], 20.00th=[ 7373],
00:14:17.894 | 30.00th=[ 8586], 40.00th=[ 9372], 50.00th=[10159], 60.00th=[11207],
00:14:17.894 | 70.00th=[11863], 80.00th=[12911], 90.00th=[14353], 95.00th=[14746],
00:14:17.894 | 99.00th=[16188], 99.50th=[16319], 99.90th=[16712], 99.95th=[16712],
00:14:17.894 | 99.99th=[16712]
00:14:17.894 bw ( KiB/s): min=20480, max=28672, per=23.99%, avg=24576.00, stdev=5792.62, samples=2
00:14:17.894 iops : min= 5120, max= 7168, avg=6144.00, stdev=1448.15, samples=2
00:14:17.894 lat (msec) : 2=0.01%, 4=0.12%, 10=49.59%, 20=50.28%
00:14:17.894 cpu : usr=2.89%, sys=6.29%, ctx=1097, majf=0, minf=1
00:14:17.894 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5%
00:14:17.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:17.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:14:17.894 issued rwts: total=6119,6144,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:17.894 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:17.894 
00:14:17.894 Run status group 0 (all jobs):
00:14:17.894 READ: bw=96.0MiB/s (101MB/s), 23.8MiB/s-24.2MiB/s (25.0MB/s-25.4MB/s), io=96.3MiB (101MB), run=1002-1003msec
00:14:17.894 WRITE: bw=100MiB/s (105MB/s), 23.9MiB/s-25.9MiB/s (25.1MB/s-27.2MB/s), io=100MiB (105MB), run=1002-1003msec
00:14:17.894 
00:14:17.894 Disk stats (read/write):
00:14:17.894 nvme0n1: ios=5682/5699, merge=0/0, ticks=14338/14774, in_queue=29112, util=85.67%
00:14:17.894 nvme0n2: ios=5120/5297, merge=0/0, ticks=14664/14028, in_queue=28692, util=86.06%
00:14:17.894 nvme0n3: ios=5120/5418, merge=0/0, ticks=14360/15280, in_queue=29640, util=88.71%
00:14:17.894 nvme0n4: ios=5120/5433, merge=0/0, ticks=14675/15800, in_queue=30475, util=89.56%
00:14:17.894 11:37:48 -- target/fio.sh@55 -- # sync
00:14:17.894 11:37:48 -- target/fio.sh@59 -- # fio_pid=3015700
00:14:17.894 11:37:48 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10
00:14:17.894 11:37:48 -- target/fio.sh@61 -- # sleep 3
00:14:17.894 [global]
00:14:17.894 thread=1
00:14:17.894 invalidate=1
00:14:17.894 rw=read
00:14:17.894 time_based=1
00:14:17.894 runtime=10
00:14:17.894 ioengine=libaio
00:14:17.894 direct=1
00:14:17.894 bs=4096
00:14:17.894 iodepth=1
00:14:17.894 norandommap=1
00:14:17.894 numjobs=1
00:14:17.894 
00:14:17.894 [job0]
00:14:17.894 filename=/dev/nvme0n1
00:14:17.894 [job1]
00:14:17.894 filename=/dev/nvme0n2
00:14:17.894 [job2]
00:14:17.894 filename=/dev/nvme0n3
00:14:17.895 [job3]
00:14:17.895 filename=/dev/nvme0n4
00:14:17.895 Could not set queue depth (nvme0n1)
00:14:17.895 Could not set queue depth (nvme0n2)
00:14:17.895 Could not set queue depth (nvme0n3)
00:14:17.895 Could not set queue depth (nvme0n4)
00:14:18.151 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:14:18.151 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:14:18.151 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:14:18.151 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:14:18.151 fio-3.35
00:14:18.151 Starting 4 threads
00:14:20.670 11:37:51 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0
00:14:20.930 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=72609792, buflen=4096
00:14:20.930 fio: pid=3015816, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error
00:14:21.211 11:37:51 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0
00:14:21.211 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=92467200, buflen=4096
00:14:21.211 fio: pid=3015815, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error
00:14:21.211 11:37:51 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:14:21.211 11:37:51 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:14:21.211 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=25075712, buflen=4096
00:14:21.211 fio: pid=3015813, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error
00:14:21.480 11:37:51 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:14:21.480 11:37:51 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:14:21.480 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=37629952, buflen=4096
00:14:21.480 fio: pid=3015814, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error
00:14:21.480 11:37:52 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:14:21.480 11:37:52 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2
00:14:21.480 
00:14:21.480 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3015813: Wed May 15 11:37:52 2024
00:14:21.480 read: IOPS=7328, BW=28.6MiB/s (30.0MB/s)(87.9MiB/3071msec)
00:14:21.480 slat (usec): min=4, max=23328, avg=12.15, stdev=246.21
00:14:21.480 clat (usec): min=50, max=551, avg=122.81, stdev=32.71
00:14:21.480 lat (usec): min=59, max=23429, avg=134.96, stdev=248.17
00:14:21.480 clat percentiles (usec):
00:14:21.480 | 1.00th=[ 63], 5.00th=[ 76], 10.00th=[ 79], 20.00th=[ 84],
00:14:21.480 | 30.00th=[ 90], 40.00th=[ 135], 50.00th=[ 141], 60.00th=[ 143],
00:14:21.480 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 155], 95.00th=[ 161],
00:14:21.480 | 99.00th=[ 178], 99.50th=[ 186], 99.90th=[ 217], 99.95th=[ 265],
00:14:21.480 | 99.99th=[ 396]
00:14:21.480 bw ( KiB/s): min=25384, max=36848, per=26.00%, avg=28025.60, stdev=4945.53, samples=5
00:14:21.480 iops : min= 6346, max= 9212, avg=7006.40, stdev=1236.38, samples=5
00:14:21.480 lat (usec) : 100=36.38%, 250=63.55%, 500=0.05%, 750=0.01%
00:14:21.480 cpu : usr=2.41%, sys=8.11%, ctx=22511, majf=0, minf=1
00:14:21.480 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:21.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:21.480 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:21.480 issued rwts: total=22507,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:21.480 latency : target=0, window=0, percentile=100.00%, depth=1
00:14:21.480 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3015814: Wed May 15 11:37:52 2024
00:14:21.480 read: IOPS=7796, BW=30.5MiB/s (31.9MB/s)(99.9MiB/3280msec)
00:14:21.480 slat (usec): min=7, max=17813, avg=12.53, stdev=233.11
00:14:21.480 clat (usec): min=45, max=403, avg=114.30, stdev=35.29
00:14:21.480 lat (usec): min=60, max=17929, avg=126.83, stdev=235.67
00:14:21.480 clat percentiles (usec):
00:14:21.480 | 1.00th=[ 58], 5.00th=[ 62], 10.00th=[ 68], 20.00th=[ 80],
00:14:21.480 | 30.00th=[ 85], 40.00th=[ 90], 50.00th=[ 133], 60.00th=[ 141],
00:14:21.480 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 153], 95.00th=[ 159],
00:14:21.480 | 99.00th=[ 180], 99.50th=[ 190], 99.90th=[ 217], 99.95th=[ 241],
00:14:21.480 | 99.99th=[ 338]
00:14:21.480 bw ( KiB/s): min=25632, max=37566, per=27.97%, avg=30141.00, stdev=5074.53, samples=6
00:14:21.480 iops : min= 6408, max= 9391, avg=7535.17, stdev=1268.49, samples=6
00:14:21.480 lat (usec) : 50=0.01%, 100=45.96%, 250=53.99%, 500=0.03%
00:14:21.480 cpu : usr=2.65%, sys=8.57%, ctx=25579, majf=0, minf=1
00:14:21.480 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:21.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:21.480 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:21.481 issued rwts: total=25572,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:21.481 latency : target=0, window=0, percentile=100.00%, depth=1
00:14:21.481 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3015815: Wed May 15 11:37:52 2024
00:14:21.481 read: IOPS=7866, BW=30.7MiB/s (32.2MB/s)(88.2MiB/2870msec)
00:14:21.481 slat (usec): min=8, max=16840, avg=10.63, stdev=153.84
00:14:21.481 clat (usec): min=59, max=450, avg=114.99, stdev=28.91
00:14:21.481 lat (usec): min=80, max=16958, avg=125.62, stdev=156.78
00:14:21.481 clat percentiles (usec):
00:14:21.481 | 1.00th=[ 79], 5.00th=[ 82], 10.00th=[ 84], 20.00th=[ 87],
00:14:21.481 | 30.00th=[ 90], 40.00th=[ 94], 50.00th=[ 102], 60.00th=[ 137],
00:14:21.481 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 149], 95.00th=[ 155],
00:14:21.481 | 99.00th=[ 178], 99.50th=[ 190], 99.90th=[ 208], 99.95th=[ 217],
00:14:21.481 | 99.99th=[ 388]
00:14:21.481 bw ( KiB/s): min=26184, max=36728, per=29.80%, avg=32123.20, stdev=4359.22, samples=5
00:14:21.481 iops : min= 6546, max= 9182, avg=8030.80, stdev=1089.80, samples=5
00:14:21.481 lat (usec) : 100=48.37%, 250=51.59%, 500=0.03%
00:14:21.481 cpu : usr=2.75%, sys=8.75%, ctx=22579, majf=0, minf=1
00:14:21.481 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:21.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:21.481 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:21.481 issued rwts: total=22576,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:21.481 latency : target=0, window=0, percentile=100.00%, depth=1
00:14:21.481 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3015816: Wed May 15 11:37:52 2024
00:14:21.481 read: IOPS=6600, BW=25.8MiB/s (27.0MB/s)(69.2MiB/2686msec)
00:14:21.481 slat (nsec): min=7385, max=38119, avg=9505.06, stdev=1412.84
00:14:21.481 clat (usec): min=76, max=449, avg=139.40, stdev=20.99
00:14:21.481 lat (usec): min=85, max=458, avg=148.91, stdev=21.06
00:14:21.481 clat percentiles (usec):
00:14:21.481 | 1.00th=[ 89], 5.00th=[ 96], 10.00th=[ 102], 20.00th=[ 135],
00:14:21.481 | 30.00th=[ 139], 40.00th=[ 141], 50.00th=[ 143], 60.00th=[ 145],
00:14:21.481 | 70.00th=[ 149], 80.00th=[ 153], 90.00th=[ 159], 95.00th=[ 165],
00:14:21.481 | 99.00th=[ 192], 99.50th=[ 200], 99.90th=[ 217], 99.95th=[ 235],
00:14:21.481 | 99.99th=[ 404]
00:14:21.481 bw ( KiB/s): min=25080, max=31648, per=24.87%, avg=26806.40, stdev=2738.43, samples=5
00:14:21.481 iops : min= 6270, max= 7912, avg=6701.60, stdev=684.61, samples=5
00:14:21.481 lat (usec) : 100=8.65%, 250=91.31%, 500=0.03%
00:14:21.481 cpu : usr=2.35%, sys=7.56%, ctx=17729, majf=0, minf=2
00:14:21.481 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:21.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:21.481 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:21.481 issued rwts: total=17728,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:21.481 latency : target=0, window=0, percentile=100.00%, depth=1
00:14:21.481 
00:14:21.481 Run status group 0 (all jobs):
00:14:21.481 READ: bw=105MiB/s (110MB/s), 25.8MiB/s-30.7MiB/s (27.0MB/s-32.2MB/s), io=345MiB (362MB), run=2686-3280msec
00:14:21.481 
00:14:21.481 Disk stats (read/write):
00:14:21.481 nvme0n1: ios=20211/0, merge=0/0, ticks=2484/0, in_queue=2484, util=93.22%
00:14:21.481 nvme0n2: ios=23329/0, merge=0/0, ticks=2636/0, in_queue=2636, util=93.71%
00:14:21.481 nvme0n3: ios=22313/0, merge=0/0, ticks=2420/0, in_queue=2420, util=95.27%
00:14:21.481 nvme0n4: ios=17232/0, merge=0/0, ticks=2276/0, in_queue=2276, util=96.45%
00:14:21.738 11:37:52 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:14:21.738 11:37:52 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3
00:14:21.995 11:37:52 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:14:21.995 11:37:52 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4
00:14:22.252 11:37:52 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:14:22.252 11:37:52 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5
00:14:22.252 11:37:52 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:14:22.252 11:37:52 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6
00:14:22.509 11:37:53 -- target/fio.sh@69 -- # fio_status=0
00:14:22.509 11:37:53 -- target/fio.sh@70 -- # wait 3015700
00:14:22.509 11:37:53 -- target/fio.sh@70 -- # fio_status=4
00:14:22.509 11:37:53 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:14:23.438 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:23.438 11:37:54 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:14:23.438 11:37:54 -- common/autotest_common.sh@1215 -- # local i=0
00:14:23.438 11:37:54 -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL
00:14:23.438 11:37:54 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:23.438 11:37:54 -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL
00:14:23.438 11:37:54 -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:23.438 11:37:54 -- common/autotest_common.sh@1227 -- # return 0
00:14:23.438 11:37:54 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']'
00:14:23.438 11:37:54 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected'
00:14:23.438 nvmf hotplug test: fio failed as expected
00:14:23.438 11:37:54 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:23.694 11:37:54 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state
00:14:23.694 11:37:54 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state
00:14:23.694 11:37:54 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state
00:14:23.694 11:37:54 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT
00:14:23.694 11:37:54 -- target/fio.sh@91 -- # nvmftestfini
00:14:23.694 11:37:54 -- nvmf/common.sh@477 -- # nvmfcleanup
00:14:23.694 11:37:54 -- nvmf/common.sh@117 -- # sync
00:14:23.694 11:37:54 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:14:23.694 11:37:54 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:14:23.694 11:37:54 -- nvmf/common.sh@120 -- # set +e
00:14:23.694 11:37:54 -- nvmf/common.sh@121 -- # for i in {1..20}
00:14:23.694 11:37:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:14:23.694 rmmod nvme_rdma
00:14:23.694 rmmod nvme_fabrics
00:14:23.694 11:37:54 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:14:23.694 11:37:54 -- nvmf/common.sh@124 -- # set -e
00:14:23.694 11:37:54 -- nvmf/common.sh@125 -- # return 0
00:14:23.694 11:37:54 -- nvmf/common.sh@478 -- # '[' -n 3013340 ']'
00:14:23.694 11:37:54 -- nvmf/common.sh@479 -- # killprocess 3013340
00:14:23.694 11:37:54 -- common/autotest_common.sh@946 -- # '[' -z 3013340 ']'
00:14:23.694 11:37:54 -- common/autotest_common.sh@950 -- # kill -0 3013340
00:14:23.694 11:37:54 -- common/autotest_common.sh@951 -- # uname
00:14:23.694 11:37:54 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:14:23.694 11:37:54 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3013340
00:14:23.951 11:37:54 -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:14:23.951 11:37:54 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:14:23.951 11:37:54 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3013340'
00:14:23.951 killing process with pid 3013340
00:14:23.951 11:37:54 -- common/autotest_common.sh@965 -- # kill 3013340
00:14:23.951 [2024-05-15 11:37:54.460042] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:14:23.951 11:37:54 -- common/autotest_common.sh@970 -- # wait 3013340
00:14:23.951 [2024-05-15 11:37:54.548656] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048
00:14:24.209 11:37:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:14:24.209 11:37:54 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]]
00:14:24.209 
00:14:24.209 real 0m25.963s
00:14:24.209 user 1m37.213s
00:14:24.209 sys 0m9.864s
00:14:24.209 11:37:54 -- common/autotest_common.sh@1122 -- # xtrace_disable
00:14:24.209 11:37:54 -- common/autotest_common.sh@10 -- # set +x
00:14:24.209 ************************************
00:14:24.209 END TEST nvmf_fio_target
00:14:24.209 ************************************
00:14:24.209 11:37:54 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:14:24.209 11:37:54 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:24.209 11:37:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:24.209 11:37:54 -- common/autotest_common.sh@10 -- # set +x 00:14:24.209 ************************************ 00:14:24.209 START TEST nvmf_bdevio 00:14:24.209 ************************************ 00:14:24.209 11:37:54 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:14:24.209 * Looking for test storage... 00:14:24.467 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:24.467 11:37:54 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:24.467 11:37:54 -- nvmf/common.sh@7 -- # uname -s 00:14:24.467 11:37:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:24.467 11:37:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:24.467 11:37:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:24.467 11:37:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:24.467 11:37:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:24.467 11:37:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:24.467 11:37:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:24.467 11:37:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:24.467 11:37:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:24.467 11:37:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:24.467 11:37:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:14:24.467 11:37:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:14:24.467 11:37:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:24.467 11:37:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:24.467 11:37:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:24.467 11:37:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:24.467 11:37:54 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:24.467 11:37:55 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:24.467 11:37:55 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:24.467 11:37:55 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:24.467 11:37:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.467 11:37:55 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.467 11:37:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.467 11:37:55 -- paths/export.sh@5 -- # export PATH 00:14:24.467 11:37:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.467 11:37:55 -- nvmf/common.sh@47 -- # : 0 00:14:24.467 11:37:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:24.467 11:37:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:24.467 11:37:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:24.467 11:37:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:24.467 11:37:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:24.467 11:37:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:24.467 11:37:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:24.467 11:37:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:24.467 11:37:55 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:24.467 11:37:55 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:24.467 11:37:55 -- target/bdevio.sh@14 -- # nvmftestinit 00:14:24.467 11:37:55 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:14:24.467 11:37:55 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:24.467 11:37:55 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:24.467 11:37:55 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:24.467 11:37:55 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:24.467 11:37:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.467 11:37:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:24.467 11:37:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.467 11:37:55 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:24.467 11:37:55 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:24.467 11:37:55 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:24.467 11:37:55 -- common/autotest_common.sh@10 -- # set +x 00:14:31.031 11:38:01 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
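
For reference, the PCI scan that follows (nvmf/common.sh matching Mellanox vendor/device IDs against the bus) can be approximated by hand. A minimal sketch, assuming lspci is available; the device-ID whitelist the harness actually uses is broader than this one-vendor filter:

    # list Mellanox (vendor 0x15b3) NICs and the net devices bound to them
    for pci in $(lspci -Dn -d 15b3: | awk '{print $1}'); do
        echo "$pci -> $(ls "/sys/bus/pci/devices/$pci/net" 2>/dev/null)"
    done
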
00:14:31.031 11:38:01 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:31.031 11:38:01 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:31.031 11:38:01 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:31.031 11:38:01 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:31.031 11:38:01 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:31.031 11:38:01 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:31.031 11:38:01 -- nvmf/common.sh@295 -- # net_devs=() 00:14:31.031 11:38:01 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:31.031 11:38:01 -- nvmf/common.sh@296 -- # e810=() 00:14:31.031 11:38:01 -- nvmf/common.sh@296 -- # local -ga e810 00:14:31.031 11:38:01 -- nvmf/common.sh@297 -- # x722=() 00:14:31.031 11:38:01 -- nvmf/common.sh@297 -- # local -ga x722 00:14:31.031 11:38:01 -- nvmf/common.sh@298 -- # mlx=() 00:14:31.031 11:38:01 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:31.031 11:38:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:31.031 11:38:01 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:31.031 11:38:01 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:31.032 11:38:01 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:31.032 11:38:01 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:31.032 11:38:01 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:31.032 11:38:01 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:31.032 11:38:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:31.032 11:38:01 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:31.032 11:38:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:31.032 11:38:01 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:31.032 11:38:01 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:31.032 11:38:01 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:31.032 11:38:01 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:31.032 11:38:01 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:31.032 11:38:01 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:31.032 11:38:01 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:31.032 11:38:01 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:31.032 11:38:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:31.032 11:38:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:14:31.032 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:14:31.032 11:38:01 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:31.032 11:38:01 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:31.032 11:38:01 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:31.032 11:38:01 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:31.032 11:38:01 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:31.032 11:38:01 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:31.032 11:38:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:31.032 11:38:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:14:31.032 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:14:31.032 11:38:01 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:31.032 11:38:01 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:31.032 11:38:01 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:31.032 11:38:01 -- 
nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:31.032 11:38:01 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:31.032 11:38:01 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:31.032 11:38:01 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:31.032 11:38:01 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:31.032 11:38:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:31.032 11:38:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.032 11:38:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:31.032 11:38:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.032 11:38:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:14:31.032 Found net devices under 0000:18:00.0: mlx_0_0 00:14:31.032 11:38:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.032 11:38:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:31.032 11:38:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.032 11:38:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:31.032 11:38:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.032 11:38:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:14:31.032 Found net devices under 0000:18:00.1: mlx_0_1 00:14:31.032 11:38:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.032 11:38:01 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:31.032 11:38:01 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:31.032 11:38:01 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:31.032 11:38:01 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:14:31.032 11:38:01 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:14:31.032 11:38:01 -- nvmf/common.sh@409 -- # rdma_device_init 00:14:31.032 11:38:01 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:14:31.032 11:38:01 -- nvmf/common.sh@58 -- # uname 00:14:31.032 11:38:01 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:31.032 11:38:01 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:31.032 11:38:01 -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:31.032 11:38:01 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:31.032 11:38:01 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:31.032 11:38:01 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:31.032 11:38:01 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:31.032 11:38:01 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:31.032 11:38:01 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:14:31.032 11:38:01 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:31.032 11:38:01 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:31.032 11:38:01 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:31.032 11:38:01 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:31.032 11:38:01 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:31.032 11:38:01 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:31.032 11:38:01 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:31.032 11:38:01 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:31.032 11:38:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:31.032 11:38:01 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:31.032 11:38:01 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:31.032 11:38:01 -- nvmf/common.sh@105 -- # continue 2 00:14:31.032 11:38:01 -- nvmf/common.sh@101 -- # 
for net_dev in "${net_devs[@]}" 00:14:31.032 11:38:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:31.032 11:38:01 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:31.032 11:38:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:31.032 11:38:01 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:31.032 11:38:01 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:31.032 11:38:01 -- nvmf/common.sh@105 -- # continue 2 00:14:31.032 11:38:01 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:31.032 11:38:01 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:31.032 11:38:01 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:31.032 11:38:01 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:31.032 11:38:01 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:31.032 11:38:01 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:31.032 11:38:01 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:31.032 11:38:01 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:31.032 11:38:01 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:31.032 28: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:31.032 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:14:31.032 altname enp24s0f0np0 00:14:31.032 altname ens785f0np0 00:14:31.032 inet 192.168.100.8/24 scope global mlx_0_0 00:14:31.032 valid_lft forever preferred_lft forever 00:14:31.032 11:38:01 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:31.032 11:38:01 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:31.032 11:38:01 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:31.032 11:38:01 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:31.032 11:38:01 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:31.032 11:38:01 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:31.032 11:38:01 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:31.032 11:38:01 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:31.032 11:38:01 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:31.032 29: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:31.032 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:14:31.032 altname enp24s0f1np1 00:14:31.032 altname ens785f1np1 00:14:31.032 inet 192.168.100.9/24 scope global mlx_0_1 00:14:31.032 valid_lft forever preferred_lft forever 00:14:31.032 11:38:01 -- nvmf/common.sh@411 -- # return 0 00:14:31.032 11:38:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:31.032 11:38:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:31.032 11:38:01 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:14:31.032 11:38:01 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:14:31.032 11:38:01 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:31.032 11:38:01 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:31.032 11:38:01 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:31.032 11:38:01 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:31.032 11:38:01 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:31.032 11:38:01 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:31.032 11:38:01 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:31.032 11:38:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:31.032 11:38:01 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:31.032 11:38:01 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:31.032 11:38:01 -- 
nvmf/common.sh@105 -- # continue 2 00:14:31.032 11:38:01 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:31.032 11:38:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:31.032 11:38:01 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:31.032 11:38:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:31.032 11:38:01 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:31.032 11:38:01 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:31.032 11:38:01 -- nvmf/common.sh@105 -- # continue 2 00:14:31.032 11:38:01 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:31.033 11:38:01 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:31.033 11:38:01 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:31.033 11:38:01 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:31.033 11:38:01 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:31.033 11:38:01 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:31.033 11:38:01 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:31.033 11:38:01 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:31.033 11:38:01 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:31.033 11:38:01 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:31.033 11:38:01 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:31.033 11:38:01 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:31.033 11:38:01 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:14:31.033 192.168.100.9' 00:14:31.033 11:38:01 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:14:31.033 192.168.100.9' 00:14:31.033 11:38:01 -- nvmf/common.sh@446 -- # head -n 1 00:14:31.033 11:38:01 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:31.033 11:38:01 -- nvmf/common.sh@447 -- # tail -n +2 00:14:31.033 11:38:01 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:14:31.033 192.168.100.9' 00:14:31.033 11:38:01 -- nvmf/common.sh@447 -- # head -n 1 00:14:31.033 11:38:01 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:31.033 11:38:01 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:14:31.033 11:38:01 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:31.033 11:38:01 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:14:31.033 11:38:01 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:14:31.033 11:38:01 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:14:31.033 11:38:01 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:31.033 11:38:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:31.033 11:38:01 -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:31.033 11:38:01 -- common/autotest_common.sh@10 -- # set +x 00:14:31.033 11:38:01 -- nvmf/common.sh@470 -- # nvmfpid=3019478 00:14:31.033 11:38:01 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:31.033 11:38:01 -- nvmf/common.sh@471 -- # waitforlisten 3019478 00:14:31.033 11:38:01 -- common/autotest_common.sh@827 -- # '[' -z 3019478 ']' 00:14:31.033 11:38:01 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.033 11:38:01 -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:31.033 11:38:01 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
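
The waitforlisten step above blocks until the freshly started target answers on its RPC socket. A condensed sketch of the same start-and-poll pattern, assuming the default /var/tmp/spdk.sock socket and the app flags used in this run:

    # start nvmf_tgt and wait for its RPC socket to respond
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
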
00:14:31.033 11:38:01 -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:31.033 11:38:01 -- common/autotest_common.sh@10 -- # set +x 00:14:31.033 [2024-05-15 11:38:01.331928] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:14:31.033 [2024-05-15 11:38:01.331981] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.033 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.033 [2024-05-15 11:38:01.402920] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:31.033 [2024-05-15 11:38:01.490702] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:31.033 [2024-05-15 11:38:01.490748] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:31.033 [2024-05-15 11:38:01.490758] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:31.033 [2024-05-15 11:38:01.490768] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:31.033 [2024-05-15 11:38:01.490776] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:31.033 [2024-05-15 11:38:01.490854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:31.033 [2024-05-15 11:38:01.490953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:31.033 [2024-05-15 11:38:01.491062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:31.033 [2024-05-15 11:38:01.491077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:31.600 11:38:02 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:31.600 11:38:02 -- common/autotest_common.sh@860 -- # return 0 00:14:31.600 11:38:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:31.601 11:38:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:31.601 11:38:02 -- common/autotest_common.sh@10 -- # set +x 00:14:31.601 11:38:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:31.601 11:38:02 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:31.601 11:38:02 -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.601 11:38:02 -- common/autotest_common.sh@10 -- # set +x 00:14:31.601 [2024-05-15 11:38:02.231512] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xe217e0/0xe25cd0) succeed. 00:14:31.601 [2024-05-15 11:38:02.242151] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe22e20/0xe67360) succeed. 
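
The RPC sequence that follows is the standard RDMA target bring-up plus the matching initiator attach. Collapsed into plain rpc.py calls, with the transport options, NQN, and listener address exactly as they appear in this run (script paths abbreviated):

    # target side: transport, backing bdev, subsystem, namespace, listener
    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # initiator side: equivalent to the JSON that gen_nvmf_target_json feeds bdevio below
    rpc.py bdev_nvme_attach_controller -b Nvme1 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1
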
00:14:31.859 11:38:02 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.859 11:38:02 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:31.859 11:38:02 -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.859 11:38:02 -- common/autotest_common.sh@10 -- # set +x 00:14:31.859 Malloc0 00:14:31.859 11:38:02 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.859 11:38:02 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:31.859 11:38:02 -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.859 11:38:02 -- common/autotest_common.sh@10 -- # set +x 00:14:31.859 11:38:02 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.859 11:38:02 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:31.859 11:38:02 -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.859 11:38:02 -- common/autotest_common.sh@10 -- # set +x 00:14:31.859 11:38:02 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.859 11:38:02 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:31.859 11:38:02 -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.859 11:38:02 -- common/autotest_common.sh@10 -- # set +x 00:14:31.859 [2024-05-15 11:38:02.421069] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:31.859 [2024-05-15 11:38:02.421461] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:31.859 11:38:02 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.859 11:38:02 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:31.859 11:38:02 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:31.859 11:38:02 -- nvmf/common.sh@521 -- # config=() 00:14:31.859 11:38:02 -- nvmf/common.sh@521 -- # local subsystem config 00:14:31.859 11:38:02 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:31.859 11:38:02 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:31.859 { 00:14:31.859 "params": { 00:14:31.859 "name": "Nvme$subsystem", 00:14:31.859 "trtype": "$TEST_TRANSPORT", 00:14:31.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:31.859 "adrfam": "ipv4", 00:14:31.859 "trsvcid": "$NVMF_PORT", 00:14:31.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:31.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:31.859 "hdgst": ${hdgst:-false}, 00:14:31.859 "ddgst": ${ddgst:-false} 00:14:31.860 }, 00:14:31.860 "method": "bdev_nvme_attach_controller" 00:14:31.860 } 00:14:31.860 EOF 00:14:31.860 )") 00:14:31.860 11:38:02 -- nvmf/common.sh@543 -- # cat 00:14:31.860 11:38:02 -- nvmf/common.sh@545 -- # jq . 
00:14:31.860 11:38:02 -- nvmf/common.sh@546 -- # IFS=, 00:14:31.860 11:38:02 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:31.860 "params": { 00:14:31.860 "name": "Nvme1", 00:14:31.860 "trtype": "rdma", 00:14:31.860 "traddr": "192.168.100.8", 00:14:31.860 "adrfam": "ipv4", 00:14:31.860 "trsvcid": "4420", 00:14:31.860 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:31.860 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:31.860 "hdgst": false, 00:14:31.860 "ddgst": false 00:14:31.860 }, 00:14:31.860 "method": "bdev_nvme_attach_controller" 00:14:31.860 }' 00:14:31.860 [2024-05-15 11:38:02.475020] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:14:31.860 [2024-05-15 11:38:02.475089] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3019676 ] 00:14:31.860 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.860 [2024-05-15 11:38:02.548916] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:32.118 [2024-05-15 11:38:02.637404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.118 [2024-05-15 11:38:02.637489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:32.119 [2024-05-15 11:38:02.637492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.119 I/O targets: 00:14:32.119 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:32.119 00:14:32.119 00:14:32.119 CUnit - A unit testing framework for C - Version 2.1-3 00:14:32.119 http://cunit.sourceforge.net/ 00:14:32.119 00:14:32.119 00:14:32.119 Suite: bdevio tests on: Nvme1n1 00:14:32.119 Test: blockdev write read block ...passed 00:14:32.119 Test: blockdev write zeroes read block ...passed 00:14:32.119 Test: blockdev write zeroes read no split ...passed 00:14:32.119 Test: blockdev write zeroes read split ...passed 00:14:32.119 Test: blockdev write zeroes read split partial ...passed 00:14:32.119 Test: blockdev reset ...[2024-05-15 11:38:02.856757] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:32.119 [2024-05-15 11:38:02.879698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:14:32.377 [2024-05-15 11:38:02.906294] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:32.377 passed 00:14:32.377 Test: blockdev write read 8 blocks ...passed 00:14:32.377 Test: blockdev write read size > 128k ...passed 00:14:32.377 Test: blockdev write read invalid size ...passed 00:14:32.377 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:32.377 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:32.377 Test: blockdev write read max offset ...passed 00:14:32.377 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:32.377 Test: blockdev writev readv 8 blocks ...passed 00:14:32.377 Test: blockdev writev readv 30 x 1block ...passed 00:14:32.377 Test: blockdev writev readv block ...passed 00:14:32.377 Test: blockdev writev readv size > 128k ...passed 00:14:32.377 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:32.377 Test: blockdev comparev and writev ...[2024-05-15 11:38:02.909272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:32.377 [2024-05-15 11:38:02.909304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:32.377 [2024-05-15 11:38:02.909317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:32.378 [2024-05-15 11:38:02.909327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:32.378 [2024-05-15 11:38:02.909486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:32.378 [2024-05-15 11:38:02.909498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:32.378 [2024-05-15 11:38:02.909508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:32.378 [2024-05-15 11:38:02.909518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:32.378 [2024-05-15 11:38:02.909698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:32.378 [2024-05-15 11:38:02.909710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:32.378 [2024-05-15 11:38:02.909720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:32.378 [2024-05-15 11:38:02.909731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:32.378 [2024-05-15 11:38:02.909894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:32.378 [2024-05-15 11:38:02.909906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:32.378 [2024-05-15 11:38:02.909917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:32.378 [2024-05-15 11:38:02.909928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:32.378 passed 00:14:32.378 Test: blockdev nvme passthru rw ...passed 00:14:32.378 Test: blockdev nvme passthru vendor specific ...[2024-05-15 11:38:02.910197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:14:32.378 [2024-05-15 11:38:02.910210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:32.378 [2024-05-15 11:38:02.910255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:14:32.378 [2024-05-15 11:38:02.910267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:32.378 [2024-05-15 11:38:02.910313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:14:32.378 [2024-05-15 11:38:02.910324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:32.378 [2024-05-15 11:38:02.910371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:14:32.378 [2024-05-15 11:38:02.910383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:32.378 passed 00:14:32.378 Test: blockdev nvme admin passthru ...passed 00:14:32.378 Test: blockdev copy ...passed 00:14:32.378 00:14:32.378 Run Summary: Type Total Ran Passed Failed Inactive 00:14:32.378 suites 1 1 n/a 0 0 00:14:32.378 tests 23 23 23 0 0 00:14:32.378 asserts 152 152 152 0 n/a 00:14:32.378 00:14:32.378 Elapsed time = 0.175 seconds 00:14:32.378 11:38:03 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:32.378 11:38:03 -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.635 11:38:03 -- common/autotest_common.sh@10 -- # set +x 00:14:32.635 11:38:03 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.635 11:38:03 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:32.635 11:38:03 -- target/bdevio.sh@30 -- # nvmftestfini 00:14:32.635 11:38:03 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:32.635 11:38:03 -- nvmf/common.sh@117 -- # sync 00:14:32.635 11:38:03 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:32.635 11:38:03 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:32.635 11:38:03 -- nvmf/common.sh@120 -- # set +e 00:14:32.635 11:38:03 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:32.635 11:38:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:32.635 rmmod nvme_rdma 00:14:32.635 rmmod nvme_fabrics 00:14:32.635 11:38:03 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:32.635 11:38:03 -- nvmf/common.sh@124 -- # set -e 00:14:32.635 11:38:03 -- nvmf/common.sh@125 -- # return 0 00:14:32.635 11:38:03 -- nvmf/common.sh@478 -- # '[' -n 3019478 ']' 00:14:32.635 11:38:03 -- nvmf/common.sh@479 -- # killprocess 3019478 00:14:32.635 11:38:03 -- common/autotest_common.sh@946 -- # '[' -z 3019478 ']' 00:14:32.635 11:38:03 -- common/autotest_common.sh@950 -- # kill -0 3019478 00:14:32.635 11:38:03 -- common/autotest_common.sh@951 -- # uname 00:14:32.635 11:38:03 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:32.635 11:38:03 -- 
common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3019478 00:14:32.635 11:38:03 -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:14:32.635 11:38:03 -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:14:32.635 11:38:03 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3019478' 00:14:32.635 killing process with pid 3019478 00:14:32.635 11:38:03 -- common/autotest_common.sh@965 -- # kill 3019478 00:14:32.635 [2024-05-15 11:38:03.266899] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:32.635 11:38:03 -- common/autotest_common.sh@970 -- # wait 3019478 00:14:32.635 [2024-05-15 11:38:03.351289] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:14:32.894 11:38:03 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:32.894 11:38:03 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:14:32.894 00:14:32.894 real 0m8.738s 00:14:32.894 user 0m11.216s 00:14:32.894 sys 0m5.451s 00:14:32.894 11:38:03 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:32.894 11:38:03 -- common/autotest_common.sh@10 -- # set +x 00:14:32.894 ************************************ 00:14:32.894 END TEST nvmf_bdevio 00:14:32.894 ************************************ 00:14:32.894 11:38:03 -- nvmf/nvmf.sh@58 -- # '[' rdma = tcp ']' 00:14:32.894 11:38:03 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:14:32.894 11:38:03 -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:14:32.894 11:38:03 -- nvmf/nvmf.sh@71 -- # '[' rdma = tcp ']' 00:14:32.894 11:38:03 -- nvmf/nvmf.sh@77 -- # [[ rdma == \r\d\m\a ]] 00:14:32.894 11:38:03 -- nvmf/nvmf.sh@78 -- # run_test nvmf_device_removal test/nvmf/target/device_removal.sh --transport=rdma 00:14:32.894 11:38:03 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:32.894 11:38:03 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:32.894 11:38:03 -- common/autotest_common.sh@10 -- # set +x 00:14:33.153 ************************************ 00:14:33.153 START TEST nvmf_device_removal 00:14:33.153 ************************************ 00:14:33.153 11:38:03 -- common/autotest_common.sh@1121 -- # test/nvmf/target/device_removal.sh --transport=rdma 00:14:33.153 * Looking for test storage... 
00:14:33.153 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:33.153 11:38:03 -- target/device_removal.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:14:33.153 11:38:03 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:14:33.153 11:38:03 -- common/autotest_common.sh@34 -- # set -e 00:14:33.153 11:38:03 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:14:33.153 11:38:03 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:14:33.153 11:38:03 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:14:33.153 11:38:03 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:14:33.153 11:38:03 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:14:33.153 11:38:03 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:14:33.153 11:38:03 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:14:33.153 11:38:03 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:14:33.153 11:38:03 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:14:33.153 11:38:03 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:14:33.153 11:38:03 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:14:33.153 11:38:03 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:14:33.153 11:38:03 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:14:33.154 11:38:03 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:14:33.154 11:38:03 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:14:33.154 11:38:03 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:14:33.154 11:38:03 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:14:33.154 11:38:03 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:14:33.154 11:38:03 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:14:33.154 11:38:03 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:14:33.154 11:38:03 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:14:33.154 11:38:03 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:14:33.154 11:38:03 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:14:33.154 11:38:03 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:14:33.154 11:38:03 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:14:33.154 11:38:03 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:14:33.154 11:38:03 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:14:33.154 11:38:03 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:14:33.154 11:38:03 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:14:33.154 11:38:03 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:14:33.154 11:38:03 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:14:33.154 11:38:03 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:14:33.154 11:38:03 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:14:33.154 11:38:03 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:14:33.154 11:38:03 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:14:33.154 11:38:03 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:14:33.154 11:38:03 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:14:33.154 11:38:03 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:14:33.154 11:38:03 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 
00:14:33.154 11:38:03 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:14:33.154 11:38:03 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:14:33.154 11:38:03 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:14:33.154 11:38:03 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:14:33.154 11:38:03 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:14:33.154 11:38:03 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:14:33.154 11:38:03 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:14:33.154 11:38:03 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:14:33.154 11:38:03 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:14:33.154 11:38:03 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:14:33.154 11:38:03 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:14:33.154 11:38:03 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:14:33.154 11:38:03 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:14:33.154 11:38:03 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:14:33.154 11:38:03 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:14:33.154 11:38:03 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:14:33.154 11:38:03 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:14:33.154 11:38:03 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:14:33.154 11:38:03 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:14:33.154 11:38:03 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:14:33.154 11:38:03 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:14:33.154 11:38:03 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:14:33.154 11:38:03 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:14:33.154 11:38:03 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:14:33.154 11:38:03 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:14:33.154 11:38:03 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:14:33.154 11:38:03 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:14:33.154 11:38:03 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:14:33.154 11:38:03 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:14:33.154 11:38:03 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:14:33.154 11:38:03 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:14:33.154 11:38:03 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:14:33.154 11:38:03 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:14:33.154 11:38:03 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:14:33.154 11:38:03 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:14:33.154 11:38:03 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:14:33.154 11:38:03 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:14:33.154 11:38:03 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:14:33.154 11:38:03 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:14:33.154 11:38:03 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:14:33.154 11:38:03 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:14:33.154 11:38:03 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:14:33.154 11:38:03 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:14:33.154 11:38:03 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:14:33.154 11:38:03 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:14:33.154 11:38:03 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:14:33.154 11:38:03 -- common/build_config.sh@81 -- # 
CONFIG_CROSS_PREFIX= 00:14:33.154 11:38:03 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:14:33.154 11:38:03 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:14:33.154 11:38:03 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:14:33.154 11:38:03 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:14:33.154 11:38:03 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:14:33.154 11:38:03 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:14:33.154 11:38:03 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:14:33.154 11:38:03 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:14:33.154 11:38:03 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:14:33.154 11:38:03 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:14:33.154 11:38:03 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:14:33.154 11:38:03 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:14:33.154 11:38:03 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:14:33.154 11:38:03 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:14:33.154 11:38:03 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:14:33.154 11:38:03 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:14:33.154 11:38:03 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:14:33.154 #define SPDK_CONFIG_H 00:14:33.154 #define SPDK_CONFIG_APPS 1 00:14:33.154 #define SPDK_CONFIG_ARCH native 00:14:33.154 #undef SPDK_CONFIG_ASAN 00:14:33.154 #undef SPDK_CONFIG_AVAHI 00:14:33.154 #undef SPDK_CONFIG_CET 00:14:33.154 #define SPDK_CONFIG_COVERAGE 1 00:14:33.154 #define SPDK_CONFIG_CROSS_PREFIX 00:14:33.154 #undef SPDK_CONFIG_CRYPTO 00:14:33.154 #undef SPDK_CONFIG_CRYPTO_MLX5 00:14:33.154 #undef SPDK_CONFIG_CUSTOMOCF 00:14:33.154 #undef SPDK_CONFIG_DAOS 00:14:33.154 #define SPDK_CONFIG_DAOS_DIR 00:14:33.154 #define SPDK_CONFIG_DEBUG 1 00:14:33.154 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:14:33.154 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:14:33.154 #define SPDK_CONFIG_DPDK_INC_DIR 00:14:33.154 #define SPDK_CONFIG_DPDK_LIB_DIR 00:14:33.154 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:14:33.154 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:14:33.154 #define SPDK_CONFIG_EXAMPLES 1 00:14:33.154 #undef SPDK_CONFIG_FC 00:14:33.154 #define SPDK_CONFIG_FC_PATH 00:14:33.154 #define SPDK_CONFIG_FIO_PLUGIN 1 00:14:33.154 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:14:33.154 #undef SPDK_CONFIG_FUSE 00:14:33.154 #undef SPDK_CONFIG_FUZZER 00:14:33.154 #define SPDK_CONFIG_FUZZER_LIB 00:14:33.154 #undef SPDK_CONFIG_GOLANG 00:14:33.154 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:14:33.154 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:14:33.154 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:14:33.154 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:14:33.154 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:14:33.154 #undef SPDK_CONFIG_HAVE_LIBBSD 00:14:33.154 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 
00:14:33.154 #define SPDK_CONFIG_IDXD 1 00:14:33.154 #undef SPDK_CONFIG_IDXD_KERNEL 00:14:33.154 #undef SPDK_CONFIG_IPSEC_MB 00:14:33.154 #define SPDK_CONFIG_IPSEC_MB_DIR 00:14:33.154 #define SPDK_CONFIG_ISAL 1 00:14:33.154 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:14:33.154 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:14:33.154 #define SPDK_CONFIG_LIBDIR 00:14:33.154 #undef SPDK_CONFIG_LTO 00:14:33.154 #define SPDK_CONFIG_MAX_LCORES 00:14:33.154 #define SPDK_CONFIG_NVME_CUSE 1 00:14:33.154 #undef SPDK_CONFIG_OCF 00:14:33.154 #define SPDK_CONFIG_OCF_PATH 00:14:33.154 #define SPDK_CONFIG_OPENSSL_PATH 00:14:33.154 #undef SPDK_CONFIG_PGO_CAPTURE 00:14:33.154 #define SPDK_CONFIG_PGO_DIR 00:14:33.154 #undef SPDK_CONFIG_PGO_USE 00:14:33.154 #define SPDK_CONFIG_PREFIX /usr/local 00:14:33.154 #undef SPDK_CONFIG_RAID5F 00:14:33.154 #undef SPDK_CONFIG_RBD 00:14:33.154 #define SPDK_CONFIG_RDMA 1 00:14:33.154 #define SPDK_CONFIG_RDMA_PROV verbs 00:14:33.154 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:14:33.154 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:14:33.154 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:14:33.154 #define SPDK_CONFIG_SHARED 1 00:14:33.154 #undef SPDK_CONFIG_SMA 00:14:33.154 #define SPDK_CONFIG_TESTS 1 00:14:33.154 #undef SPDK_CONFIG_TSAN 00:14:33.154 #define SPDK_CONFIG_UBLK 1 00:14:33.154 #define SPDK_CONFIG_UBSAN 1 00:14:33.154 #undef SPDK_CONFIG_UNIT_TESTS 00:14:33.154 #undef SPDK_CONFIG_URING 00:14:33.154 #define SPDK_CONFIG_URING_PATH 00:14:33.154 #undef SPDK_CONFIG_URING_ZNS 00:14:33.154 #undef SPDK_CONFIG_USDT 00:14:33.154 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:14:33.154 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:14:33.154 #undef SPDK_CONFIG_VFIO_USER 00:14:33.154 #define SPDK_CONFIG_VFIO_USER_DIR 00:14:33.154 #define SPDK_CONFIG_VHOST 1 00:14:33.154 #define SPDK_CONFIG_VIRTIO 1 00:14:33.154 #undef SPDK_CONFIG_VTUNE 00:14:33.154 #define SPDK_CONFIG_VTUNE_DIR 00:14:33.154 #define SPDK_CONFIG_WERROR 1 00:14:33.154 #define SPDK_CONFIG_WPDK_DIR 00:14:33.154 #undef SPDK_CONFIG_XNVME 00:14:33.154 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:14:33.154 11:38:03 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:14:33.154 11:38:03 -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:33.154 11:38:03 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:33.155 11:38:03 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:33.155 11:38:03 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:33.155 11:38:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.155 11:38:03 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.155 11:38:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.155 11:38:03 -- paths/export.sh@5 -- # export PATH 00:14:33.155 11:38:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.155 11:38:03 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:14:33.155 11:38:03 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:14:33.155 11:38:03 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:14:33.155 11:38:03 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:14:33.155 11:38:03 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:14:33.155 11:38:03 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:14:33.155 11:38:03 -- pm/common@64 -- # TEST_TAG=N/A 00:14:33.155 11:38:03 -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:14:33.155 11:38:03 -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:14:33.155 11:38:03 -- pm/common@68 -- # uname -s 00:14:33.155 11:38:03 -- pm/common@68 -- # PM_OS=Linux 00:14:33.155 11:38:03 -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:14:33.155 11:38:03 -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:14:33.155 11:38:03 -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:14:33.155 11:38:03 -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:14:33.155 11:38:03 -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:14:33.155 11:38:03 -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:14:33.155 11:38:03 -- pm/common@76 -- # SUDO[0]= 00:14:33.155 11:38:03 -- pm/common@76 -- # SUDO[1]='sudo -E' 00:14:33.155 11:38:03 -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load 
collect-vmstat) 00:14:33.155 11:38:03 -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:14:33.155 11:38:03 -- pm/common@81 -- # [[ Linux == Linux ]] 00:14:33.155 11:38:03 -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:14:33.155 11:38:03 -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:14:33.155 11:38:03 -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:14:33.155 11:38:03 -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:14:33.155 11:38:03 -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:14:33.155 11:38:03 -- common/autotest_common.sh@57 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:14:33.155 11:38:03 -- common/autotest_common.sh@61 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:14:33.155 11:38:03 -- common/autotest_common.sh@63 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:14:33.155 11:38:03 -- common/autotest_common.sh@65 -- # : 1 00:14:33.155 11:38:03 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:14:33.155 11:38:03 -- common/autotest_common.sh@67 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:14:33.155 11:38:03 -- common/autotest_common.sh@69 -- # : 00:14:33.155 11:38:03 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:14:33.155 11:38:03 -- common/autotest_common.sh@71 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:14:33.155 11:38:03 -- common/autotest_common.sh@73 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:14:33.155 11:38:03 -- common/autotest_common.sh@75 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:14:33.155 11:38:03 -- common/autotest_common.sh@77 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:14:33.155 11:38:03 -- common/autotest_common.sh@79 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:14:33.155 11:38:03 -- common/autotest_common.sh@81 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:14:33.155 11:38:03 -- common/autotest_common.sh@83 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:14:33.155 11:38:03 -- common/autotest_common.sh@85 -- # : 1 00:14:33.155 11:38:03 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:14:33.155 11:38:03 -- common/autotest_common.sh@87 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:14:33.155 11:38:03 -- common/autotest_common.sh@89 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:14:33.155 11:38:03 -- common/autotest_common.sh@91 -- # : 1 00:14:33.155 11:38:03 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:14:33.155 11:38:03 -- common/autotest_common.sh@93 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:14:33.155 11:38:03 -- common/autotest_common.sh@95 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:14:33.155 11:38:03 -- common/autotest_common.sh@97 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:14:33.155 
11:38:03 -- common/autotest_common.sh@99 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:14:33.155 11:38:03 -- common/autotest_common.sh@101 -- # : rdma 00:14:33.155 11:38:03 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:14:33.155 11:38:03 -- common/autotest_common.sh@103 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:14:33.155 11:38:03 -- common/autotest_common.sh@105 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:14:33.155 11:38:03 -- common/autotest_common.sh@107 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:14:33.155 11:38:03 -- common/autotest_common.sh@109 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:14:33.155 11:38:03 -- common/autotest_common.sh@111 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:14:33.155 11:38:03 -- common/autotest_common.sh@113 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:14:33.155 11:38:03 -- common/autotest_common.sh@115 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:14:33.155 11:38:03 -- common/autotest_common.sh@117 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:14:33.155 11:38:03 -- common/autotest_common.sh@119 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:14:33.155 11:38:03 -- common/autotest_common.sh@121 -- # : 1 00:14:33.155 11:38:03 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:14:33.155 11:38:03 -- common/autotest_common.sh@123 -- # : 00:14:33.155 11:38:03 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:14:33.155 11:38:03 -- common/autotest_common.sh@125 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:14:33.155 11:38:03 -- common/autotest_common.sh@127 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:14:33.155 11:38:03 -- common/autotest_common.sh@129 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:14:33.155 11:38:03 -- common/autotest_common.sh@131 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:14:33.155 11:38:03 -- common/autotest_common.sh@133 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:14:33.155 11:38:03 -- common/autotest_common.sh@135 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:14:33.155 11:38:03 -- common/autotest_common.sh@137 -- # : 00:14:33.155 11:38:03 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:14:33.155 11:38:03 -- common/autotest_common.sh@139 -- # : true 00:14:33.155 11:38:03 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:14:33.155 11:38:03 -- common/autotest_common.sh@141 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:14:33.155 11:38:03 -- common/autotest_common.sh@143 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:14:33.155 11:38:03 -- common/autotest_common.sh@145 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 
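The paired "-- # : 0" / "-- # export SPDK_TEST_..." entries running through this stretch of the trace are the xtrace of bash's default-assignment idiom: autotest_common.sh gives each test knob a value only if the caller left it unset, then exports it for child processes. A minimal sketch of the pattern (the variable name is illustrative, not taken from the log):

: "${SPDK_TEST_EXAMPLE:=0}"   # ':' is a no-op command; the expansion assigns 0 only when the variable is unset
export SPDK_TEST_EXAMPLE      # make the knob visible to spawned test tools

When the job's environment already sets a knob, the ":=" expansion leaves it untouched, which is why the trace shows "-- # : 1" for the flags this run enables (SPDK_RUN_UBSAN above, SPDK_TEST_NVMF earlier) and "-- # : 0" for the rest.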
00:14:33.155 11:38:03 -- common/autotest_common.sh@147 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:14:33.155 11:38:03 -- common/autotest_common.sh@149 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:14:33.155 11:38:03 -- common/autotest_common.sh@151 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:14:33.155 11:38:03 -- common/autotest_common.sh@153 -- # : mlx5 00:14:33.155 11:38:03 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:14:33.155 11:38:03 -- common/autotest_common.sh@155 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:14:33.155 11:38:03 -- common/autotest_common.sh@157 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:14:33.155 11:38:03 -- common/autotest_common.sh@159 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:14:33.155 11:38:03 -- common/autotest_common.sh@161 -- # : 0 00:14:33.155 11:38:03 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:14:33.155 11:38:03 -- common/autotest_common.sh@163 -- # : 0 00:14:33.156 11:38:03 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:14:33.156 11:38:03 -- common/autotest_common.sh@166 -- # : 00:14:33.156 11:38:03 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:14:33.156 11:38:03 -- common/autotest_common.sh@168 -- # : 0 00:14:33.156 11:38:03 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:14:33.156 11:38:03 -- common/autotest_common.sh@170 -- # : 0 00:14:33.156 11:38:03 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:14:33.156 11:38:03 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:14:33.156 11:38:03 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:14:33.156 11:38:03 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:14:33.156 11:38:03 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:14:33.156 11:38:03 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:33.156 11:38:03 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:33.156 11:38:03 -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:33.156 11:38:03 -- 
common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:33.156 11:38:03 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:14:33.156 11:38:03 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:14:33.156 11:38:03 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:14:33.156 11:38:03 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:14:33.156 11:38:03 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:14:33.156 11:38:03 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:14:33.156 11:38:03 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:33.156 11:38:03 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:33.156 11:38:03 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:33.156 11:38:03 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:33.156 11:38:03 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:14:33.156 11:38:03 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:14:33.156 11:38:03 -- common/autotest_common.sh@199 -- # cat 00:14:33.156 11:38:03 -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:14:33.156 11:38:03 -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:33.156 11:38:03 -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:33.156 11:38:03 -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:33.156 11:38:03 -- common/autotest_common.sh@239 -- # 
DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:33.156 11:38:03 -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:14:33.156 11:38:03 -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:14:33.156 11:38:03 -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:14:33.156 11:38:03 -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:14:33.156 11:38:03 -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:14:33.156 11:38:03 -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:14:33.156 11:38:03 -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:33.156 11:38:03 -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:33.156 11:38:03 -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:33.156 11:38:03 -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:33.156 11:38:03 -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:14:33.156 11:38:03 -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:14:33.156 11:38:03 -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:33.156 11:38:03 -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:33.156 11:38:03 -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:14:33.156 11:38:03 -- common/autotest_common.sh@262 -- # export valgrind= 00:14:33.156 11:38:03 -- common/autotest_common.sh@262 -- # valgrind= 00:14:33.156 11:38:03 -- common/autotest_common.sh@268 -- # uname -s 00:14:33.156 11:38:03 -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:14:33.156 11:38:03 -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:14:33.156 11:38:03 -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:14:33.156 11:38:03 -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:14:33.156 11:38:03 -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:14:33.156 11:38:03 -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:14:33.156 11:38:03 -- common/autotest_common.sh@278 -- # MAKE=make 00:14:33.156 11:38:03 -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j72 00:14:33.156 11:38:03 -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:14:33.156 11:38:03 -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:14:33.156 11:38:03 -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:14:33.156 11:38:03 -- common/autotest_common.sh@298 -- # TEST_MODE= 00:14:33.156 11:38:03 -- common/autotest_common.sh@299 -- # for i in "$@" 00:14:33.156 11:38:03 -- common/autotest_common.sh@300 -- # case "$i" in 00:14:33.156 11:38:03 -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=rdma 00:14:33.156 11:38:03 -- common/autotest_common.sh@317 -- # [[ -z 3019910 ]] 00:14:33.156 11:38:03 -- common/autotest_common.sh@317 -- # kill -0 3019910 00:14:33.415 11:38:03 -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:14:33.415 11:38:03 -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:14:33.415 11:38:03 -- common/autotest_common.sh@329 -- # local 
requested_size=2147483648 00:14:33.415 11:38:03 -- common/autotest_common.sh@330 -- # local mount target_dir 00:14:33.415 11:38:03 -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:14:33.415 11:38:03 -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:14:33.415 11:38:03 -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:14:33.415 11:38:03 -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:14:33.415 11:38:03 -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.rp6iKO 00:14:33.415 11:38:03 -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:14:33.415 11:38:03 -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:14:33.415 11:38:03 -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:14:33.415 11:38:03 -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.rp6iKO/tests/target /tmp/spdk.rp6iKO 00:14:33.415 11:38:03 -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:14:33.415 11:38:03 -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:14:33.415 11:38:03 -- common/autotest_common.sh@326 -- # df -T 00:14:33.415 11:38:03 -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:14:33.415 11:38:03 -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:14:33.415 11:38:03 -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:14:33.415 11:38:03 -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:14:33.415 11:38:03 -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:14:33.415 11:38:03 -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:14:33.415 11:38:03 -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:14:33.415 11:38:03 -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:14:33.415 11:38:03 -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:14:33.415 11:38:03 -- common/autotest_common.sh@361 -- # avails["$mount"]=966955008 00:14:33.415 11:38:03 -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:14:33.415 11:38:03 -- common/autotest_common.sh@362 -- # uses["$mount"]=4317474816 00:14:33.415 11:38:03 -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:14:33.415 11:38:03 -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:14:33.415 11:38:03 -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:14:33.415 11:38:03 -- common/autotest_common.sh@361 -- # avails["$mount"]=84889227264 00:14:33.415 11:38:03 -- common/autotest_common.sh@361 -- # sizes["$mount"]=94508605440 00:14:33.415 11:38:03 -- common/autotest_common.sh@362 -- # uses["$mount"]=9619378176 00:14:33.415 11:38:03 -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:14:33.415 11:38:03 -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:14:33.415 11:38:03 -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:14:33.415 11:38:03 -- common/autotest_common.sh@361 -- # avails["$mount"]=47241007104 00:14:33.415 11:38:03 -- common/autotest_common.sh@361 -- # sizes["$mount"]=47254302720 00:14:33.415 11:38:03 -- common/autotest_common.sh@362 -- # uses["$mount"]=13295616 00:14:33.415 11:38:03 -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:14:33.415 11:38:03 -- 
common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:14:33.415 11:38:03 -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:14:33.415 11:38:03 -- common/autotest_common.sh@361 -- # avails["$mount"]=18878906368 00:14:33.415 11:38:03 -- common/autotest_common.sh@361 -- # sizes["$mount"]=18901721088 00:14:33.415 11:38:03 -- common/autotest_common.sh@362 -- # uses["$mount"]=22814720 00:14:33.415 11:38:03 -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:14:33.415 11:38:03 -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:14:33.415 11:38:03 -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:14:33.415 11:38:03 -- common/autotest_common.sh@361 -- # avails["$mount"]=47253884928 00:14:33.415 11:38:03 -- common/autotest_common.sh@361 -- # sizes["$mount"]=47254302720 00:14:33.415 11:38:03 -- common/autotest_common.sh@362 -- # uses["$mount"]=417792 00:14:33.415 11:38:03 -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:14:33.415 11:38:03 -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:14:33.415 11:38:03 -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:14:33.415 11:38:03 -- common/autotest_common.sh@361 -- # avails["$mount"]=9450856448 00:14:33.415 11:38:03 -- common/autotest_common.sh@361 -- # sizes["$mount"]=9450860544 00:14:33.415 11:38:03 -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:14:33.415 11:38:03 -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:14:33.415 11:38:03 -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:14:33.415 * Looking for test storage... 00:14:33.415 11:38:03 -- common/autotest_common.sh@367 -- # local target_space new_size 00:14:33.415 11:38:03 -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:14:33.415 11:38:03 -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:33.415 11:38:03 -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:14:33.415 11:38:03 -- common/autotest_common.sh@371 -- # mount=/ 00:14:33.415 11:38:03 -- common/autotest_common.sh@373 -- # target_space=84889227264 00:14:33.415 11:38:03 -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:14:33.415 11:38:03 -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:14:33.415 11:38:03 -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:14:33.415 11:38:03 -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:14:33.415 11:38:03 -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:14:33.415 11:38:03 -- common/autotest_common.sh@380 -- # new_size=11833970688 00:14:33.415 11:38:03 -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:14:33.415 11:38:03 -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:33.415 11:38:03 -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:33.415 11:38:03 -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:33.415 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:33.415 11:38:03 -- common/autotest_common.sh@388 -- # return 0 00:14:33.415 11:38:03 -- 
common/autotest_common.sh@1678 -- # set -o errtrace 00:14:33.415 11:38:03 -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:14:33.415 11:38:03 -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:14:33.415 11:38:03 -- common/autotest_common.sh@1682 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:14:33.415 11:38:03 -- common/autotest_common.sh@1683 -- # true 00:14:33.415 11:38:03 -- common/autotest_common.sh@1685 -- # xtrace_fd 00:14:33.415 11:38:03 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:14:33.415 11:38:03 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:14:33.415 11:38:03 -- common/autotest_common.sh@27 -- # exec 00:14:33.415 11:38:03 -- common/autotest_common.sh@29 -- # exec 00:14:33.415 11:38:03 -- common/autotest_common.sh@31 -- # xtrace_restore 00:14:33.415 11:38:03 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:14:33.415 11:38:03 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:14:33.415 11:38:03 -- common/autotest_common.sh@18 -- # set -x 00:14:33.415 11:38:03 -- target/device_removal.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:33.415 11:38:03 -- nvmf/common.sh@7 -- # uname -s 00:14:33.415 11:38:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:33.415 11:38:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:33.415 11:38:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:33.415 11:38:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:33.415 11:38:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:33.415 11:38:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:33.415 11:38:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:33.415 11:38:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:33.415 11:38:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:33.415 11:38:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:33.415 11:38:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:14:33.415 11:38:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:14:33.415 11:38:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:33.415 11:38:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:33.415 11:38:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:33.415 11:38:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:33.415 11:38:03 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:33.415 11:38:03 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:33.415 11:38:03 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:33.415 11:38:03 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:33.416 11:38:03 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.416 11:38:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.416 11:38:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.416 11:38:03 -- paths/export.sh@5 -- # export PATH 00:14:33.416 11:38:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.416 11:38:03 -- nvmf/common.sh@47 -- # : 0 00:14:33.416 11:38:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:33.416 11:38:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:33.416 11:38:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:33.416 11:38:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:33.416 11:38:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:33.416 11:38:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:33.416 11:38:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:33.416 11:38:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:33.416 11:38:03 -- target/device_removal.sh@13 -- # tgt_core_mask=0x3 00:14:33.416 11:38:03 -- target/device_removal.sh@14 -- # bdevperf_core_mask=0x4 00:14:33.416 11:38:03 -- target/device_removal.sh@15 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:33.416 11:38:03 -- target/device_removal.sh@16 -- # bdevperf_rpc_pid=-1 00:14:33.416 11:38:03 -- 
target/device_removal.sh@18 -- # nvmftestinit 00:14:33.416 11:38:03 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:14:33.416 11:38:03 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:33.416 11:38:03 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:33.416 11:38:03 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:33.416 11:38:03 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:33.416 11:38:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.416 11:38:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:33.416 11:38:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.416 11:38:04 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:33.416 11:38:04 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:33.416 11:38:04 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:33.416 11:38:04 -- common/autotest_common.sh@10 -- # set +x 00:14:39.979 11:38:09 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:39.979 11:38:09 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:39.979 11:38:09 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:39.979 11:38:09 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:39.979 11:38:09 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:39.979 11:38:09 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:39.979 11:38:09 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:39.979 11:38:09 -- nvmf/common.sh@295 -- # net_devs=() 00:14:39.979 11:38:09 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:39.979 11:38:09 -- nvmf/common.sh@296 -- # e810=() 00:14:39.979 11:38:09 -- nvmf/common.sh@296 -- # local -ga e810 00:14:39.979 11:38:09 -- nvmf/common.sh@297 -- # x722=() 00:14:39.979 11:38:09 -- nvmf/common.sh@297 -- # local -ga x722 00:14:39.979 11:38:09 -- nvmf/common.sh@298 -- # mlx=() 00:14:39.979 11:38:09 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:39.979 11:38:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:39.979 11:38:09 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:39.979 11:38:09 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:39.979 11:38:09 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:39.979 11:38:09 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:39.979 11:38:09 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:39.979 11:38:09 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:39.979 11:38:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:39.979 11:38:09 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:39.979 11:38:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:39.979 11:38:09 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:39.979 11:38:09 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:39.979 11:38:09 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:39.979 11:38:09 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:39.979 11:38:09 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:39.979 11:38:09 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:39.979 11:38:09 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:39.979 11:38:09 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:39.979 11:38:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:39.979 11:38:09 -- nvmf/common.sh@341 -- # echo 'Found 
0000:18:00.0 (0x15b3 - 0x1015)' 00:14:39.979 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:14:39.979 11:38:09 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:39.979 11:38:09 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:39.979 11:38:09 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:39.979 11:38:09 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:39.979 11:38:09 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:39.979 11:38:09 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:39.979 11:38:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:39.979 11:38:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:14:39.979 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:14:39.979 11:38:09 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:39.979 11:38:09 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:39.979 11:38:09 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:39.979 11:38:09 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:39.979 11:38:09 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:39.979 11:38:09 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:39.979 11:38:09 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:39.979 11:38:09 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:39.979 11:38:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:39.979 11:38:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:39.979 11:38:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:39.979 11:38:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:39.979 11:38:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:14:39.979 Found net devices under 0000:18:00.0: mlx_0_0 00:14:39.979 11:38:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:39.979 11:38:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:39.979 11:38:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:39.979 11:38:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:39.979 11:38:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:39.979 11:38:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:14:39.979 Found net devices under 0000:18:00.1: mlx_0_1 00:14:39.979 11:38:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:39.979 11:38:09 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:39.979 11:38:09 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:39.979 11:38:09 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:39.979 11:38:09 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:14:39.979 11:38:09 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:14:39.979 11:38:09 -- nvmf/common.sh@409 -- # rdma_device_init 00:14:39.979 11:38:09 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:14:39.979 11:38:09 -- nvmf/common.sh@58 -- # uname 00:14:39.979 11:38:09 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:39.979 11:38:09 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:39.979 11:38:09 -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:39.979 11:38:09 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:39.979 11:38:09 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:39.979 11:38:09 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:39.979 11:38:09 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:39.979 11:38:09 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:39.979 11:38:09 -- 
nvmf/common.sh@491 -- # allocate_nic_ips 00:14:39.979 11:38:09 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:39.979 11:38:09 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:39.979 11:38:09 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:39.979 11:38:09 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:39.979 11:38:09 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:39.979 11:38:09 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:39.979 11:38:09 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:39.979 11:38:09 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:39.979 11:38:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:39.979 11:38:09 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:39.979 11:38:09 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:39.979 11:38:09 -- nvmf/common.sh@105 -- # continue 2 00:14:39.979 11:38:09 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:39.979 11:38:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:39.979 11:38:09 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:39.979 11:38:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:39.979 11:38:09 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:39.980 11:38:09 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:39.980 11:38:09 -- nvmf/common.sh@105 -- # continue 2 00:14:39.980 11:38:09 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:39.980 11:38:09 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:39.980 11:38:09 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:39.980 11:38:09 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:39.980 11:38:09 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:39.980 11:38:09 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:39.980 11:38:09 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:39.980 11:38:09 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:39.980 11:38:09 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:39.980 28: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:39.980 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:14:39.980 altname enp24s0f0np0 00:14:39.980 altname ens785f0np0 00:14:39.980 inet 192.168.100.8/24 scope global mlx_0_0 00:14:39.980 valid_lft forever preferred_lft forever 00:14:39.980 11:38:09 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:39.980 11:38:09 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:39.980 11:38:09 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:39.980 11:38:09 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:39.980 11:38:09 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:39.980 11:38:09 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:39.980 11:38:09 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:39.980 11:38:09 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:39.980 11:38:09 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:39.980 29: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:39.980 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:14:39.980 altname enp24s0f1np1 00:14:39.980 altname ens785f1np1 00:14:39.980 inet 192.168.100.9/24 scope global mlx_0_1 00:14:39.980 valid_lft forever preferred_lft forever 00:14:39.980 11:38:09 -- nvmf/common.sh@411 -- # return 0 00:14:39.980 11:38:09 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:39.980 
11:38:09 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:39.980 11:38:09 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:14:39.980 11:38:09 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:14:39.980 11:38:09 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:39.980 11:38:09 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:39.980 11:38:09 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:39.980 11:38:09 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:39.980 11:38:09 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:39.980 11:38:09 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:39.980 11:38:09 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:39.980 11:38:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:39.980 11:38:09 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:39.980 11:38:09 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:39.980 11:38:09 -- nvmf/common.sh@105 -- # continue 2 00:14:39.980 11:38:09 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:39.980 11:38:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:39.980 11:38:09 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:39.980 11:38:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:39.980 11:38:09 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:39.980 11:38:09 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:39.980 11:38:09 -- nvmf/common.sh@105 -- # continue 2 00:14:39.980 11:38:09 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:39.980 11:38:09 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:39.980 11:38:09 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:39.980 11:38:09 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:39.980 11:38:09 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:39.980 11:38:09 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:39.980 11:38:09 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:39.980 11:38:09 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:39.980 11:38:09 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:39.980 11:38:09 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:39.980 11:38:09 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:39.980 11:38:09 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:39.980 11:38:09 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:14:39.980 192.168.100.9' 00:14:39.980 11:38:09 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:14:39.980 192.168.100.9' 00:14:39.980 11:38:09 -- nvmf/common.sh@446 -- # head -n 1 00:14:39.980 11:38:09 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:39.980 11:38:09 -- nvmf/common.sh@447 -- # head -n 1 00:14:39.980 11:38:09 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:14:39.980 192.168.100.9' 00:14:39.980 11:38:09 -- nvmf/common.sh@447 -- # tail -n +2 00:14:39.980 11:38:09 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:39.980 11:38:09 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:14:39.980 11:38:09 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:39.980 11:38:09 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:14:39.980 11:38:09 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:14:39.980 11:38:09 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:14:39.980 11:38:09 -- target/device_removal.sh@235 -- # 
BOND_NAME=bond_nvmf 00:14:39.980 11:38:09 -- target/device_removal.sh@236 -- # BOND_IP=10.11.11.26 00:14:39.980 11:38:09 -- target/device_removal.sh@237 -- # BOND_MASK=24 00:14:39.980 11:38:09 -- target/device_removal.sh@311 -- # run_test nvmf_device_removal_pci_remove_no_srq test_remove_and_rescan --no-srq 00:14:39.980 11:38:09 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:39.980 11:38:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:39.980 11:38:09 -- common/autotest_common.sh@10 -- # set +x 00:14:39.980 ************************************ 00:14:39.980 START TEST nvmf_device_removal_pci_remove_no_srq 00:14:39.980 ************************************ 00:14:39.980 11:38:09 -- common/autotest_common.sh@1121 -- # test_remove_and_rescan --no-srq 00:14:39.980 11:38:09 -- target/device_removal.sh@128 -- # nvmfappstart -m 0x3 00:14:39.980 11:38:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:39.980 11:38:09 -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:39.980 11:38:09 -- common/autotest_common.sh@10 -- # set +x 00:14:39.980 11:38:09 -- nvmf/common.sh@470 -- # nvmfpid=3022732 00:14:39.980 11:38:09 -- nvmf/common.sh@471 -- # waitforlisten 3022732 00:14:39.980 11:38:09 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:39.980 11:38:09 -- common/autotest_common.sh@827 -- # '[' -z 3022732 ']' 00:14:39.980 11:38:09 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.980 11:38:09 -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:39.980 11:38:09 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.980 11:38:09 -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:39.980 11:38:09 -- common/autotest_common.sh@10 -- # set +x 00:14:39.980 [2024-05-15 11:38:09.801140] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:14:39.980 [2024-05-15 11:38:09.801201] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:39.980 EAL: No free 2048 kB hugepages reported on node 1 00:14:39.980 [2024-05-15 11:38:09.875393] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:39.980 [2024-05-15 11:38:09.956517] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:39.980 [2024-05-15 11:38:09.956570] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:39.980 [2024-05-15 11:38:09.956580] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:39.980 [2024-05-15 11:38:09.956588] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:39.980 [2024-05-15 11:38:09.956594] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
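The entries just above show nvmfappstart launching nvmf_tgt in the background (PID 3022732) and waitforlisten blocking until the target answers on the default RPC socket; only after that do the rpc_cmd calls further down make sense. A rough sketch of that start-and-wait pattern, assuming scripts/rpc.py as the liveness probe (the retry count and sleep interval are illustrative, not the exact autotest implementation):

/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!                                      # recorded so the test can wait on or kill the target
for ((i = 0; i < 100; i++)); do
    kill -0 "$nvmfpid" 2> /dev/null || exit 1   # abort early if the target already died
    if /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
        -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        break                                   # RPC socket is up; the test may issue commands
    fi
    sleep 0.5
done

Once the loop exits, helpers such as rpc_cmd issue their arguments as RPCs over /var/tmp/spdk.sock, which is what the nvmf_create_transport and nvmf_create_subsystem entries below are doing.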
00:14:39.980 [2024-05-15 11:38:09.956643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:39.980 [2024-05-15 11:38:09.956645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.980 11:38:10 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:39.980 11:38:10 -- common/autotest_common.sh@860 -- # return 0 00:14:39.980 11:38:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:39.980 11:38:10 -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:39.980 11:38:10 -- common/autotest_common.sh@10 -- # set +x 00:14:39.980 11:38:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.980 11:38:10 -- target/device_removal.sh@130 -- # create_subsystem_and_connect --no-srq 00:14:39.980 11:38:10 -- target/device_removal.sh@45 -- # local -gA netdev_nvme_dict 00:14:39.980 11:38:10 -- target/device_removal.sh@46 -- # netdev_nvme_dict=() 00:14:39.980 11:38:10 -- target/device_removal.sh@48 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 --no-srq 00:14:39.980 11:38:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.980 11:38:10 -- common/autotest_common.sh@10 -- # set +x 00:14:39.980 [2024-05-15 11:38:10.686583] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x212a930/0x212ee20) succeed. 00:14:39.980 [2024-05-15 11:38:10.695976] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x212be30/0x21704b0) succeed. 00:14:39.980 11:38:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.980 11:38:10 -- target/device_removal.sh@49 -- # get_rdma_if_list 00:14:39.980 11:38:10 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:39.980 11:38:10 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:39.980 11:38:10 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:39.980 11:38:10 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:39.980 11:38:10 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:39.980 11:38:10 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:39.980 11:38:10 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:39.980 11:38:10 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:39.980 11:38:10 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:39.980 11:38:10 -- nvmf/common.sh@105 -- # continue 2 00:14:39.980 11:38:10 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:39.980 11:38:10 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:39.980 11:38:10 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:39.980 11:38:10 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:39.980 11:38:10 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:39.980 11:38:10 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:39.980 11:38:10 -- nvmf/common.sh@105 -- # continue 2 00:14:39.980 11:38:10 -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:14:39.980 11:38:10 -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_0 00:14:40.239 11:38:10 -- target/device_removal.sh@25 -- # local -a dev_name 00:14:40.239 11:38:10 -- target/device_removal.sh@27 -- # dev_name=mlx_0_0 00:14:40.239 11:38:10 -- target/device_removal.sh@28 -- # malloc_name=mlx_0_0 00:14:40.239 11:38:10 -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_0 00:14:40.239 11:38:10 -- target/device_removal.sh@21 -- 
# echo nqn.2016-06.io.spdk:system_mlx_0_0 00:14:40.239 11:38:10 -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:14:40.239 11:38:10 -- target/device_removal.sh@30 -- # get_ip_address mlx_0_0 00:14:40.239 11:38:10 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:40.239 11:38:10 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:40.239 11:38:10 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:40.239 11:38:10 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:40.240 11:38:10 -- target/device_removal.sh@30 -- # ip=192.168.100.8 00:14:40.240 11:38:10 -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_0 00:14:40.240 11:38:10 -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:14:40.240 11:38:10 -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:14:40.240 11:38:10 -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_0 00:14:40.240 11:38:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.240 11:38:10 -- common/autotest_common.sh@10 -- # set +x 00:14:40.240 11:38:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.240 11:38:10 -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0 00:14:40.240 11:38:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.240 11:38:10 -- common/autotest_common.sh@10 -- # set +x 00:14:40.240 11:38:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.240 11:38:10 -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0 00:14:40.240 11:38:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.240 11:38:10 -- common/autotest_common.sh@10 -- # set +x 00:14:40.240 11:38:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.240 11:38:10 -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 00:14:40.240 11:38:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.240 11:38:10 -- common/autotest_common.sh@10 -- # set +x 00:14:40.240 [2024-05-15 11:38:10.834124] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:40.240 [2024-05-15 11:38:10.834502] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:40.240 11:38:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.240 11:38:10 -- target/device_removal.sh@41 -- # return 0 00:14:40.240 11:38:10 -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_0 00:14:40.240 11:38:10 -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:14:40.240 11:38:10 -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_1 00:14:40.240 11:38:10 -- target/device_removal.sh@25 -- # local -a dev_name 00:14:40.240 11:38:10 -- target/device_removal.sh@27 -- # dev_name=mlx_0_1 00:14:40.240 11:38:10 -- target/device_removal.sh@28 -- # malloc_name=mlx_0_1 00:14:40.240 11:38:10 -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_1 00:14:40.240 11:38:10 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:14:40.240 11:38:10 -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:14:40.240 11:38:10 -- target/device_removal.sh@30 -- # get_ip_address mlx_0_1 00:14:40.240 11:38:10 -- 
nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:40.240 11:38:10 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:40.240 11:38:10 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:40.240 11:38:10 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:40.240 11:38:10 -- target/device_removal.sh@30 -- # ip=192.168.100.9 00:14:40.240 11:38:10 -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_1 00:14:40.240 11:38:10 -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:14:40.240 11:38:10 -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:14:40.240 11:38:10 -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_1 00:14:40.240 11:38:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.240 11:38:10 -- common/autotest_common.sh@10 -- # set +x 00:14:40.240 11:38:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.240 11:38:10 -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_1 -a -s SPDK000mlx_0_1 00:14:40.240 11:38:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.240 11:38:10 -- common/autotest_common.sh@10 -- # set +x 00:14:40.240 11:38:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.240 11:38:10 -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_1 mlx_0_1 00:14:40.240 11:38:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.240 11:38:10 -- common/autotest_common.sh@10 -- # set +x 00:14:40.240 11:38:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.240 11:38:10 -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 00:14:40.240 11:38:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.240 11:38:10 -- common/autotest_common.sh@10 -- # set +x 00:14:40.240 [2024-05-15 11:38:10.922685] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:14:40.240 11:38:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.240 11:38:10 -- target/device_removal.sh@41 -- # return 0 00:14:40.240 11:38:10 -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_1 00:14:40.240 11:38:10 -- target/device_removal.sh@53 -- # return 0 00:14:40.240 11:38:10 -- target/device_removal.sh@132 -- # generate_io_traffic_with_bdevperf mlx_0_0 mlx_0_1 00:14:40.240 11:38:10 -- target/device_removal.sh@87 -- # dev_names=('mlx_0_0' 'mlx_0_1') 00:14:40.240 11:38:10 -- target/device_removal.sh@87 -- # local dev_names 00:14:40.240 11:38:10 -- target/device_removal.sh@89 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:40.240 11:38:10 -- target/device_removal.sh@91 -- # bdevperf_pid=3022859 00:14:40.240 11:38:10 -- target/device_removal.sh@93 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; kill -9 $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:40.240 11:38:10 -- target/device_removal.sh@90 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:14:40.240 11:38:10 -- target/device_removal.sh@94 -- # waitforlisten 3022859 /var/tmp/bdevperf.sock 00:14:40.240 11:38:10 -- common/autotest_common.sh@827 -- # '[' -z 3022859 ']' 00:14:40.240 11:38:10 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:40.240 11:38:10 -- common/autotest_common.sh@832 -- # 
local max_retries=100 00:14:40.240 11:38:10 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:40.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:40.240 11:38:10 -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:40.240 11:38:10 -- common/autotest_common.sh@10 -- # set +x 00:14:41.176 11:38:11 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:41.176 11:38:11 -- common/autotest_common.sh@860 -- # return 0 00:14:41.176 11:38:11 -- target/device_removal.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:14:41.176 11:38:11 -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.176 11:38:11 -- common/autotest_common.sh@10 -- # set +x 00:14:41.176 11:38:11 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.176 11:38:11 -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:14:41.176 11:38:11 -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_0 00:14:41.176 11:38:11 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:14:41.176 11:38:11 -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:14:41.176 11:38:11 -- target/device_removal.sh@102 -- # get_ip_address mlx_0_0 00:14:41.176 11:38:11 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:41.176 11:38:11 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:41.176 11:38:11 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:41.176 11:38:11 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:41.176 11:38:11 -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.8 00:14:41.176 11:38:11 -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1 00:14:41.176 11:38:11 -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.176 11:38:11 -- common/autotest_common.sh@10 -- # set +x 00:14:41.176 Nvme_mlx_0_0n1 00:14:41.176 11:38:11 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.176 11:38:11 -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:14:41.176 11:38:11 -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_1 00:14:41.176 11:38:11 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:14:41.435 11:38:11 -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:14:41.435 11:38:11 -- target/device_removal.sh@102 -- # get_ip_address mlx_0_1 00:14:41.435 11:38:11 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:41.435 11:38:11 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:41.435 11:38:11 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:41.435 11:38:11 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:41.435 11:38:11 -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.9 00:14:41.435 11:38:11 -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_1 -l -1 -o 1 00:14:41.435 11:38:11 -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.435 11:38:11 -- common/autotest_common.sh@10 -- # set +x 00:14:41.435 Nvme_mlx_0_1n1 00:14:41.435 11:38:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.435 11:38:12 -- target/device_removal.sh@110 -- # bdevperf_rpc_pid=3023027 00:14:41.435 
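For reference, the setup traced above reduces to a short RPC sequence; a minimal sketch, assuming the job's SPDK checkout path and the two RPC sockets used in this run (the nvmf target on its default socket, bdevperf on /var/tmp/bdevperf.sock):

  spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  # Target side: RDMA transport without SRQ, then one malloc-backed subsystem,
  # namespace and listener per port (shown for mlx_0_0; mlx_0_1 is identical
  # against 192.168.100.9).
  $spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 --no-srq
  $spdk/scripts/rpc.py bdev_malloc_create 128 512 -b mlx_0_0
  $spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0
  $spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0
  $spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420
  # Initiator side (bdevperf): the same options the harness sets above, so the
  # controller is never given up (-l -1) and reconnects retry every second (-o 1).
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1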
11:38:12 -- target/device_removal.sh@112 -- # sleep 5 00:14:41.435 11:38:12 -- target/device_removal.sh@109 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:14:46.704 11:38:17 -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:14:46.704 11:38:17 -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_0 00:14:46.704 11:38:17 -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_0 00:14:46.704 11:38:17 -- target/device_removal.sh@71 -- # dev_name=mlx_0_0 00:14:46.704 11:38:17 -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_0 00:14:46.704 11:38:17 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:14:46.704 11:38:17 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.0/net/mlx_0_0/device 00:14:46.704 11:38:17 -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/infiniband 00:14:46.705 11:38:17 -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_0 00:14:46.705 11:38:17 -- target/device_removal.sh@137 -- # get_ip_address mlx_0_0 00:14:46.705 11:38:17 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:46.705 11:38:17 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:46.705 11:38:17 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:46.705 11:38:17 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:46.705 11:38:17 -- target/device_removal.sh@137 -- # origin_ip=192.168.100.8 00:14:46.705 11:38:17 -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_0 00:14:46.705 11:38:17 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:14:46.705 11:38:17 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.0/net/mlx_0_0/device 00:14:46.705 11:38:17 -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0 00:14:46.705 11:38:17 -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:14:46.705 11:38:17 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:14:46.705 11:38:17 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:14:46.705 11:38:17 -- target/device_removal.sh@77 -- # grep mlx5_0 00:14:46.705 11:38:17 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:14:46.705 11:38:17 -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.705 11:38:17 -- common/autotest_common.sh@10 -- # set +x 00:14:46.705 11:38:17 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.705 mlx5_0 00:14:46.705 11:38:17 -- target/device_removal.sh@78 -- # return 0 00:14:46.705 11:38:17 -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_0 00:14:46.705 11:38:17 -- target/device_removal.sh@66 -- # dev_name=mlx_0_0 00:14:46.705 11:38:17 -- target/device_removal.sh@67 -- # echo 1 00:14:46.705 11:38:17 -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_0 00:14:46.705 11:38:17 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:14:46.705 11:38:17 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.0/net/mlx_0_0/device 00:14:46.705 [2024-05-15 11:38:17.199227] rdma.c:3563:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.8:4420 on device mlx5_0 is being removed. 
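The remove_one_nic step traced just above takes the port away at PCI level. A minimal sketch of that operation, assuming the standard sysfs hot-remove node (the xtrace shows the "echo 1" and the readlink but not the redirect target itself):

  # Resolve the netdev back to its PCI function, then surprise-remove it.
  pci_dir=$(readlink -f /sys/bus/pci/devices/0000:18:00.0/net/mlx_0_0/device)
  echo 1 | sudo tee "$pci_dir/remove"

The NOTICE above ("Port 192.168.100.8:4420 on device mlx5_0 is being removed") is the target reacting to the resulting device-removal event, and the qpair dump that follows lists every I/O still in flight when the connection was torn down.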
00:14:46.705 [2024-05-15 11:38:17.199336] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:14:46.705 [2024-05-15 11:38:17.199448] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:14:46.705 [2024-05-15 11:38:17.199464] rdma.c: 845:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 95 00:14:46.705 [2024-05-15 11:38:17.199474] rdma.c: 632:nvmf_rdma_dump_qpair_contents: *ERROR*: Dumping contents of queue pair (QID 1) 00:14:46.705 [2024-05-15 11:38:17.199482] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:46.705 [2024-05-15 11:38:17.199493] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2
[... remaining nvmf_rdma_dump_request output elided: one "Request Data From Pool: 0|1" / "Request opcode: 1|2" pair per request still outstanding on the qpair (queue depth 95 per the warning above), timestamps 11:38:17.199502 through 11:38:17.200936 ...]
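Each dumped pair is one of those in-flight requests: "Request Data From Pool: 1" marks a request whose payload buffer came from the transport's shared buffer pool, and the opcode is the NVM-command-set opcode (1 = Write, 2 = Read), i.e. a mix of the verify workload's writes and reads. To tally them from a saved console log, something like this works (a sketch; console.log is a placeholder filename):

  grep -o 'Request opcode: [0-9]*' console.log | sort | uniq -c   # count reads vs. writes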
00:14:53.263 11:38:22 -- target/device_removal.sh@147 -- # seq 1 10 00:14:53.263 11:38:22 -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:14:53.263 11:38:22 -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:14:53.263 11:38:22 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:14:53.263 11:38:22 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:14:53.263 11:38:22 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:14:53.263 11:38:22 -- target/device_removal.sh@77 -- # grep mlx5_0 00:14:53.263 11:38:22 -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.263 11:38:22 -- common/autotest_common.sh@10 -- # set +x 00:14:53.263 11:38:22 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.263 11:38:22 -- target/device_removal.sh@78 -- # return 1 00:14:53.263 11:38:22 -- target/device_removal.sh@149 -- # break 00:14:53.263 11:38:22 -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:14:53.263 11:38:22 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:14:53.263 11:38:22 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:14:53.263 11:38:22 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:14:53.263 11:38:22 -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.263 11:38:22 -- common/autotest_common.sh@10 -- # set +x 00:14:53.263 11:38:22 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.263 11:38:22 -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:14:53.263 11:38:22 -- target/device_removal.sh@160 -- # rescan_pci 00:14:53.263 11:38:22 -- target/device_removal.sh@57 -- # echo 1 00:14:53.263 [2024-05-15 11:38:23.762893] rdma.c:3252:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0x2367be0, err 11. Skip rescan. 00:14:53.263 [2024-05-15 11:38:23.768621] rdma.c:3252:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0x2367be0, err 11. Skip rescan. 00:14:53.263 11:38:23 -- target/device_removal.sh@162 -- # seq 1 10 00:14:53.263 11:38:23 -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:14:53.263 11:38:23 -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/net 00:14:53.263 11:38:23 -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_0 00:14:53.263 11:38:23 -- target/device_removal.sh@164 -- # [[ -z mlx_0_0 ]] 00:14:53.263 11:38:23 -- target/device_removal.sh@166 -- # [[ mlx_0_0 != \m\l\x\_\0\_\0 ]] 00:14:53.263 11:38:23 -- target/device_removal.sh@171 -- # break 00:14:53.263 11:38:23 -- target/device_removal.sh@175 -- # [[ -z mlx_0_0 ]] 00:14:53.263 11:38:23 -- target/device_removal.sh@179 -- # ip link set mlx_0_0 up 00:14:53.522 [2024-05-15 11:38:24.125073] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x212a9c0/0x212ee20) succeed. 00:14:53.522 [2024-05-15 11:38:24.125140] rdma.c:3305:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.8:4420 is still failed(-1) to listen. 
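Once the device is gone, the harness polls nvmf_get_stats until mlx5_0 disappears, rescans the PCI bus, and brings the netdev back up; the address is re-added in the trace that continues below. Condensed into a sketch (the rescan target is assumed to be the standard /sys/bus/pci/rescan node behind the traced "echo 1"):

  spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  $spdk/scripts/rpc.py nvmf_get_stats \
      | jq -r '.poll_groups[0].transports[].devices | length'   # drops to 1 once mlx5_0 is gone
  echo 1 | sudo tee /sys/bus/pci/rescan                          # re-enumerate the removed function
  ip link set mlx_0_0 up
  ip addr add 192.168.100.8/24 dev mlx_0_0                       # the address does not survive remove/rescan

The same length query is then polled until it returns 2 and the target logs "Port 192.168.100.8:4420 come back".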
00:14:56.807 11:38:27 -- target/device_removal.sh@180 -- # get_ip_address mlx_0_0 00:14:56.807 11:38:27 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:56.807 11:38:27 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:56.807 11:38:27 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:56.807 11:38:27 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:56.807 11:38:27 -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:14:56.807 11:38:27 -- target/device_removal.sh@181 -- # ip addr add 192.168.100.8/24 dev mlx_0_0 00:14:56.807 11:38:27 -- target/device_removal.sh@186 -- # seq 1 10 00:14:56.807 11:38:27 -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:14:56.807 11:38:27 -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:14:56.807 11:38:27 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:14:56.807 11:38:27 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:14:56.807 11:38:27 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:14:56.807 11:38:27 -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.807 11:38:27 -- common/autotest_common.sh@10 -- # set +x 00:14:56.807 [2024-05-15 11:38:27.161167] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:56.807 [2024-05-15 11:38:27.161204] rdma.c:3311:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.8:4420 come back 00:14:56.807 [2024-05-15 11:38:27.161221] rdma.c:3841:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:14:56.807 [2024-05-15 11:38:27.161236] rdma.c:3841:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:14:56.807 11:38:27 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.807 11:38:27 -- target/device_removal.sh@187 -- # ib_count=2 00:14:56.807 11:38:27 -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:14:56.807 11:38:27 -- target/device_removal.sh@189 -- # break 00:14:56.807 11:38:27 -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:14:56.807 11:38:27 -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_1 00:14:56.807 11:38:27 -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_1 00:14:56.807 11:38:27 -- target/device_removal.sh@71 -- # dev_name=mlx_0_1 00:14:56.807 11:38:27 -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_1 00:14:56.807 11:38:27 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:14:56.807 11:38:27 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.1/net/mlx_0_1/device 00:14:56.807 11:38:27 -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.1/infiniband 00:14:56.807 11:38:27 -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_1 00:14:56.807 11:38:27 -- target/device_removal.sh@137 -- # get_ip_address mlx_0_1 00:14:56.807 11:38:27 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:56.807 11:38:27 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:56.807 11:38:27 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:56.807 11:38:27 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:56.807 11:38:27 -- target/device_removal.sh@137 -- # origin_ip=192.168.100.9 00:14:56.807 11:38:27 -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_1 00:14:56.807 11:38:27 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:14:56.807 11:38:27 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.1/net/mlx_0_1/device 00:14:56.807 11:38:27 -- 
target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.1 00:14:56.807 11:38:27 -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:14:56.807 11:38:27 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:14:56.807 11:38:27 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:14:56.807 11:38:27 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:14:56.807 11:38:27 -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.807 11:38:27 -- common/autotest_common.sh@10 -- # set +x 00:14:56.807 11:38:27 -- target/device_removal.sh@77 -- # grep mlx5_1 00:14:56.807 11:38:27 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.807 mlx5_1 00:14:56.807 11:38:27 -- target/device_removal.sh@78 -- # return 0 00:14:56.807 11:38:27 -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_1 00:14:56.807 11:38:27 -- target/device_removal.sh@66 -- # dev_name=mlx_0_1 00:14:56.807 11:38:27 -- target/device_removal.sh@67 -- # echo 1 00:14:56.807 11:38:27 -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_1 00:14:56.807 11:38:27 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:14:56.807 11:38:27 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.1/net/mlx_0_1/device 00:14:56.807 [2024-05-15 11:38:27.345440] rdma.c:3563:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.9:4420 on device mlx5_1 is being removed. 00:14:56.807 [2024-05-15 11:38:27.345516] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:14:56.807 [2024-05-15 11:38:27.353657] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:14:56.807 [2024-05-15 11:38:27.353677] rdma.c: 845:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 64 00:14:56.807 [2024-05-15 11:38:27.353686] rdma.c: 632:nvmf_rdma_dump_qpair_contents: *ERROR*: Dumping contents of queue pair (QID 1) 00:14:56.807 [2024-05-15 11:38:27.353695] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:56.807 [2024-05-15 11:38:27.353703] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:56.807 [2024-05-15 11:38:27.353710] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:56.807 [2024-05-15 11:38:27.353717] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:56.807 [2024-05-15 11:38:27.353726] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:56.807 [2024-05-15 11:38:27.353734] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:56.807 [2024-05-15 11:38:27.353741] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:56.807 [2024-05-15 11:38:27.353748] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:56.807 [2024-05-15 11:38:27.353756] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:56.807 [2024-05-15 11:38:27.353764] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:56.807 [2024-05-15 11:38:27.353772] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:56.807 [2024-05-15 11:38:27.353780] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:56.807 [2024-05-15 11:38:27.353788] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:56.807 [2024-05-15 11:38:27.353797] rdma.c: 620:nvmf_rdma_dump_request: 
*ERROR*: Request opcode: 2
[... remaining nvmf_rdma_dump_request output elided: a "Request Data From Pool: 1" / "Request opcode: 2" pair per request still outstanding on the qpair (queue depth 64 per the warning above), timestamps 11:38:27.353805 through 11:38:27.354665 ...]
00:15:03.465 11:38:33 -- target/device_removal.sh@147 -- # seq 1 10 00:15:03.465 11:38:33 -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:15:03.465 11:38:33 -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:15:03.465 11:38:33 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:15:03.465 11:38:33 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:15:03.465 11:38:33 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:15:03.465 11:38:33 -- target/device_removal.sh@77 -- # grep mlx5_1 00:15:03.465 11:38:33 -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.465 11:38:33 -- common/autotest_common.sh@10 -- # set +x 00:15:03.466 11:38:33 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.466 11:38:33 -- target/device_removal.sh@78 -- # return 1 00:15:03.466 11:38:33 -- target/device_removal.sh@149 -- # break 00:15:03.466 11:38:33 -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:15:03.466 11:38:33 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:15:03.466 11:38:33 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:15:03.466 11:38:33 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:15:03.466 11:38:33 -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.466 11:38:33 -- common/autotest_common.sh@10 -- # set +x 00:15:03.466 11:38:33 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.466 11:38:33 -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:15:03.466 11:38:33 -- target/device_removal.sh@160 -- # rescan_pci 00:15:03.466 11:38:33 -- target/device_removal.sh@57 -- 
echo 1
00:15:03.466 [2024-05-15 11:38:34.074582] rdma.c:3252:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0x2368e80, err 11. Skip rescan.
00:15:03.466 11:38:34 -- target/device_removal.sh@162 -- # seq 1 10
00:15:03.466 11:38:34 -- target/device_removal.sh@162 -- # for i in $(seq 1 10)
00:15:03.466 11:38:34 -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.1/net
00:15:03.466 11:38:34 -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_1
00:15:03.466 11:38:34 -- target/device_removal.sh@164 -- # [[ -z mlx_0_1 ]]
00:15:03.466 11:38:34 -- target/device_removal.sh@166 -- # [[ mlx_0_1 != \m\l\x\_\0\_\1 ]]
00:15:03.466 11:38:34 -- target/device_removal.sh@171 -- # break
00:15:03.466 11:38:34 -- target/device_removal.sh@175 -- # [[ -z mlx_0_1 ]]
00:15:03.466 11:38:34 -- target/device_removal.sh@179 -- # ip link set mlx_0_1 up
00:15:03.723 [2024-05-15 11:38:34.433933] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2335a00/0x21704b0) succeed.
00:15:03.723 [2024-05-15 11:38:34.434011] rdma.c:3305:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.9:4420 is still failed(-1) to listen.
00:15:07.005 11:38:37 -- target/device_removal.sh@180 -- # get_ip_address mlx_0_1
00:15:07.005 11:38:37 -- nvmf/common.sh@112 -- # interface=mlx_0_1
00:15:07.005 11:38:37 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1
00:15:07.005 11:38:37 -- nvmf/common.sh@113 -- # cut -d/ -f1
00:15:07.005 11:38:37 -- nvmf/common.sh@113 -- # awk '{print $4}'
00:15:07.005 11:38:37 -- target/device_removal.sh@180 -- # [[ -z '' ]]
00:15:07.005 11:38:37 -- target/device_removal.sh@181 -- # ip addr add 192.168.100.9/24 dev mlx_0_1
00:15:07.005 11:38:37 -- target/device_removal.sh@186 -- # seq 1 10
00:15:07.005 11:38:37 -- target/device_removal.sh@186 -- # for i in $(seq 1 10)
00:15:07.005 11:38:37 -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt
00:15:07.005 11:38:37 -- target/device_removal.sh@82 -- # local rdma_dev_name=
00:15:07.005 11:38:37 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats
00:15:07.005 11:38:37 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length'
00:15:07.005 11:38:37 -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:07.005 11:38:37 -- common/autotest_common.sh@10 -- # set +x
00:15:07.005 [2024-05-15 11:38:37.541047] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 ***
00:15:07.005 [2024-05-15 11:38:37.541090] rdma.c:3311:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.9:4420 come back
00:15:07.005 [2024-05-15 11:38:37.541110] rdma.c:3841:nvmf_process_ib_event: *NOTICE*: Async event: GID table change
00:15:07.005 [2024-05-15 11:38:37.541127] rdma.c:3841:nvmf_process_ib_event: *NOTICE*: Async event: GID table change
00:15:07.005 11:38:37 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:07.005 11:38:37 -- target/device_removal.sh@187 -- # ib_count=2
00:15:07.005 11:38:37 -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove ))
00:15:07.005 11:38:37 -- target/device_removal.sh@189 -- # break
00:15:07.005 11:38:37 -- target/device_removal.sh@200 -- # stop_bdevperf
00:15:07.005 11:38:37 -- target/device_removal.sh@116 -- # wait 3023027
00:16:14.714 0
00:16:14.714 11:39:42 -- target/device_removal.sh@118 -- # killprocess 3022859
00:16:14.714 11:39:42 -- common/autotest_common.sh@946 -- # '[' -z 3022859 ']'
00:16:14.714 11:39:42 -- common/autotest_common.sh@950 -- # kill -0 3022859
00:16:14.714 11:39:42 -- common/autotest_common.sh@951 -- # uname
00:16:14.714 11:39:42 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:16:14.714 11:39:42 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3022859
00:16:14.714 11:39:42 -- common/autotest_common.sh@952 -- # process_name=reactor_2
00:16:14.714 11:39:42 -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']'
00:16:14.714 11:39:42 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3022859'
00:16:14.714 killing process with pid 3022859
00:16:14.714 11:39:42 -- common/autotest_common.sh@965 -- # kill 3022859
00:16:14.714 11:39:42 -- common/autotest_common.sh@970 -- # wait 3022859
00:16:14.714 11:39:42 -- target/device_removal.sh@119 -- # bdevperf_pid=
00:16:14.714 11:39:42 -- target/device_removal.sh@121 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt
00:16:14.714 [2024-05-15 11:38:10.982861] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization...
00:16:14.714 [2024-05-15 11:38:10.982934] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3022859 ]
00:16:14.714 EAL: No free 2048 kB hugepages reported on node 1
00:16:14.714 [2024-05-15 11:38:11.054278] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:14.714 [2024-05-15 11:38:11.137815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:16:14.714 Running I/O for 90 seconds...
00:16:14.714 [2024-05-15 11:38:17.201081] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:16:14.714 [2024-05-15 11:38:17.201117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:16:14.714 [2024-05-15 11:38:17.201129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32656 cdw0:16 sqhd:a3b9 p:0 m:0 dnr:0
00:16:14.714 [2024-05-15 11:38:17.201141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:16:14.714 [2024-05-15 11:38:17.201151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32656 cdw0:16 sqhd:a3b9 p:0 m:0 dnr:0
00:16:14.714 [2024-05-15 11:38:17.201162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:16:14.714 [2024-05-15 11:38:17.201172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32656 cdw0:16 sqhd:a3b9 p:0 m:0 dnr:0
00:16:14.714 [2024-05-15 11:38:17.201182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:16:14.714 [2024-05-15 11:38:17.201191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32656 cdw0:16 sqhd:a3b9 p:0 m:0 dnr:0
00:16:14.714 [2024-05-15 11:38:17.203203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:16:14.714 [2024-05-15 11:38:17.203219] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:16:14.714 [2024-05-15 11:38:17.203258] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:16:14.714 [2024-05-15 11:38:17.211072] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.714 [2024-05-15 11:38:17.221346] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.714 [2024-05-15 11:38:17.231374] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.714 [2024-05-15 11:38:17.241681] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.714 [2024-05-15 11:38:17.251826] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.714 [2024-05-15 11:38:17.262004] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.714 [2024-05-15 11:38:17.274961] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.714 [2024-05-15 11:38:17.286035] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.714 [2024-05-15 11:38:17.296065] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.714 [2024-05-15 11:38:17.306708] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.714 [2024-05-15 11:38:17.316872] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.327211] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.337236] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.347703] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.357730] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.367756] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.377781] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.387997] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.398022] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.408047] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.418095] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.428298] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.438628] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.448740] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.458754] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.469136] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.479161] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.489188] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.499215] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.509371] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.519397] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.529455] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.539629] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.549655] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.559743] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.569806] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.579858] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.589884] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.600110] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.610135] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.620161] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.630299] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.640327] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.650386] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.660411] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.670438] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.680462] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.690627] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.700654] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.710839] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.721153] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.731303] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.741342] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.751415] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.761505] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.771598] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.781712] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.791809] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.801861] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.812002] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.822019] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.832047] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.842344] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.852372] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.862421] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.872447] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.882474] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.892502] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.902596] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.912862] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.922900] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.932926] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.942970] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.952997] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.963023] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.973048] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.983074] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:17.993101] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:18.003125] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:18.013151] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:18.023179] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:18.033204] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:18.043228] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:18.053254] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:18.063280] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:18.073306] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:18.083331] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:18.093357] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:18.103381] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:18.113409] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:18.123434] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:18.133460] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:18.143485] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:18.153695] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:18.163798] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:18.174096] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:18.184121] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:18.194264] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:18.204289] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.715 [2024-05-15 11:38:18.206060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:205552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:14.715 [2024-05-15 11:38:18.206079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.715 [2024-05-15 11:38:18.206105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:205560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:14.715 [2024-05-15 11:38:18.206115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.715 [2024-05-15 11:38:18.206127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:205568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:14.716 [2024-05-15 11:38:18.206137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.716 [2024-05-15 11:38:18.206149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:205576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:14.716 [2024-05-15 11:38:18.206159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.716 [2024-05-15 11:38:18.206170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:205584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:14.716 [2024-05-15 11:38:18.206181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.716 [2024-05-15 11:38:18.206192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:205592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:14.716 [2024-05-15 11:38:18.206202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.716 [2024-05-15 11:38:18.206213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:205600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:14.716 [2024-05-15 11:38:18.206222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.716 [2024-05-15 11:38:18.206234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:205608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:14.716 [2024-05-15 11:38:18.206243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.716 [2024-05-15 11:38:18.206254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:205616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:14.716 [2024-05-15 11:38:18.206263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.716 [2024-05-15 11:38:18.206275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:205624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:14.716 [2024-05-15 11:38:18.206284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.716 [2024-05-15 11:38:18.206295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:205632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:14.716 [2024-05-15 11:38:18.206305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.716 [2024-05-15 11:38:18.206317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:205640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:14.716 [2024-05-15 11:38:18.206327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.716 [2024-05-15 11:38:18.206338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:205648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:14.716 [2024-05-15 11:38:18.206347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.716 [2024-05-15 11:38:18.206360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:205656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:14.716 [2024-05-15 11:38:18.206369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.716 [2024-05-15 11:38:18.206380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:205664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:14.716 [2024-05-15 11:38:18.206389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.716 [2024-05-15 11:38:18.206401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:205672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:14.716 [2024-05-15 11:38:18.206411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.716 [2024-05-15 11:38:18.206423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:205680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:14.716 [2024-05-15 11:38:18.206433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.716 [2024-05-15 11:38:18.206444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:205688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:14.716 [2024-05-15 11:38:18.206454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.716 [2024-05-15 11:38:18.206465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:205696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:14.716 [2024-05-15 11:38:18.206474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.716 [2024-05-15 11:38:18.206486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:205704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:14.716 [2024-05-15 11:38:18.206495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.716 [2024-05-15 11:38:18.206508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:205712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:14.716 [2024-05-15 11:38:18.206517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.716 [2024-05-15 11:38:18.206528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:205720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:14.716 [2024-05-15 11:38:18.206537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.716 [2024-05-15 11:38:18.206548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:205728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:14.716 [2024-05-15 11:38:18.206559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.716 [2024-05-15 11:38:18.206570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:205736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:14.716 [2024-05-15 11:38:18.206579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.716 [2024-05-15 11:38:18.206589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:205744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:14.716 [2024-05-15 11:38:18.206598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.716 [2024-05-15 11:38:18.206609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:205752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:14.716 [2024-05-15 11:38:18.206621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.716 [2024-05-15 11:38:18.206632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:205760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:14.716 [2024-05-15 11:38:18.206641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.716 [2024-05-15 11:38:18.206652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:205768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:14.716 [2024-05-15 11:38:18.206661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.716 [2024-05-15 11:38:18.206673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:205776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:14.716 [2024-05-15 11:38:18.206682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.716 [2024-05-15 11:38:18.206693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:205784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:14.716 [2024-05-15 11:38:18.206702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.716 [2024-05-15 11:38:18.206713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:205792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:14.716 [2024-05-15 11:38:18.206722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.716 [2024-05-15 11:38:18.206733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:205800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:14.716 [2024-05-15 11:38:18.206744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.716 [2024-05-15 11:38:18.206755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:205808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:14.716 [2024-05-15 11:38:18.206764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.716 [2024-05-15 11:38:18.206775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:205816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:14.716 [2024-05-15 11:38:18.206785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.716 [2024-05-15 11:38:18.206798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:204800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007700000 len:0x1000 key:0x180400
00:16:14.716 [2024-05-15 11:38:18.206809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.716 [2024-05-15 11:38:18.206821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:204808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007702000 len:0x1000 key:0x180400
00:16:14.716 [2024-05-15 11:38:18.206830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.716 [2024-05-15 11:38:18.206841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:204816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007704000 len:0x1000 key:0x180400
00:16:14.716 [2024-05-15 11:38:18.206851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.716 [2024-05-15 11:38:18.206864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:204824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007706000 len:0x1000 key:0x180400
00:16:14.716 [2024-05-15 11:38:18.206874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.716 [2024-05-15 11:38:18.206885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:204832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007708000 len:0x1000 key:0x180400
00:16:14.716 [2024-05-15 11:38:18.206895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.716 [2024-05-15 11:38:18.206906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:204840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770a000 len:0x1000 key:0x180400
00:16:14.716 [2024-05-15 11:38:18.206916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.716 [2024-05-15 11:38:18.206928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:204848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770c000 len:0x1000 key:0x180400
00:16:14.716 [2024-05-15 11:38:18.206937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.716 [2024-05-15 11:38:18.206948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:204856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770e000 len:0x1000 key:0x180400
00:16:14.716 [2024-05-15 11:38:18.206957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.717 [2024-05-15 11:38:18.206968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:204864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007710000 len:0x1000 key:0x180400
00:16:14.717 [2024-05-15 11:38:18.206979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.717 [2024-05-15 11:38:18.206990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:204872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007712000 len:0x1000 key:0x180400
00:16:14.717 [2024-05-15 11:38:18.206999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.717 [2024-05-15 11:38:18.207010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:204880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007714000 len:0x1000 key:0x180400
00:16:14.717 [2024-05-15 11:38:18.207019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.717 [2024-05-15 11:38:18.207031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:204888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007716000 len:0x1000 key:0x180400
00:16:14.717 [2024-05-15 11:38:18.207040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.717 [2024-05-15 11:38:18.207052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:204896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007718000 len:0x1000 key:0x180400
00:16:14.717 [2024-05-15 11:38:18.207065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.717 [2024-05-15 11:38:18.207076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:204904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771a000 len:0x1000 key:0x180400
00:16:14.717 [2024-05-15 11:38:18.207085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.717 [2024-05-15 11:38:18.207096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:204912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771c000 len:0x1000 key:0x180400
00:16:14.717 [2024-05-15 11:38:18.207107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.717 [2024-05-15 11:38:18.207118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:204920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771e000 len:0x1000 key:0x180400
00:16:14.717 [2024-05-15 11:38:18.207127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.717 [2024-05-15 11:38:18.207138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:204928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007720000 len:0x1000 key:0x180400
00:16:14.717 [2024-05-15 11:38:18.207148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.717 [2024-05-15 11:38:18.207159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:204936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007722000 len:0x1000 key:0x180400
00:16:14.717 [2024-05-15 11:38:18.207169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.717 [2024-05-15 11:38:18.207180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:204944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007724000 len:0x1000 key:0x180400
00:16:14.717 [2024-05-15 11:38:18.207190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.717 [2024-05-15 11:38:18.207201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:204952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007726000 len:0x1000 key:0x180400
00:16:14.717 [2024-05-15 11:38:18.207211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.717 [2024-05-15 11:38:18.207222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:204960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007728000 len:0x1000 key:0x180400
00:16:14.717 [2024-05-15 11:38:18.207231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.717 [2024-05-15 11:38:18.207243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:204968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772a000 len:0x1000 key:0x180400
00:16:14.717 [2024-05-15 11:38:18.207252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.717 [2024-05-15 11:38:18.207263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:204976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772c000 len:0x1000 key:0x180400
00:16:14.717 [2024-05-15 11:38:18.207273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.717 [2024-05-15 11:38:18.207285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:204984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772e000 len:0x1000 key:0x180400
00:16:14.717 [2024-05-15 11:38:18.207295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.717 [2024-05-15 11:38:18.207306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:204992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007730000 len:0x1000 key:0x180400
00:16:14.717 [2024-05-15 11:38:18.207315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.717 [2024-05-15 11:38:18.207326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:205000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007732000 len:0x1000 key:0x180400
00:16:14.717 [2024-05-15 11:38:18.207337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.717 [2024-05-15 11:38:18.207348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:205008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007734000 len:0x1000 key:0x180400
00:16:14.717 [2024-05-15 11:38:18.207357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.717 [2024-05-15 11:38:18.207368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:205016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007736000 len:0x1000 key:0x180400
00:16:14.717 [2024-05-15 11:38:18.207378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.717 [2024-05-15 11:38:18.207389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:205024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007738000 len:0x1000 key:0x180400
00:16:14.717 [2024-05-15 11:38:18.207399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.717 [2024-05-15 11:38:18.207410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:205032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773a000 len:0x1000 key:0x180400
00:16:14.717 [2024-05-15 11:38:18.207420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.717 [2024-05-15 11:38:18.207431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:205040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773c000 len:0x1000 key:0x180400
00:16:14.717 [2024-05-15 11:38:18.207441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.717 [2024-05-15 11:38:18.207452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:205048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773e000 len:0x1000 key:0x180400
00:16:14.717 [2024-05-15 11:38:18.207462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.717 [2024-05-15 11:38:18.207475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:205056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007740000 len:0x1000 key:0x180400
00:16:14.717 [2024-05-15 11:38:18.207485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.717 [2024-05-15 11:38:18.207497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:205064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007742000 len:0x1000 key:0x180400
00:16:14.717 [2024-05-15 11:38:18.207506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.717 [2024-05-15 11:38:18.207519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:205072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007744000 len:0x1000 key:0x180400
00:16:14.717 [2024-05-15 11:38:18.207528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.717 [2024-05-15 11:38:18.207539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:205080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007746000 len:0x1000 key:0x180400
00:16:14.717 [2024-05-15 11:38:18.207549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.717 [2024-05-15 11:38:18.207560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:205088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007748000 len:0x1000 key:0x180400
00:16:14.717 [2024-05-15 11:38:18.207576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.717 [2024-05-15 11:38:18.207587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:205096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774a000 len:0x1000 key:0x180400
00:16:14.717 [2024-05-15 11:38:18.207596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.717 [2024-05-15 11:38:18.207607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:205104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774c000 len:0x1000 key:0x180400
00:16:14.717 [2024-05-15 11:38:18.207617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.717 [2024-05-15 11:38:18.207628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:205112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774e000 len:0x1000 key:0x180400
00:16:14.717 [2024-05-15 11:38:18.207638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.717 [2024-05-15 11:38:18.207650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:205120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007750000 len:0x1000 key:0x180400
00:16:14.717 [2024-05-15 11:38:18.207659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.717 [2024-05-15 11:38:18.207670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:205128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007752000 len:0x1000 key:0x180400
00:16:14.717 [2024-05-15 11:38:18.207680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.717 [2024-05-15 11:38:18.207691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:205136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007754000 len:0x1000 key:0x180400
00:16:14.717 [2024-05-15 11:38:18.207701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.717 [2024-05-15 11:38:18.207713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:205144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007756000 len:0x1000 key:0x180400
00:16:14.717 [2024-05-15 11:38:18.207722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.717 [2024-05-15 11:38:18.207733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:205152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007758000 len:0x1000 key:0x180400
00:16:14.717 [2024-05-15 11:38:18.207742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.718 [2024-05-15 11:38:18.207754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:205160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775a000 len:0x1000 key:0x180400
00:16:14.718 [2024-05-15 11:38:18.207764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.718 [2024-05-15 11:38:18.207776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:205168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775c000 len:0x1000 key:0x180400
00:16:14.718 [2024-05-15 11:38:18.207785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.718 [2024-05-15 11:38:18.207796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:205176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775e000 len:0x1000 key:0x180400
00:16:14.718 [2024-05-15 11:38:18.207806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.718 [2024-05-15 11:38:18.207819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:205184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007760000 len:0x1000 key:0x180400
00:16:14.718 [2024-05-15 11:38:18.207828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.718 [2024-05-15 11:38:18.207839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:205192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007762000 len:0x1000 key:0x180400
00:16:14.718 [2024-05-15 11:38:18.207848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.718 [2024-05-15 11:38:18.207859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:205200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007764000 len:0x1000 key:0x180400
00:16:14.718 [2024-05-15 11:38:18.207870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.718 [2024-05-15 11:38:18.207882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:205208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007766000 len:0x1000 key:0x180400
00:16:14.718 [2024-05-15 11:38:18.207891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.718 [2024-05-15 11:38:18.207902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:205216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007768000 len:0x1000 key:0x180400
00:16:14.718 [2024-05-15 11:38:18.207911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.718 [2024-05-15 11:38:18.207923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:205224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776a000 len:0x1000 key:0x180400
00:16:14.718 [2024-05-15 11:38:18.207933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.718 [2024-05-15 11:38:18.207944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:205232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776c000 len:0x1000 key:0x180400
00:16:14.718 [2024-05-15 11:38:18.207953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.718 [2024-05-15 11:38:18.207964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:205240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776e000 len:0x1000 key:0x180400
00:16:14.718 [2024-05-15 11:38:18.207973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.718 [2024-05-15 11:38:18.207985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:205248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007770000 len:0x1000 key:0x180400
00:16:14.718 [2024-05-15 11:38:18.207995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.718 [2024-05-15 11:38:18.208007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:205256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007772000 len:0x1000 key:0x180400
00:16:14.718 [2024-05-15 11:38:18.208016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.718 [2024-05-15 11:38:18.208027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:205264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007774000 len:0x1000 key:0x180400
00:16:14.718 [2024-05-15 11:38:18.208036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.718 [2024-05-15 11:38:18.208049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:205272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007776000 len:0x1000 key:0x180400
00:16:14.718 [2024-05-15 11:38:18.208063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.718 [2024-05-15 11:38:18.208075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:205280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007778000 len:0x1000 key:0x180400
00:16:14.718 [2024-05-15 11:38:18.208084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.718 [2024-05-15 11:38:18.208095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:205288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777a000 len:0x1000 key:0x180400
00:16:14.718 [2024-05-15 11:38:18.208105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.718 [2024-05-15 11:38:18.208116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:205296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777c000 len:0x1000 key:0x180400
00:16:14.718 [2024-05-15 11:38:18.208127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.718 [2024-05-15 11:38:18.208139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:205304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777e000 len:0x1000 key:0x180400
00:16:14.718 [2024-05-15 11:38:18.208148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.718 [2024-05-15 11:38:18.208160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:205312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007780000 len:0x1000 key:0x180400
00:16:14.718 [2024-05-15 11:38:18.208172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.718 [2024-05-15 11:38:18.208184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:205320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007782000 len:0x1000 key:0x180400
00:16:14.718 [2024-05-15 11:38:18.208194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.718 [2024-05-15 11:38:18.208206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:205328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007784000 len:0x1000 key:0x180400
00:16:14.718 [2024-05-15 11:38:18.208215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.718 [2024-05-15 11:38:18.208226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:205336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007786000 len:0x1000 key:0x180400
00:16:14.718 [2024-05-15 11:38:18.208236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.718 [2024-05-15 11:38:18.208248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:205344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007788000 len:0x1000 key:0x180400
00:16:14.718 [2024-05-15 11:38:18.208257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.718 [2024-05-15 11:38:18.208268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:205352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778a000 len:0x1000 key:0x180400
00:16:14.718 [2024-05-15 11:38:18.208278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.718 [2024-05-15 11:38:18.208289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:205360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778c000 len:0x1000 key:0x180400
00:16:14.718 [2024-05-15 11:38:18.208309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.718 [2024-05-15 11:38:18.208321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:205368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778e000 len:0x1000 key:0x180400
00:16:14.718 [2024-05-15 11:38:18.208330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.718 [2024-05-15 11:38:18.208341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:205376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007790000 len:0x1000 key:0x180400
00:16:14.718 [2024-05-15 11:38:18.208351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.718 [2024-05-15 11:38:18.208363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:205384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007792000 len:0x1000 key:0x180400
00:16:14.718 [2024-05-15 11:38:18.208373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.718 [2024-05-15 11:38:18.208384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:205392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007794000 len:0x1000 key:0x180400
00:16:14.718 [2024-05-15 11:38:18.208394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.718 [2024-05-15 11:38:18.208405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:205400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007796000 len:0x1000 key:0x180400
00:16:14.719 [2024-05-15 11:38:18.208414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.719 [2024-05-15 11:38:18.208426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:205408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007798000 len:0x1000 key:0x180400
00:16:14.719 [2024-05-15 11:38:18.208436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.719 [2024-05-15 11:38:18.208447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:205416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779a000 len:0x1000 key:0x180400
00:16:14.719 [2024-05-15 11:38:18.208456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.719 [2024-05-15 11:38:18.208467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:205424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779c000 len:0x1000 key:0x180400
00:16:14.719 [2024-05-15 11:38:18.208476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.719 [2024-05-15 11:38:18.208488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:205432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779e000 len:0x1000 key:0x180400
00:16:14.719 [2024-05-15 11:38:18.208498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.719 [2024-05-15 11:38:18.208509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:205440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a0000 len:0x1000 key:0x180400
00:16:14.719 [2024-05-15 11:38:18.208519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.719 [2024-05-15 11:38:18.208529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:205448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a2000 len:0x1000 key:0x180400
00:16:14.719 [2024-05-15 11:38:18.208540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.719 [2024-05-15 11:38:18.208551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:205456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a4000 len:0x1000 key:0x180400
00:16:14.719 [2024-05-15 11:38:18.208561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.719 [2024-05-15 11:38:18.208573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:205464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a6000 len:0x1000 key:0x180400
00:16:14.719 [2024-05-15 11:38:18.208582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.719 [2024-05-15 11:38:18.208593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:205472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a8000 len:0x1000 key:0x180400
00:16:14.719 [2024-05-15 11:38:18.208602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.719 [2024-05-15 11:38:18.208615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:205480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077aa000 len:0x1000 key:0x180400
00:16:14.719 [2024-05-15 11:38:18.208625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.719 [2024-05-15 11:38:18.208636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:205488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ac000 len:0x1000 key:0x180400
00:16:14.719 [2024-05-15 11:38:18.208646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.719 [2024-05-15 11:38:18.208656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:205496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ae000 len:0x1000 key:0x180400
00:16:14.719 [2024-05-15 11:38:18.208666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.719 [2024-05-15 11:38:18.208678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:205504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b0000 len:0x1000 key:0x180400
00:16:14.719 [2024-05-15 11:38:18.208688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.719 [2024-05-15 11:38:18.208699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:205512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b2000 len:0x1000 key:0x180400
00:16:14.719 [2024-05-15 11:38:18.208708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.719 [2024-05-15 11:38:18.208719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:205520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b4000 len:0x1000 key:0x180400
00:16:14.719 [2024-05-15 11:38:18.208728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.719 [2024-05-15 11:38:18.208739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:205528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b6000 len:0x1000 key:0x180400
00:16:14.719 [2024-05-15 11:38:18.208749] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.719 [2024-05-15 11:38:18.208761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:205536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b8000 len:0x1000 key:0x180400
00:16:14.719 [2024-05-15 11:38:18.208772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0
00:16:14.719 [2024-05-15 11:38:18.221817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:16:14.719 [2024-05-15 11:38:18.221833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:16:14.719 [2024-05-15 11:38:18.221843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:205544 len:8 PRP1 0x0 PRP2 0x0
00:16:14.719 [2024-05-15 11:38:18.221853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:14.719 [2024-05-15 11:38:18.224257] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:16:14.719 [2024-05-15 11:38:18.224550] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19)
00:16:14.719 [2024-05-15 11:38:18.224569] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:16:14.719 [2024-05-15 11:38:18.224578] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
00:16:14.719 [2024-05-15 11:38:18.224600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:16:14.719 [2024-05-15 11:38:18.224611] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:16:14.719 [2024-05-15 11:38:18.225077] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
00:16:14.719 [2024-05-15 11:38:18.225094] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
00:16:14.719 [2024-05-15 11:38:18.225105] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
00:16:14.719 [2024-05-15 11:38:18.225131] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
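Each in-flight READ above completes with ABORTED - SQ DELETION (00/08), i.e. status code type 0x00 (generic) and status code 0x08: the completion a host sees when its submission queue is deleted under it during the reset. dnr:0 shows the Do Not Retry bit clear, so the commands are retryable once the controller comes back. A minimal sketch, assuming a hypothetical requeue_io() application hook, of how an SPDK I/O callback could classify these completions (the types and status codes are from spdk/nvme.h and spdk/nvme_spec.h):

#include "spdk/nvme.h"

/* Hypothetical application hook: resubmit an I/O once the reset finishes. */
extern void requeue_io(void *io_ctx);

static void
io_complete_cb(void *io_ctx, const struct spdk_nvme_cpl *cpl)
{
	if (!spdk_nvme_cpl_is_error(cpl)) {
		return;			/* normal completion */
	}

	/* "(00/08)" in the log is sct/sc: generic status, ABORTED - SQ DELETION. */
	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION &&
	    cpl->status.dnr == 0) {
		requeue_io(io_ctx);	/* queue was torn down; safe to retry */
		return;
	}

	/* Any other error would be surfaced to the caller here. */
}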
00:16:14.719 [2024-05-15 11:38:18.225142] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:16:14.719 [2024-05-15 11:38:19.227829] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19)
00:16:14.719 [2024-05-15 11:38:19.227871] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:16:14.719 [2024-05-15 11:38:19.227881] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
00:16:14.719 [2024-05-15 11:38:19.227903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:16:14.719 [2024-05-15 11:38:19.227913] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:16:14.719 [2024-05-15 11:38:19.227926] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
00:16:14.719 [2024-05-15 11:38:19.227936] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
00:16:14.719 [2024-05-15 11:38:19.227946] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
00:16:14.719 [2024-05-15 11:38:19.227974] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:14.719 [2024-05-15 11:38:19.227984] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:16:14.719 [2024-05-15 11:38:20.231147] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19)
00:16:14.719 [2024-05-15 11:38:20.231187] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:16:14.719 [2024-05-15 11:38:20.231197] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
00:16:14.719 [2024-05-15 11:38:20.231223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:16:14.719 [2024-05-15 11:38:20.231235] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:16:14.719 [2024-05-15 11:38:20.231248] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
00:16:14.719 [2024-05-15 11:38:20.231258] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
00:16:14.719 [2024-05-15 11:38:20.231268] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
00:16:14.719 [2024-05-15 11:38:20.231293] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
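The "CQ transport error -6 (No such device or address)" entries are spdk_nvme_qpair_process_completions() returning -ENXIO once the RDMA qpair is gone; that negative return value, not a completion, is what tells the caller the transport has failed. A sketch of the polling pattern, with handle_transport_loss() as a hypothetical application hook:

#include "spdk/nvme.h"

/* Hypothetical hook that kicks off the reset/reconnect path. */
extern void handle_transport_loss(struct spdk_nvme_ctrlr *ctrlr);

/* Poll one I/O qpair. A negative return (e.g. -ENXIO, the "-6" in the
 * log) means the transport is dead, not that a single command failed. */
static void
poll_qpair(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
{
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);

	if (rc < 0) {
		handle_transport_loss(ctrlr);
	}
}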
00:16:14.719 [2024-05-15 11:38:20.231304] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:16:14.719 [2024-05-15 11:38:22.236357] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:16:14.719 [2024-05-15 11:38:22.236401] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
00:16:14.719 [2024-05-15 11:38:22.236428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:16:14.719 [2024-05-15 11:38:22.236440] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:16:14.719 [2024-05-15 11:38:22.236462] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
00:16:14.719 [2024-05-15 11:38:22.236471] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
00:16:14.719 [2024-05-15 11:38:22.236483] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
00:16:14.719 [2024-05-15 11:38:22.236516] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:14.719 [2024-05-15 11:38:22.236528] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:16:14.719 [2024-05-15 11:38:24.241453] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:16:14.719 [2024-05-15 11:38:24.241486] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
00:16:14.720 [2024-05-15 11:38:24.241514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:16:14.720 [2024-05-15 11:38:24.241525] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:16:14.720 [2024-05-15 11:38:24.241538] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
00:16:14.720 [2024-05-15 11:38:24.241548] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
00:16:14.720 [2024-05-15 11:38:24.241559] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
00:16:14.720 [2024-05-15 11:38:24.241584] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:14.720 [2024-05-15 11:38:24.241595] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:16:14.720 [2024-05-15 11:38:26.246521] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:16:14.720 [2024-05-15 11:38:26.246554] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
00:16:14.720 [2024-05-15 11:38:26.246579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:16:14.720 [2024-05-15 11:38:26.246590] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
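The cycle above ("resetting controller", RDMA address resolution error, "controller reinitialization failed", "Resetting controller failed.") repeats with attempts spaced one to two seconds apart (11:38:19, :20, :22, :24, :26) while the test keeps the target-side interface down. A minimal sketch of the async reset flow these messages come from, assuming an SPDK release that exposes the spdk_nvme_ctrlr_reconnect_*_async API named in the log; in a real application the poll would be driven from a poller rather than a busy loop:

#include <errno.h>
#include "spdk/nvme.h"

/* One reset attempt: drop the connection, then poll the reconnect.
 * Returns 0 on success, negative errno on failure (this log shows
 * several failures before "Resetting controller successful."). */
static int
reset_ctrlr_once(struct spdk_nvme_ctrlr *ctrlr)
{
	int rc;

	rc = spdk_nvme_ctrlr_disconnect(ctrlr);	/* logs "resetting controller" */
	if (rc != 0) {
		return rc;			/* e.g. -EBUSY: reset already running */
	}

	spdk_nvme_ctrlr_reconnect_async(ctrlr);

	do {
		rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
	} while (rc == -EAGAIN);		/* still connecting */

	return rc;
}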
00:16:14.720 [2024-05-15 11:38:26.246603] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
00:16:14.720 [2024-05-15 11:38:26.246618] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
00:16:14.720 [2024-05-15 11:38:26.246628] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
00:16:14.720 [2024-05-15 11:38:26.246656] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:14.720 [2024-05-15 11:38:26.246667] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:16:14.720 [2024-05-15 11:38:27.299800] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:16:14.720 [2024-05-15 11:38:27.350583] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:16:14.720 [2024-05-15 11:38:27.350612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:16:14.720 [2024-05-15 11:38:27.350624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32656 cdw0:16 sqhd:a3b9 p:0 m:0 dnr:0
00:16:14.720 [2024-05-15 11:38:27.350635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:16:14.720 [2024-05-15 11:38:27.350645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32656 cdw0:16 sqhd:a3b9 p:0 m:0 dnr:0
00:16:14.720 [2024-05-15 11:38:27.350656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:16:14.720 [2024-05-15 11:38:27.350665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32656 cdw0:16 sqhd:a3b9 p:0 m:0 dnr:0
00:16:14.720 [2024-05-15 11:38:27.350675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:16:14.720 [2024-05-15 11:38:27.350685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32656 cdw0:16 sqhd:a3b9 p:0 m:0 dnr:0
00:16:14.720 [2024-05-15 11:38:27.352551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:16:14.720 [2024-05-15 11:38:27.352567] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state.
00:16:14.720 [2024-05-15 11:38:27.352598] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:16:14.720 [2024-05-15 11:38:27.360591] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.720 [2024-05-15 11:38:27.370617] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.720 [2024-05-15 11:38:27.380643] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.720 [2024-05-15 11:38:27.390667] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:14.720 [2024-05-15 11:38:27.400691] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.410718] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.420746] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.430774] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.440798] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.450825] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.460851] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.470878] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.480906] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.490930] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.500958] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.510983] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.521011] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.531039] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.541067] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.551092] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.561118] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.571144] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.581171] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.591197] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.601222] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.611250] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.621277] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:16:14.720 [2024-05-15 11:38:27.631302] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.641330] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.651357] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.661382] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.671409] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.681433] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.691460] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.701485] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.711511] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.721537] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.731563] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.741590] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.751617] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.761642] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.771668] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.781694] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.791719] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.801745] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.811772] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.821799] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.831824] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.841852] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.851877] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:16:14.720 [2024-05-15 11:38:27.861905] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.871930] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.881956] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.891981] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.902006] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.912032] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.922064] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.932090] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.942117] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.952144] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.962171] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.972196] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.982222] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.720 [2024-05-15 11:38:27.992249] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.721 [2024-05-15 11:38:28.002275] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.721 [2024-05-15 11:38:28.012302] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.721 [2024-05-15 11:38:28.022329] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.721 [2024-05-15 11:38:28.032356] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.721 [2024-05-15 11:38:28.042381] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.721 [2024-05-15 11:38:28.052406] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.721 [2024-05-15 11:38:28.062431] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.721 [2024-05-15 11:38:28.072458] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.721 [2024-05-15 11:38:28.082485] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:16:14.721 [2024-05-15 11:38:28.092512] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.721 [2024-05-15 11:38:28.102540] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.721 [2024-05-15 11:38:28.112566] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.721 [2024-05-15 11:38:28.122593] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.721 [2024-05-15 11:38:28.132620] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.721 [2024-05-15 11:38:28.142647] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.721 [2024-05-15 11:38:28.152674] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.721 [2024-05-15 11:38:28.162701] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.721 [2024-05-15 11:38:28.172727] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.721 [2024-05-15 11:38:28.182752] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.721 [2024-05-15 11:38:28.192780] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.721 [2024-05-15 11:38:28.202805] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.721 [2024-05-15 11:38:28.212829] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.721 [2024-05-15 11:38:28.222855] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.721 [2024-05-15 11:38:28.232883] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.721 [2024-05-15 11:38:28.242909] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.721 [2024-05-15 11:38:28.252934] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.721 [2024-05-15 11:38:28.263497] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.721 [2024-05-15 11:38:28.273524] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.721 [2024-05-15 11:38:28.285545] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.721 [2024-05-15 11:38:28.295571] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.721 [2024-05-15 11:38:28.306677] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.721 [2024-05-15 11:38:28.317526] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:16:14.721 [2024-05-15 11:38:28.327735] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.721 [2024-05-15 11:38:28.339243] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.721 [2024-05-15 11:38:28.349269] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:14.721 [2024-05-15 11:38:28.355223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:38664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000793c000 len:0x1000 key:0x1be800 00:16:14.721 [2024-05-15 11:38:28.355249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.721 [2024-05-15 11:38:28.355268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:38672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000793a000 len:0x1000 key:0x1be800 00:16:14.721 [2024-05-15 11:38:28.355278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.721 [2024-05-15 11:38:28.355291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007938000 len:0x1000 key:0x1be800 00:16:14.721 [2024-05-15 11:38:28.355301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.721 [2024-05-15 11:38:28.355313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:38688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007936000 len:0x1000 key:0x1be800 00:16:14.721 [2024-05-15 11:38:28.355322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.721 [2024-05-15 11:38:28.355334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:38696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007934000 len:0x1000 key:0x1be800 00:16:14.721 [2024-05-15 11:38:28.355343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.721 [2024-05-15 11:38:28.355355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007932000 len:0x1000 key:0x1be800 00:16:14.721 [2024-05-15 11:38:28.355365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.721 [2024-05-15 11:38:28.355377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:38712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007930000 len:0x1000 key:0x1be800 00:16:14.721 [2024-05-15 11:38:28.355387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.721 [2024-05-15 11:38:28.355398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792e000 len:0x1000 key:0x1be800 00:16:14.721 [2024-05-15 11:38:28.355407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 
cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.721 [2024-05-15 11:38:28.355418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:38728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792c000 len:0x1000 key:0x1be800 00:16:14.721 [2024-05-15 11:38:28.355429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.721 [2024-05-15 11:38:28.355440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:38736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792a000 len:0x1000 key:0x1be800 00:16:14.721 [2024-05-15 11:38:28.355450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.721 [2024-05-15 11:38:28.355461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:38744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007928000 len:0x1000 key:0x1be800 00:16:14.721 [2024-05-15 11:38:28.355471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.721 [2024-05-15 11:38:28.355482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:38752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007926000 len:0x1000 key:0x1be800 00:16:14.721 [2024-05-15 11:38:28.355491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.721 [2024-05-15 11:38:28.355505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:38760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007924000 len:0x1000 key:0x1be800 00:16:14.721 [2024-05-15 11:38:28.355514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.721 [2024-05-15 11:38:28.355525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007922000 len:0x1000 key:0x1be800 00:16:14.721 [2024-05-15 11:38:28.355535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.721 [2024-05-15 11:38:28.355548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:38776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007920000 len:0x1000 key:0x1be800 00:16:14.721 [2024-05-15 11:38:28.355558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.721 [2024-05-15 11:38:28.355569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791e000 len:0x1000 key:0x1be800 00:16:14.721 [2024-05-15 11:38:28.355578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.721 [2024-05-15 11:38:28.355589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:38792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791c000 len:0x1000 key:0x1be800 00:16:14.721 [2024-05-15 11:38:28.355599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 
00:16:14.721 [2024-05-15 11:38:28.355612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:38800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791a000 len:0x1000 key:0x1be800 00:16:14.721 [2024-05-15 11:38:28.355623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.721 [2024-05-15 11:38:28.355634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:38808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007918000 len:0x1000 key:0x1be800 00:16:14.721 [2024-05-15 11:38:28.355643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.721 [2024-05-15 11:38:28.355655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:38816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007916000 len:0x1000 key:0x1be800 00:16:14.721 [2024-05-15 11:38:28.355664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.721 [2024-05-15 11:38:28.355675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:38824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007914000 len:0x1000 key:0x1be800 00:16:14.721 [2024-05-15 11:38:28.355684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.721 [2024-05-15 11:38:28.355697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:38832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007912000 len:0x1000 key:0x1be800 00:16:14.721 [2024-05-15 11:38:28.355707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.721 [2024-05-15 11:38:28.355718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:38840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007910000 len:0x1000 key:0x1be800 00:16:14.721 [2024-05-15 11:38:28.355728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.721 [2024-05-15 11:38:28.355741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:38848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790e000 len:0x1000 key:0x1be800 00:16:14.722 [2024-05-15 11:38:28.355750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.722 [2024-05-15 11:38:28.355761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:38856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790c000 len:0x1000 key:0x1be800 00:16:14.722 [2024-05-15 11:38:28.355772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.722 [2024-05-15 11:38:28.355783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:38864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790a000 len:0x1000 key:0x1be800 00:16:14.722 [2024-05-15 11:38:28.355792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.722 [2024-05-15 
11:38:28.355803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:38872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007908000 len:0x1000 key:0x1be800 00:16:14.722 [2024-05-15 11:38:28.355813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.722 [2024-05-15 11:38:28.355824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007906000 len:0x1000 key:0x1be800 00:16:14.722 [2024-05-15 11:38:28.355833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.722 [2024-05-15 11:38:28.355846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:38888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007904000 len:0x1000 key:0x1be800 00:16:14.722 [2024-05-15 11:38:28.355855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.722 [2024-05-15 11:38:28.355866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:38896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007902000 len:0x1000 key:0x1be800 00:16:14.722 [2024-05-15 11:38:28.355875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.722 [2024-05-15 11:38:28.355886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:38904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007900000 len:0x1000 key:0x1be800 00:16:14.722 [2024-05-15 11:38:28.355896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.722 [2024-05-15 11:38:28.355908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:38912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.722 [2024-05-15 11:38:28.355918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.722 [2024-05-15 11:38:28.355928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.722 [2024-05-15 11:38:28.355938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.722 [2024-05-15 11:38:28.355950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:38928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.722 [2024-05-15 11:38:28.355960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.722 [2024-05-15 11:38:28.355972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:38936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.722 [2024-05-15 11:38:28.355982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.722 [2024-05-15 11:38:28.355993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:38944 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:16:14.722 [2024-05-15 11:38:28.356003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.722 [2024-05-15 11:38:28.356013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:38952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.722 [2024-05-15 11:38:28.356023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.722 [2024-05-15 11:38:28.356034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:38960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.722 [2024-05-15 11:38:28.356043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.722 [2024-05-15 11:38:28.356054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:38968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.722 [2024-05-15 11:38:28.356067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.722 [2024-05-15 11:38:28.356078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:38976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.722 [2024-05-15 11:38:28.356087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.722 [2024-05-15 11:38:28.356098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:38984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.722 [2024-05-15 11:38:28.356108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.722 [2024-05-15 11:38:28.356119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.722 [2024-05-15 11:38:28.356129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.722 [2024-05-15 11:38:28.356139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:39000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.722 [2024-05-15 11:38:28.356149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.722 [2024-05-15 11:38:28.356159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:39008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.722 [2024-05-15 11:38:28.356169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.722 [2024-05-15 11:38:28.356180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.722 [2024-05-15 11:38:28.356190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 00:16:14.722 [2024-05-15 11:38:28.356201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
[log condensed] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion repeated the same NOTICE pair for every queued WRITE (sqid:1, nsid:1, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) from lba:39024 through lba:39672 in steps of 8; each was completed as ABORTED - SQ DELETION (00/08) qid:1 cid:32656 cdw0:7b9c32d0 sqhd:7530 p:0 m:0 dnr:0 (timestamps 2024-05-15 11:38:28.356210 - 11:38:28.357898)
[2024-05-15 11:38:28.370901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2024-05-15 11:38:28.370916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-05-15 11:38:28.370926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39680 len:8 PRP1 0x0 PRP2 0x0
[2024-05-15 11:38:28.370936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-05-15 11:38:28.370989] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller
[2024-05-15 11:38:28.373312] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19)
[2024-05-15 11:38:28.373334] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
[2024-05-15 11:38:28.373342] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280
[2024-05-15 11:38:28.373361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
[2024-05-15 11:38:28.373372] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state.
[2024-05-15 11:38:28.373399] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state
[2024-05-15 11:38:28.373409] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed
[2024-05-15 11:38:28.373422] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state
[2024-05-15 11:38:28.373447] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[log condensed] bdev_nvme re-armed and retried the controller reset at 11:38:28.373457, 29.378742, 30.384207, 32.389502, 34.396118, 35.400752 and 37.408616; each failed attempt logged the same sequence as above: nvme_ctrlr_disconnect *NOTICE* [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller; nvme_rdma_addr_resolved *ERROR* RDMA address resolution error (the first attempts also reported RDMA_CM_EVENT_ADDR_ERROR (1) from the CM event channel, status = -19); nvme_rdma_qpair_process_completions *ERROR* Failed to connect rqpair=0x2000192e4280; spdk_nvme_qpair_process_completions *ERROR* CQ transport error -6 (No such device or address) on qpair id 0; nvme_ctrlr_fail / nvme_ctrlr_process_init / spdk_nvme_ctrlr_reconnect_poll_async: controller in failed state, reinitialization failed; _bdev_nvme_reset_ctrlr_complete *ERROR* Resetting controller failed.
[2024-05-15 11:38:34.396010] bdev_nvme.c:2873:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Reset is already in progress. Defer failover until reset completes.
[2024-05-15 11:38:37.408616] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller
[2024-05-15 11:38:38.466596] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
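The attempts land roughly one to two seconds apart, consistent with a short reconnect delay plus the time spent failing RDMA address resolution each round. The cadence can be pulled straight out of the capture (a rough sketch; it assumes all attempts fall within the same clock minute, as they do here):

# seconds between consecutive 'resetting controller' notices
grep 'nvme_ctrlr_disconnect' try.txt | sed 's/.* 11:38:\([0-9.]*\)].*/\1/' | \
  awk 'NR > 1 { printf "+%.3f s\n", $1 - prev } { prev = $1 }'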

Latency(us)
Device Information                          : runtime(s)     IOPS      MiB/s    Fail/s     TO/s    Average        min         max
Job: Nvme_mlx_0_0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
    Verification LBA range: start 0x0 length 0x8000
    Nvme_mlx_0_0n1                          :      90.01  11091.97      43.33      0.00     0.00   11522.08    1966.08 11087551.44
Job: Nvme_mlx_0_1n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
    Verification LBA range: start 0x0 length 0x8000
    Nvme_mlx_0_1n1                          :      90.01   9557.72      37.33      0.00     0.00   13373.31    2393.49 12079595.52
===================================================================================================================
Total                                       :            20649.69      80.66      0.00     0.00   12378.92    1966.08 12079595.52

Received shutdown signal, test time was about 90.000000 seconds

Latency(us)
Device Information                          : runtime(s)     IOPS      MiB/s    Fail/s     TO/s    Average        min         max
===================================================================================================================
Total                                       :                 0.00       0.00      0.00     0.00       0.00       0.00        0.00

11:39:42 -- target/device_removal.sh@123 -- # trap - SIGINT SIGTERM EXIT
11:39:42 -- target/device_removal.sh@124 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt
11:39:42 -- target/device_removal.sh@202 -- # killprocess 3022732
11:39:42 -- common/autotest_common.sh@946 -- # '[' -z 3022732 ']'
11:39:42 -- common/autotest_common.sh@950 -- # kill -0 3022732
11:39:42 -- common/autotest_common.sh@951 -- # uname
11:39:42 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
11:39:42 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3022732
11:39:42 -- common/autotest_common.sh@952 -- # process_name=reactor_0
11:39:42 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
11:39:42 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3022732'
killing process with pid 3022732
11:39:42 -- common/autotest_common.sh@965 -- # kill 3022732
[2024-05-15 11:39:42.742355] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
11:39:42 -- common/autotest_common.sh@970 -- # wait 3022732
[2024-05-15 11:39:42.770441] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048
11:39:43 -- target/device_removal.sh@203 -- # nvmfpid=
11:39:43 -- target/device_removal.sh@205 -- # return 0

real    1m33.346s
user    4m25.496s
sys     0m4.633s
11:39:43 -- common/autotest_common.sh@1122 -- # xtrace_disable
11:39:43 -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST nvmf_device_removal_pci_remove_no_srq
************************************
11:39:43 -- target/device_removal.sh@312 -- # run_test nvmf_device_removal_pci_remove test_remove_and_rescan
11:39:43 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
11:39:43 -- common/autotest_common.sh@1103 -- # xtrace_disable
11:39:43 -- common/autotest_common.sh@10 -- # set +x
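At the 4096-byte I/O size the MiB/s column is just IOPS scaled by 4096/2^20. A one-liner to re-derive it from the IOPS figures above (plain awk, shown only as a consistency check):

awk 'BEGIN { printf "%.2f %.2f %.2f\n", 11091.97*4096/1048576, 9557.72*4096/1048576, 20649.69*4096/1048576 }'
# -> 43.33 37.33 80.66, matching the MiB/s column above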
************************************
START TEST nvmf_device_removal_pci_remove
************************************
11:39:43 -- common/autotest_common.sh@1121 -- # test_remove_and_rescan
11:39:43 -- target/device_removal.sh@128 -- # nvmfappstart -m 0x3
11:39:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
11:39:43 -- common/autotest_common.sh@720 -- # xtrace_disable
11:39:43 -- common/autotest_common.sh@10 -- # set +x
11:39:43 -- nvmf/common.sh@470 -- # nvmfpid=3035691
11:39:43 -- nvmf/common.sh@471 -- # waitforlisten 3035691
11:39:43 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
11:39:43 -- common/autotest_common.sh@827 -- # '[' -z 3035691 ']'
11:39:43 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
11:39:43 -- common/autotest_common.sh@832 -- # local max_retries=100
11:39:43 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
11:39:43 -- common/autotest_common.sh@836 -- # xtrace_disable
11:39:43 -- common/autotest_common.sh@10 -- # set +x
[2024-05-15 11:39:43.236886] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization...
[2024-05-15 11:39:43.236944] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-05-15 11:39:43.310267] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2
[2024-05-15 11:39:43.393567] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-05-15 11:39:43.393613] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-05-15 11:39:43.393622] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-05-15 11:39:43.393630] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-05-15 11:39:43.393638] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
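For anyone replaying this phase by hand, it reduces to launching the target with the same core and tracepoint masks and polling its RPC socket, roughly what waitforlisten does (a minimal sketch, assuming a built SPDK tree; rpc_get_methods is used here only as a liveness probe):

./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
# poll the default RPC socket until it answers
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
  sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is up"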
[2024-05-15 11:39:43.393696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
[2024-05-15 11:39:43.393698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
11:39:44 -- common/autotest_common.sh@856 -- # (( i == 0 ))
11:39:44 -- common/autotest_common.sh@860 -- # return 0
11:39:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
11:39:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
11:39:44 -- target/device_removal.sh@130 -- # create_subsystem_and_connect
11:39:44 -- target/device_removal.sh@45 -- # local -gA netdev_nvme_dict
11:39:44 -- target/device_removal.sh@46 -- # netdev_nvme_dict=()
11:39:44 -- target/device_removal.sh@48 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
[xtrace scaffolding around each rpc_cmd (xtrace_disable / set +x / [[ 0 == 0 ]]) condensed here and below]
[2024-05-15 11:39:44.110702] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1171930/0x1175e20) succeed.
[2024-05-15 11:39:44.119787] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1172e30/0x11b74b0) succeed.
11:39:44 -- target/device_removal.sh@49 -- # get_rdma_if_list
11:39:44 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs
11:39:44 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs
11:39:44 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net
11:39:44 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
11:39:44 -- nvmf/common.sh@96 -- # (( 2 == 0 ))
11:39:44 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
11:39:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
11:39:44 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
11:39:44 -- nvmf/common.sh@104 -- # echo mlx_0_0
11:39:44 -- nvmf/common.sh@105 -- # continue 2
11:39:44 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
11:39:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
11:39:44 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
11:39:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
11:39:44 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
11:39:44 -- nvmf/common.sh@104 -- # echo mlx_0_1
11:39:44 -- nvmf/common.sh@105 -- # continue 2
11:39:44 -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list)
11:39:44 -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_0
11:39:44 -- target/device_removal.sh@25 -- # local -a dev_name
11:39:44 -- target/device_removal.sh@27 -- # dev_name=mlx_0_0
11:39:44 -- target/device_removal.sh@28 -- # malloc_name=mlx_0_0
11:39:44 -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_0
11:39:44 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0
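get_rdma_if_list resolves which netdevs sit behind the RDMA devices the target just registered. The same mapping is readable straight from sysfs (a rough sketch of the idea, not the helper's exact implementation):

# for each RDMA device, print its backing netdev and PCI address
for ib in /sys/class/infiniband/*; do
  echo "$(basename "$ib") -> netdev: $(ls "$ib"/device/net) pci: $(basename "$(readlink -f "$ib"/device)")"
done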
11:39:44 -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0
11:39:44 -- target/device_removal.sh@30 -- # get_ip_address mlx_0_0
11:39:44 -- nvmf/common.sh@112 -- # interface=mlx_0_0
11:39:44 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0
11:39:44 -- nvmf/common.sh@113 -- # awk '{print $4}'
11:39:44 -- nvmf/common.sh@113 -- # cut -d/ -f1
11:39:44 -- target/device_removal.sh@30 -- # ip=192.168.100.8
11:39:44 -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_0
11:39:44 -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128
11:39:44 -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512
11:39:44 -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_0
11:39:44 -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0
11:39:44 -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0
11:39:44 -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420
[2024-05-15 11:39:44.310098] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
[2024-05-15 11:39:44.310471] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
11:39:44 -- target/device_removal.sh@41 -- # return 0
11:39:44 -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_0
11:39:44 -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list)
11:39:44 -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_1
11:39:44 -- target/device_removal.sh@25 -- # local -a dev_name
11:39:44 -- target/device_removal.sh@27 -- # dev_name=mlx_0_1
11:39:44 -- target/device_removal.sh@28 -- # malloc_name=mlx_0_1
11:39:44 -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_1
11:39:44 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1
11:39:44 -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1
11:39:44 -- target/device_removal.sh@30 -- # get_ip_address mlx_0_1
11:39:44 -- nvmf/common.sh@112 -- # interface=mlx_0_1
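Those four rpc_cmd calls are the entire per-port provisioning step. Written out in plain rpc.py form (a sketch of the same sequence; note the -t/-a/-s listener spelling is exactly what triggers the transport-vs-trtype deprecation warning above):

./scripts/rpc.py bdev_malloc_create 128 512 -b mlx_0_0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420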
11:39:44 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1
11:39:44 -- nvmf/common.sh@113 -- # awk '{print $4}'
11:39:44 -- nvmf/common.sh@113 -- # cut -d/ -f1
11:39:44 -- target/device_removal.sh@30 -- # ip=192.168.100.9
11:39:44 -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_1
11:39:44 -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128
11:39:44 -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512
11:39:44 -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_1
11:39:44 -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_1 -a -s SPDK000mlx_0_1
11:39:44 -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_1 mlx_0_1
11:39:44 -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420
[2024-05-15 11:39:44.395579] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 ***
11:39:44 -- target/device_removal.sh@41 -- # return 0
11:39:44 -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_1
11:39:44 -- target/device_removal.sh@53 -- # return 0
11:39:44 -- target/device_removal.sh@132 -- # generate_io_traffic_with_bdevperf mlx_0_0 mlx_0_1
11:39:44 -- target/device_removal.sh@87 -- # dev_names=('mlx_0_0' 'mlx_0_1')
11:39:44 -- target/device_removal.sh@87 -- # local dev_names
11:39:44 -- target/device_removal.sh@89 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
11:39:44 -- target/device_removal.sh@91 -- # bdevperf_pid=3035834
11:39:44 -- target/device_removal.sh@93 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; kill -9 $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
11:39:44 -- target/device_removal.sh@90 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
11:39:44 -- target/device_removal.sh@94 -- # waitforlisten 3035834 /var/tmp/bdevperf.sock
11:39:44 -- common/autotest_common.sh@827 -- # '[' -z 3035834 ']'
11:39:44 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
11:39:44 -- common/autotest_common.sh@832 -- # local max_retries=100
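The initiator side is a single bdevperf started idle in -z (wait-for-RPC) mode; everything that follows drives it over /var/tmp/bdevperf.sock. Reproduced as a sketch:

# core 2 only, queue depth 128, 4 KiB verify workload, 90 s run, RPC-driven
./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
bdevperf_pid=$!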
11:39:44 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
11:39:45 -- common/autotest_common.sh@856 -- # (( i == 0 ))
11:39:45 -- common/autotest_common.sh@860 -- # return 0
11:39:45 -- target/device_removal.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
11:39:45 -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}"
11:39:45 -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_0
11:39:45 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0
11:39:45 -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0
11:39:45 -- target/device_removal.sh@102 -- # get_ip_address mlx_0_0
11:39:45 -- nvmf/common.sh@112 -- # interface=mlx_0_0
11:39:45 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0
11:39:45 -- nvmf/common.sh@113 -- # awk '{print $4}'
11:39:45 -- nvmf/common.sh@113 -- # cut -d/ -f1
11:39:45 -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.8
11:39:45 -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1
Nvme_mlx_0_0n1
11:39:45 -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}"
11:39:45 -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_1
11:39:45 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1
11:39:45 -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1
11:39:45 -- target/device_removal.sh@102 -- # get_ip_address mlx_0_1
11:39:45 -- nvmf/common.sh@112 -- # interface=mlx_0_1
11:39:45 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1
11:39:45 -- nvmf/common.sh@113 -- # cut -d/ -f1
11:39:45 -- nvmf/common.sh@113 -- # awk '{print $4}'
11:39:45 -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.9
11:39:45 -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_1 -l -1 -o 1
Nvme_mlx_0_1n1
11:39:45 -- target/device_removal.sh@110 -- # bdevperf_rpc_pid=3035976
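Attaching the two remote controllers is symmetric, so it condenses to a loop (a sketch of the two calls above; reading -l -1 -o 1 as an unbounded controller-loss timeout with a 1 s reconnect delay is an assumption on my part, though it matches the 1-2 s retry cadence seen after the removal):

for i in 0 1; do
  ip=$([ "$i" -eq 0 ] && echo 192.168.100.8 || echo 192.168.100.9)
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme_mlx_0_$i -t rdma -a "$ip" -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:system_mlx_0_$i -l -1 -o 1   # -l/-o semantics assumed, see above
done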
11:39:45 -- target/device_removal.sh@112 -- # sleep 5
11:39:45 -- target/device_removal.sh@109 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
11:39:50 -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}"
11:39:50 -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_0
11:39:50 -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_0
11:39:50 -- target/device_removal.sh@71 -- # dev_name=mlx_0_0
11:39:50 -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_0
11:39:50 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0
11:39:50 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.0/net/mlx_0_0/device
11:39:50 -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/infiniband
11:39:50 -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_0
11:39:50 -- target/device_removal.sh@137 -- # get_ip_address mlx_0_0
11:39:50 -- nvmf/common.sh@112 -- # interface=mlx_0_0
11:39:50 -- nvmf/common.sh@113 -- # cut -d/ -f1
11:39:50 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0
11:39:50 -- nvmf/common.sh@113 -- # awk '{print $4}'
11:39:50 -- target/device_removal.sh@137 -- # origin_ip=192.168.100.8
11:39:50 -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_0
11:39:50 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0
11:39:50 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.0/net/mlx_0_0/device
11:39:50 -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0
11:39:50 -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0
11:39:50 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0
11:39:50 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats
11:39:50 -- target/device_removal.sh@77 -- # grep mlx5_0
11:39:50 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name'
mlx5_0
11:39:50 -- target/device_removal.sh@78 -- # return 0
11:39:50 -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_0
11:39:50 -- target/device_removal.sh@66 -- # dev_name=mlx_0_0
11:39:50 -- target/device_removal.sh@67 -- # echo 1
11:39:50 -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_0
11:39:50 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0
11:39:50 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.0/net/mlx_0_0/device
[2024-05-15 11:39:50.648820] rdma.c:3563:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.8:4420 on device mlx5_0 is being removed.
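remove_one_nic is the heart of the test: it simulates a surprise hot-unplug by writing to the function's sysfs remove node, which is what produces the port-removal notice above. In isolation (a sketch; the path comes from the readlink in the trace, the write needs root, and the nvmf_get_stats probe is the same one the harness uses):

pci_dir=$(readlink -f /sys/bus/pci/devices/0000:18:00.0/net/mlx_0_0/device)
echo 1 | sudo tee "$pci_dir/remove"    # kernel detaches the function; mlx5_0 disappears
# the presence probe from above should now come up empty
./scripts/rpc.py nvmf_get_stats | jq -r '.poll_groups[0].transports[].devices[].name' | grep mlx5_0 \
  || echo "mlx5_0 gone from the target"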
00:16:20.266 [2024-05-15 11:39:50.649657] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno No such device or address (6) 00:16:20.266 [2024-05-15 11:39:50.651176] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:16:20.266 [2024-05-15 11:39:50.651201] rdma.c: 845:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 96 00:16:25.542 11:39:56 -- target/device_removal.sh@147 -- # seq 1 10 00:16:25.542 11:39:56 -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:16:25.542 11:39:56 -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:16:25.542 11:39:56 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:16:25.542 11:39:56 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:16:25.542 11:39:56 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:16:25.542 11:39:56 -- target/device_removal.sh@77 -- # grep mlx5_0 00:16:25.542 11:39:56 -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.542 11:39:56 -- common/autotest_common.sh@10 -- # set +x 00:16:25.542 11:39:56 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.801 11:39:56 -- target/device_removal.sh@78 -- # return 1 00:16:25.801 11:39:56 -- target/device_removal.sh@149 -- # break 00:16:25.801 11:39:56 -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:16:25.801 11:39:56 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:16:25.801 11:39:56 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:16:25.801 11:39:56 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:16:25.801 11:39:56 -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.801 11:39:56 -- common/autotest_common.sh@10 -- # set +x 00:16:25.801 11:39:56 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.801 11:39:56 -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:16:25.801 11:39:56 -- target/device_removal.sh@160 -- # rescan_pci 00:16:25.801 11:39:56 -- target/device_removal.sh@57 -- # echo 1 00:16:26.738 [2024-05-15 11:39:57.254569] rdma.c:3252:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0x13f3cb0, err 11. Skip rescan. 00:16:26.738 [2024-05-15 11:39:57.260214] rdma.c:3252:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0x13f3cb0, err 11. Skip rescan. 00:16:26.738 11:39:57 -- target/device_removal.sh@162 -- # seq 1 10 00:16:26.738 11:39:57 -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:16:26.738 11:39:57 -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/net 00:16:26.738 11:39:57 -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_0 00:16:26.738 11:39:57 -- target/device_removal.sh@164 -- # [[ -z mlx_0_0 ]] 00:16:26.738 11:39:57 -- target/device_removal.sh@166 -- # [[ mlx_0_0 != \m\l\x\_\0\_\0 ]] 00:16:26.738 11:39:57 -- target/device_removal.sh@171 -- # break 00:16:26.738 11:39:57 -- target/device_removal.sh@175 -- # [[ -z mlx_0_0 ]] 00:16:26.738 11:39:57 -- target/device_removal.sh@179 -- # ip link set mlx_0_0 up 00:16:26.996 [2024-05-15 11:39:57.652681] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1174850/0x1175e20) succeed. 00:16:26.996 [2024-05-15 11:39:57.652750] rdma.c:3305:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.8:4420 is still failed(-1) to listen. 
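Stripped of its retry loops, that is the whole remove/restore cycle for mlx_0_0. A minimal sketch, run as root: the sysfs remove path and the jq filter are taken from the trace, while the target of rescan_pci's bare `echo 1` is assumed to be /sys/bus/pci/rescan, the standard kernel rescan knob (the trace shows only the echo itself):

    # resolve the PCI function behind the netdev and hot-remove it
    pci_dir=$(readlink -f /sys/bus/pci/devices/0000:18:00.0/net/mlx_0_0/device)
    echo 1 > "$pci_dir/remove"

    # poll the nvmf target until the ibv device drops out of nvmf_get_stats
    while ./scripts/rpc.py nvmf_get_stats \
            | jq -r '.poll_groups[0].transports[].devices[].name' \
            | grep -q mlx5_0; do
        sleep 1
    done

    # bring the function back and re-plumb the interface
    echo 1 > /sys/bus/pci/rescan      # assumed target of the script's echo 1
    ip link set mlx_0_0 up
    ip addr add 192.168.100.8/24 dev mlx_0_0

Re-listening is then the target's job: once the GID table change events fire, nvmf_rdma_retry_listen_port brings 192.168.100.8:4420 back, which is what the device-count poll in the next stretch of the trace waits for.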
00:16:30.418 11:40:00 -- target/device_removal.sh@180 -- # get_ip_address mlx_0_0 00:16:30.418 11:40:00 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:30.418 11:40:00 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:30.418 11:40:00 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:30.418 11:40:00 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:30.418 11:40:00 -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:16:30.418 11:40:00 -- target/device_removal.sh@181 -- # ip addr add 192.168.100.8/24 dev mlx_0_0 00:16:30.418 11:40:00 -- target/device_removal.sh@186 -- # seq 1 10 00:16:30.419 11:40:00 -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:16:30.419 11:40:00 -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:16:30.419 11:40:00 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:16:30.419 11:40:00 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:16:30.419 11:40:00 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:16:30.419 11:40:00 -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.419 11:40:00 -- common/autotest_common.sh@10 -- # set +x 00:16:30.419 [2024-05-15 11:40:00.650878] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:30.419 [2024-05-15 11:40:00.650917] rdma.c:3311:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.8:4420 come back 00:16:30.419 [2024-05-15 11:40:00.650935] rdma.c:3841:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:16:30.419 [2024-05-15 11:40:00.650951] rdma.c:3841:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:16:30.419 11:40:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.419 11:40:00 -- target/device_removal.sh@187 -- # ib_count=2 00:16:30.419 11:40:00 -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:16:30.419 11:40:00 -- target/device_removal.sh@189 -- # break 00:16:30.419 11:40:00 -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:16:30.419 11:40:00 -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_1 00:16:30.419 11:40:00 -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_1 00:16:30.419 11:40:00 -- target/device_removal.sh@71 -- # dev_name=mlx_0_1 00:16:30.419 11:40:00 -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_1 00:16:30.419 11:40:00 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:16:30.419 11:40:00 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.1/net/mlx_0_1/device 00:16:30.419 11:40:00 -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.1/infiniband 00:16:30.419 11:40:00 -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_1 00:16:30.419 11:40:00 -- target/device_removal.sh@137 -- # get_ip_address mlx_0_1 00:16:30.419 11:40:00 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:30.419 11:40:00 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:30.419 11:40:00 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:30.419 11:40:00 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:30.419 11:40:00 -- target/device_removal.sh@137 -- # origin_ip=192.168.100.9 00:16:30.419 11:40:00 -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_1 00:16:30.419 11:40:00 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:16:30.419 11:40:00 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.1/net/mlx_0_1/device 00:16:30.419 11:40:00 -- 
target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.1 00:16:30.419 11:40:00 -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:16:30.419 11:40:00 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:16:30.419 11:40:00 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:16:30.419 11:40:00 -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.419 11:40:00 -- common/autotest_common.sh@10 -- # set +x 00:16:30.419 11:40:00 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:16:30.419 11:40:00 -- target/device_removal.sh@77 -- # grep mlx5_1 00:16:30.419 11:40:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.419 mlx5_1 00:16:30.419 11:40:00 -- target/device_removal.sh@78 -- # return 0 00:16:30.419 11:40:00 -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_1 00:16:30.419 11:40:00 -- target/device_removal.sh@66 -- # dev_name=mlx_0_1 00:16:30.419 11:40:00 -- target/device_removal.sh@67 -- # echo 1 00:16:30.419 11:40:00 -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_1 00:16:30.419 11:40:00 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:16:30.419 11:40:00 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.1/net/mlx_0_1/device 00:16:30.419 [2024-05-15 11:40:00.815147] rdma.c:3563:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.9:4420 on device mlx5_1 is being removed. 00:16:30.419 [2024-05-15 11:40:00.815225] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:16:30.419 [2024-05-15 11:40:00.818177] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:16:30.419 [2024-05-15 11:40:00.818199] rdma.c: 845:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 127 00:16:37.019 11:40:06 -- target/device_removal.sh@147 -- # seq 1 10 00:16:37.019 11:40:06 -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:16:37.019 11:40:06 -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:16:37.019 11:40:06 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:16:37.019 11:40:06 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:16:37.019 11:40:06 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:16:37.019 11:40:06 -- target/device_removal.sh@77 -- # grep mlx5_1 00:16:37.019 11:40:06 -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.019 11:40:06 -- common/autotest_common.sh@10 -- # set +x 00:16:37.019 11:40:06 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.019 11:40:06 -- target/device_removal.sh@78 -- # return 1 00:16:37.019 11:40:06 -- target/device_removal.sh@149 -- # break 00:16:37.019 11:40:06 -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:16:37.019 11:40:06 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:16:37.019 11:40:06 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:16:37.019 11:40:06 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:16:37.019 11:40:06 -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.019 11:40:06 -- common/autotest_common.sh@10 -- # set +x 00:16:37.019 11:40:06 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.020 11:40:06 -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:16:37.020 11:40:06 -- 
target/device_removal.sh@160 -- # rescan_pci 00:16:37.020 11:40:06 -- target/device_removal.sh@57 -- # echo 1 00:16:37.020 [2024-05-15 11:40:07.638327] rdma.c:3252:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0x123f0a0, err 11. Skip rescan. 00:16:37.020 [2024-05-15 11:40:07.643679] rdma.c:3252:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0x123f0a0, err 11. Skip rescan. 00:16:37.020 11:40:07 -- target/device_removal.sh@162 -- # seq 1 10 00:16:37.020 11:40:07 -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:16:37.020 11:40:07 -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.1/net 00:16:37.290 11:40:07 -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_1 00:16:37.290 11:40:07 -- target/device_removal.sh@164 -- # [[ -z mlx_0_1 ]] 00:16:37.290 11:40:07 -- target/device_removal.sh@166 -- # [[ mlx_0_1 != \m\l\x\_\0\_\1 ]] 00:16:37.290 11:40:07 -- target/device_removal.sh@171 -- # break 00:16:37.290 11:40:07 -- target/device_removal.sh@175 -- # [[ -z mlx_0_1 ]] 00:16:37.290 11:40:07 -- target/device_removal.sh@179 -- # ip link set mlx_0_1 up 00:16:37.290 [2024-05-15 11:40:08.040110] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1174c40/0x11b74b0) succeed. 00:16:37.290 [2024-05-15 11:40:08.040204] rdma.c:3305:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.9:4420 is still failed(-1) to listen. 00:16:40.579 11:40:11 -- target/device_removal.sh@180 -- # get_ip_address mlx_0_1 00:16:40.579 11:40:11 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:40.579 11:40:11 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:40.579 11:40:11 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:40.579 11:40:11 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:40.579 11:40:11 -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:16:40.579 11:40:11 -- target/device_removal.sh@181 -- # ip addr add 192.168.100.9/24 dev mlx_0_1 00:16:40.579 11:40:11 -- target/device_removal.sh@186 -- # seq 1 10 00:16:40.579 11:40:11 -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:16:40.579 11:40:11 -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:16:40.579 11:40:11 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:16:40.579 11:40:11 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:16:40.579 11:40:11 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:16:40.579 11:40:11 -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.579 11:40:11 -- common/autotest_common.sh@10 -- # set +x 00:16:40.579 [2024-05-15 11:40:11.083642] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:16:40.579 [2024-05-15 11:40:11.083681] rdma.c:3311:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.9:4420 come back 00:16:40.579 [2024-05-15 11:40:11.083701] rdma.c:3841:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:16:40.579 [2024-05-15 11:40:11.083719] rdma.c:3841:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:16:40.579 11:40:11 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.579 11:40:11 -- target/device_removal.sh@187 -- # ib_count=2 00:16:40.579 11:40:11 -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:16:40.579 11:40:11 -- target/device_removal.sh@189 -- # break 00:16:40.579 11:40:11 -- target/device_removal.sh@200 -- # stop_bdevperf 00:16:40.579 11:40:11 -- 
target/device_removal.sh@116 -- # wait 3035976 00:17:48.290 0 00:17:48.290 11:41:15 -- target/device_removal.sh@118 -- # killprocess 3035834 00:17:48.290 11:41:15 -- common/autotest_common.sh@946 -- # '[' -z 3035834 ']' 00:17:48.290 11:41:15 -- common/autotest_common.sh@950 -- # kill -0 3035834 00:17:48.290 11:41:15 -- common/autotest_common.sh@951 -- # uname 00:17:48.290 11:41:15 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:48.290 11:41:15 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3035834 00:17:48.290 11:41:15 -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:17:48.290 11:41:15 -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:17:48.290 11:41:15 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3035834' 00:17:48.290 killing process with pid 3035834 00:17:48.290 11:41:15 -- common/autotest_common.sh@965 -- # kill 3035834 00:17:48.290 11:41:15 -- common/autotest_common.sh@970 -- # wait 3035834 00:17:48.290 11:41:16 -- target/device_removal.sh@119 -- # bdevperf_pid= 00:17:48.290 11:41:16 -- target/device_removal.sh@121 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt 00:17:48.290 [2024-05-15 11:39:44.451307] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:17:48.290 [2024-05-15 11:39:44.451363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3035834 ] 00:17:48.290 EAL: No free 2048 kB hugepages reported on node 1 00:17:48.290 [2024-05-15 11:39:44.518472] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.290 [2024-05-15 11:39:44.598932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:48.290 Running I/O for 90 seconds... 
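The teardown traced just above follows autotest_common.sh's killprocess idiom: bail if the PID is empty, probe it with kill -0, inspect its comm (reactor_2 here, i.e. the bdevperf reactor itself) so a recycled PID is never killed blindly, then kill and reap. A standalone sketch of the same pattern:

    # sketch of the killprocess idiom visible in the trace
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1           # still running?
        ps --no-headers -o comm= "$pid"      # reactor_2 here, i.e. bdevperf
        # (the real helper special-cases comm == sudo; elided in this sketch)
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                          # reap; valid since this shell spawned it
    }

The dump that continues below is the cat of try.txt: bdevperf's own log replayed, covering the two removal windows.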
00:17:48.290 [2024-05-15 11:39:50.651472] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:17:48.290 [2024-05-15 11:39:50.651507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:17:48.290 [2024-05-15 11:39:50.651521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32689 cdw0:16 sqhd:53b9 p:0 m:0 dnr:0
[... identical ASYNC EVENT REQUEST / ABORTED - SQ DELETION pairs for cid:2, cid:3 and cid:4 elided ...]
00:17:48.290 [2024-05-15 11:39:50.653415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:17:48.290 [2024-05-15 11:39:50.653461] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:17:48.290 [2024-05-15 11:39:50.653558] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:17:48.290 [2024-05-15 11:39:50.661675] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
[... ~96 more identical "Unable to perform failover, already in progress." notices, spaced ~10 ms apart, elided ...]
00:17:48.291 [2024-05-15 11:39:51.652782] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:48.291 [2024-05-15 11:39:51.656038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:201608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077e2000 len:0x1000 key:0x1810ef
00:17:48.291 [2024-05-15 11:39:51.656057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0
[... 85 more command/completion pairs elided, all completed ABORTED - SQ DELETION: READ lba:201616 through lba:201720, then WRITE lba:201728 through lba:202288 ...]
00:17:48.294 [2024-05-15 11:39:51.657865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:202296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:48.294 [2024-05-15 11:39:51.657876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0
m:0 dnr:0 00:17:48.294 [2024-05-15 11:39:51.657887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:202304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.294 [2024-05-15 11:39:51.657896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.294 [2024-05-15 11:39:51.657906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:202312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.294 [2024-05-15 11:39:51.657915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.294 [2024-05-15 11:39:51.657926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:202320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.294 [2024-05-15 11:39:51.657941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.294 [2024-05-15 11:39:51.657952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:202328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.294 [2024-05-15 11:39:51.657961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.294 [2024-05-15 11:39:51.657972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:202336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.294 [2024-05-15 11:39:51.657981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.294 [2024-05-15 11:39:51.657992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:202344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.294 [2024-05-15 11:39:51.658002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.294 [2024-05-15 11:39:51.658013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:202352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.294 [2024-05-15 11:39:51.658022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.294 [2024-05-15 11:39:51.658032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:202360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.294 [2024-05-15 11:39:51.658041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.294 [2024-05-15 11:39:51.658052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:202368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.294 [2024-05-15 11:39:51.658073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.294 [2024-05-15 11:39:51.658085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:202376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.294 [2024-05-15 11:39:51.658094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.294 [2024-05-15 11:39:51.658105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:202384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.294 [2024-05-15 11:39:51.658116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.294 [2024-05-15 11:39:51.658126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:202392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.294 [2024-05-15 11:39:51.658136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.294 [2024-05-15 11:39:51.658147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:202400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.294 [2024-05-15 11:39:51.658156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.294 [2024-05-15 11:39:51.658167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:202408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.294 [2024-05-15 11:39:51.658176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.294 [2024-05-15 11:39:51.658188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:202416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.294 [2024-05-15 11:39:51.658198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.294 [2024-05-15 11:39:51.658209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:202424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.295 [2024-05-15 11:39:51.658218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.295 [2024-05-15 11:39:51.658229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:202432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.295 [2024-05-15 11:39:51.658239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.295 [2024-05-15 11:39:51.658250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:202440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.295 [2024-05-15 11:39:51.658260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.295 [2024-05-15 11:39:51.658271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:202448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.295 [2024-05-15 11:39:51.658282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.295 [2024-05-15 11:39:51.658293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:202456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.295 [2024-05-15 
11:39:51.658302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.295 [2024-05-15 11:39:51.658313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:202464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.295 [2024-05-15 11:39:51.658323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.295 [2024-05-15 11:39:51.658334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:202472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.295 [2024-05-15 11:39:51.658343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.295 [2024-05-15 11:39:51.658354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:202480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.295 [2024-05-15 11:39:51.658364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.295 [2024-05-15 11:39:51.658375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:202488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.295 [2024-05-15 11:39:51.658385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.295 [2024-05-15 11:39:51.658396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:202496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.295 [2024-05-15 11:39:51.658405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.295 [2024-05-15 11:39:51.658416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:202504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.295 [2024-05-15 11:39:51.658425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.295 [2024-05-15 11:39:51.658436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:202512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.295 [2024-05-15 11:39:51.658446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.295 [2024-05-15 11:39:51.658457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:202520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.295 [2024-05-15 11:39:51.658466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.295 [2024-05-15 11:39:51.658477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:202528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.295 [2024-05-15 11:39:51.658486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.295 [2024-05-15 11:39:51.658498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:202536 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.295 [2024-05-15 11:39:51.658507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.295 [2024-05-15 11:39:51.658518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:202544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.295 [2024-05-15 11:39:51.658527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.295 [2024-05-15 11:39:51.658537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:202552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.295 [2024-05-15 11:39:51.658546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.295 [2024-05-15 11:39:51.658559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:202560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.295 [2024-05-15 11:39:51.658569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.295 [2024-05-15 11:39:51.658579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:202568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.295 [2024-05-15 11:39:51.658588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.295 [2024-05-15 11:39:51.658599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:202576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.295 [2024-05-15 11:39:51.658611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.295 [2024-05-15 11:39:51.658624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:202584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.295 [2024-05-15 11:39:51.658633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.295 [2024-05-15 11:39:51.658644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:202592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.295 [2024-05-15 11:39:51.658653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.296 [2024-05-15 11:39:51.658663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:202600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.296 [2024-05-15 11:39:51.658673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.296 [2024-05-15 11:39:51.658684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:202608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.296 [2024-05-15 11:39:51.658694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.296 [2024-05-15 11:39:51.658704] nvme_qpair.c: 
00:17:48.296 [2024-05-15 11:39:51.671738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:17:48.296 [2024-05-15 11:39:51.671755] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:17:48.296 [2024-05-15 11:39:51.671764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:202624 len:8 PRP1 0x0 PRP2 0x0
00:17:48.296 [2024-05-15 11:39:51.671775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:48.296 [2024-05-15 11:39:51.673280] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:17:48.296 [2024-05-15 11:39:51.673562] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19)
00:17:48.296 [2024-05-15 11:39:51.673579] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:17:48.296 [2024-05-15 11:39:51.673587] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
00:17:48.296 [2024-05-15 11:39:51.673607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:17:48.296 [2024-05-15 11:39:51.673618] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:17:48.296 [2024-05-15 11:39:51.673630] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
00:17:48.296 [2024-05-15 11:39:51.673640] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
00:17:48.296 [2024-05-15 11:39:51.673650] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
00:17:48.296 [2024-05-15 11:39:51.673671] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:48.296 [2024-05-15 11:39:51.673682] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:17:48.296 [2024-05-15 11:39:52.676190] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19)
00:17:48.296 [2024-05-15 11:39:52.676239] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:17:48.296 [2024-05-15 11:39:52.676248] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
00:17:48.296 [2024-05-15 11:39:52.676269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:17:48.296 [2024-05-15 11:39:52.676280] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:17:48.296 [2024-05-15 11:39:52.676292] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
00:17:48.296 [2024-05-15 11:39:52.676302] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
00:17:48.296 [2024-05-15 11:39:52.676312] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
00:17:48.296 [2024-05-15 11:39:52.676337] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:48.296 [2024-05-15 11:39:52.676348] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:17:48.296 [2024-05-15 11:39:53.678867] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19)
00:17:48.296 [2024-05-15 11:39:53.678906] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:17:48.296 [2024-05-15 11:39:53.678916] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
00:17:48.296 [2024-05-15 11:39:53.678941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:17:48.296 [2024-05-15 11:39:53.678955] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:17:48.296 [2024-05-15 11:39:53.678968] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
00:17:48.296 [2024-05-15 11:39:53.678979] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
00:17:48.296 [2024-05-15 11:39:53.678991] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
00:17:48.296 [2024-05-15 11:39:53.679017] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:48.296 [2024-05-15 11:39:53.679030] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:17:48.296 [2024-05-15 11:39:55.684402] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:17:48.296 [2024-05-15 11:39:55.684441] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
00:17:48.296 [2024-05-15 11:39:55.684468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:17:48.296 [2024-05-15 11:39:55.684479] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:17:48.296 [2024-05-15 11:39:55.684493] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
00:17:48.296 [2024-05-15 11:39:55.684502] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
00:17:48.296 [2024-05-15 11:39:55.684513] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
00:17:48.296 [2024-05-15 11:39:55.684545] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:48.296 [2024-05-15 11:39:55.684556] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:17:48.296 [2024-05-15 11:39:57.689477] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:17:48.296 [2024-05-15 11:39:57.689514] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
00:17:48.296 [2024-05-15 11:39:57.689549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:17:48.296 [2024-05-15 11:39:57.689560] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:17:48.296 [2024-05-15 11:39:57.689574] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
00:17:48.296 [2024-05-15 11:39:57.689584] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
00:17:48.296 [2024-05-15 11:39:57.689595] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
00:17:48.296 [2024-05-15 11:39:57.689623] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:48.296 [2024-05-15 11:39:57.689635] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:17:48.296 [2024-05-15 11:39:59.694562] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:17:48.296 [2024-05-15 11:39:59.694594] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
00:17:48.296 [2024-05-15 11:39:59.694619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:17:48.296 [2024-05-15 11:39:59.694630] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:17:48.296 [2024-05-15 11:39:59.694643] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
00:17:48.296 [2024-05-15 11:39:59.694653] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
00:17:48.296 [2024-05-15 11:39:59.694664] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
00:17:48.296 [2024-05-15 11:39:59.694690] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:48.296 [2024-05-15 11:39:59.694700] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:17:48.296 [2024-05-15 11:40:00.758411] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:17:48.296 [2024-05-15 11:40:00.813808] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:17:48.296 [2024-05-15 11:40:00.813836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:17:48.296 [2024-05-15 11:40:00.813849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32689 cdw0:16 sqhd:53b9 p:0 m:0 dnr:0
00:17:48.296 [2024-05-15 11:40:00.813860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:17:48.296 [2024-05-15 11:40:00.813870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32689 cdw0:16 sqhd:53b9 p:0 m:0 dnr:0
00:17:48.296 [2024-05-15 11:40:00.813881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:17:48.296 [2024-05-15 11:40:00.813890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32689 cdw0:16 sqhd:53b9 p:0 m:0 dnr:0
00:17:48.297 [2024-05-15 11:40:00.813900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:17:48.297 [2024-05-15 11:40:00.813909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32689 cdw0:16 sqhd:53b9 p:0 m:0 dnr:0
00:17:48.297 [2024-05-15 11:40:00.815647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:17:48.297 [2024-05-15 11:40:00.815664] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state.
00:17:48.297 [2024-05-15 11:40:00.815691] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:17:48.297 [2024-05-15 11:40:00.823822] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:48.297 [2024-05-15 11:40:00.833839] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:48.297 [2024-05-15 11:40:00.843863] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:48.297 [2024-05-15 11:40:00.853890] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:48.297 [2024-05-15 11:40:00.863915] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:48.297 [2024-05-15 11:40:00.873943] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:48.297 [2024-05-15 11:40:00.883968] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:48.297 [2024-05-15 11:40:00.893993] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:48.297 [2024-05-15 11:40:00.904018] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:48.297 [2024-05-15 11:40:00.914044] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
[... 89 further identical bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe *NOTICE* "Unable to perform failover, already in progress." entries elided (2024-05-15 11:40:00.924069 through 11:40:01.809498) ...]
00:17:48.298 [2024-05-15 11:40:01.818150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:48.298 [2024-05-15 11:40:01.818172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0
[... 48 further repeated command/completion pairs elided (2024-05-15 11:40:01.818190 through 11:40:01.819180): nvme_qpair.c: 243:nvme_io_qpair_print_command *NOTICE* WRITE sqid:1 nsid:1 (cid varies) lba:24144 through lba:24520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed nvme_qpair.c: 474:spdk_nvme_print_completion *NOTICE* ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 ...]
00:17:48.299 [2024-05-15 11:40:01.819191] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.299 [2024-05-15 11:40:01.819200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.299 [2024-05-15 11:40:01.819211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.299 [2024-05-15 11:40:01.819221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.299 [2024-05-15 11:40:01.819233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.299 [2024-05-15 11:40:01.819242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.299 [2024-05-15 11:40:01.819253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.299 [2024-05-15 11:40:01.819262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.299 [2024-05-15 11:40:01.819273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.299 [2024-05-15 11:40:01.819283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.299 [2024-05-15 11:40:01.819294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:48.299 [2024-05-15 11:40:01.819303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.300 [2024-05-15 11:40:01.819315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079fe000 len:0x1000 key:0x1bf0ef 00:17:48.300 [2024-05-15 11:40:01.819324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.300 [2024-05-15 11:40:01.819336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079fc000 len:0x1000 key:0x1bf0ef 00:17:48.300 [2024-05-15 11:40:01.819347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.300 [2024-05-15 11:40:01.819358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079fa000 len:0x1000 key:0x1bf0ef 00:17:48.300 [2024-05-15 11:40:01.819367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.300 [2024-05-15 11:40:01.819379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079f8000 len:0x1000 key:0x1bf0ef 00:17:48.300 [2024-05-15 11:40:01.819388] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.300 [2024-05-15 11:40:01.819399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:23584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079f6000 len:0x1000 key:0x1bf0ef 00:17:48.300 [2024-05-15 11:40:01.819409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.300 [2024-05-15 11:40:01.819420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079f4000 len:0x1000 key:0x1bf0ef 00:17:48.300 [2024-05-15 11:40:01.819429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.300 [2024-05-15 11:40:01.819440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079f2000 len:0x1000 key:0x1bf0ef 00:17:48.300 [2024-05-15 11:40:01.819450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.300 [2024-05-15 11:40:01.819462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079f0000 len:0x1000 key:0x1bf0ef 00:17:48.300 [2024-05-15 11:40:01.819471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.300 [2024-05-15 11:40:01.819482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079ee000 len:0x1000 key:0x1bf0ef 00:17:48.300 [2024-05-15 11:40:01.819491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.300 [2024-05-15 11:40:01.819503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079ec000 len:0x1000 key:0x1bf0ef 00:17:48.300 [2024-05-15 11:40:01.819512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.300 [2024-05-15 11:40:01.819524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079ea000 len:0x1000 key:0x1bf0ef 00:17:48.300 [2024-05-15 11:40:01.819533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.300 [2024-05-15 11:40:01.819544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079e8000 len:0x1000 key:0x1bf0ef 00:17:48.300 [2024-05-15 11:40:01.819553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.300 [2024-05-15 11:40:01.819564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079e6000 len:0x1000 key:0x1bf0ef 00:17:48.300 [2024-05-15 11:40:01.819575] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.300 [2024-05-15 11:40:01.819587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079e4000 len:0x1000 key:0x1bf0ef 00:17:48.300 [2024-05-15 11:40:01.819597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.300 [2024-05-15 11:40:01.819607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079e2000 len:0x1000 key:0x1bf0ef 00:17:48.300 [2024-05-15 11:40:01.819616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.300 [2024-05-15 11:40:01.819627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079e0000 len:0x1000 key:0x1bf0ef 00:17:48.300 [2024-05-15 11:40:01.819637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.300 [2024-05-15 11:40:01.819648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079de000 len:0x1000 key:0x1bf0ef 00:17:48.300 [2024-05-15 11:40:01.819658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.300 [2024-05-15 11:40:01.819669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079dc000 len:0x1000 key:0x1bf0ef 00:17:48.300 [2024-05-15 11:40:01.819678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.300 [2024-05-15 11:40:01.819689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079da000 len:0x1000 key:0x1bf0ef 00:17:48.300 [2024-05-15 11:40:01.819699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.300 [2024-05-15 11:40:01.819710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079d8000 len:0x1000 key:0x1bf0ef 00:17:48.300 [2024-05-15 11:40:01.819720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.300 [2024-05-15 11:40:01.819731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079d6000 len:0x1000 key:0x1bf0ef 00:17:48.300 [2024-05-15 11:40:01.819740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.300 [2024-05-15 11:40:01.819751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079d4000 len:0x1000 key:0x1bf0ef 00:17:48.300 [2024-05-15 11:40:01.819761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.300 [2024-05-15 11:40:01.819772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079d2000 len:0x1000 key:0x1bf0ef 00:17:48.300 [2024-05-15 11:40:01.819782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.300 [2024-05-15 11:40:01.819793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079d0000 len:0x1000 key:0x1bf0ef 00:17:48.300 [2024-05-15 11:40:01.819802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.300 [2024-05-15 11:40:01.819814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079ce000 len:0x1000 key:0x1bf0ef 00:17:48.300 [2024-05-15 11:40:01.819825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.300 [2024-05-15 11:40:01.819837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079cc000 len:0x1000 key:0x1bf0ef 00:17:48.300 [2024-05-15 11:40:01.819846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.301 [2024-05-15 11:40:01.819857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079ca000 len:0x1000 key:0x1bf0ef 00:17:48.301 [2024-05-15 11:40:01.819867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.301 [2024-05-15 11:40:01.819878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079c8000 len:0x1000 key:0x1bf0ef 00:17:48.301 [2024-05-15 11:40:01.819888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.301 [2024-05-15 11:40:01.819899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079c6000 len:0x1000 key:0x1bf0ef 00:17:48.301 [2024-05-15 11:40:01.819908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.301 [2024-05-15 11:40:01.819919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079c4000 len:0x1000 key:0x1bf0ef 00:17:48.301 [2024-05-15 11:40:01.819928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.301 [2024-05-15 11:40:01.819940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079c2000 len:0x1000 key:0x1bf0ef 00:17:48.301 [2024-05-15 11:40:01.819949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 
cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.301 [2024-05-15 11:40:01.819961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079c0000 len:0x1000 key:0x1bf0ef 00:17:48.301 [2024-05-15 11:40:01.819970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.301 [2024-05-15 11:40:01.819981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079be000 len:0x1000 key:0x1bf0ef 00:17:48.301 [2024-05-15 11:40:01.819991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.301 [2024-05-15 11:40:01.820003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079bc000 len:0x1000 key:0x1bf0ef 00:17:48.301 [2024-05-15 11:40:01.820012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.301 [2024-05-15 11:40:01.820023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079ba000 len:0x1000 key:0x1bf0ef 00:17:48.301 [2024-05-15 11:40:01.820032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.301 [2024-05-15 11:40:01.820043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079b8000 len:0x1000 key:0x1bf0ef 00:17:48.301 [2024-05-15 11:40:01.820054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.301 [2024-05-15 11:40:01.820071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079b6000 len:0x1000 key:0x1bf0ef 00:17:48.301 [2024-05-15 11:40:01.820081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.301 [2024-05-15 11:40:01.820092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079b4000 len:0x1000 key:0x1bf0ef 00:17:48.301 [2024-05-15 11:40:01.820101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.301 [2024-05-15 11:40:01.820112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079b2000 len:0x1000 key:0x1bf0ef 00:17:48.301 [2024-05-15 11:40:01.820124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.301 [2024-05-15 11:40:01.820136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079b0000 len:0x1000 key:0x1bf0ef 00:17:48.301 [2024-05-15 11:40:01.820146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 
00:17:48.301 [2024-05-15 11:40:01.820157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079ae000 len:0x1000 key:0x1bf0ef 00:17:48.301 [2024-05-15 11:40:01.820167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.301 [2024-05-15 11:40:01.820179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079ac000 len:0x1000 key:0x1bf0ef 00:17:48.301 [2024-05-15 11:40:01.820188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.301 [2024-05-15 11:40:01.820200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079aa000 len:0x1000 key:0x1bf0ef 00:17:48.301 [2024-05-15 11:40:01.820209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.301 [2024-05-15 11:40:01.820220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079a8000 len:0x1000 key:0x1bf0ef 00:17:48.301 [2024-05-15 11:40:01.820229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.301 [2024-05-15 11:40:01.820241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079a6000 len:0x1000 key:0x1bf0ef 00:17:48.301 [2024-05-15 11:40:01.820251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.301 [2024-05-15 11:40:01.820262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079a4000 len:0x1000 key:0x1bf0ef 00:17:48.301 [2024-05-15 11:40:01.820271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.301 [2024-05-15 11:40:01.820282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079a2000 len:0x1000 key:0x1bf0ef 00:17:48.301 [2024-05-15 11:40:01.820293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.301 [2024-05-15 11:40:01.820305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079a0000 len:0x1000 key:0x1bf0ef 00:17:48.301 [2024-05-15 11:40:01.820314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.301 [2024-05-15 11:40:01.820325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000799e000 len:0x1000 key:0x1bf0ef 00:17:48.301 [2024-05-15 11:40:01.820335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.301 [2024-05-15 
11:40:01.820346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000799c000 len:0x1000 key:0x1bf0ef 00:17:48.301 [2024-05-15 11:40:01.820355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.301 [2024-05-15 11:40:01.820367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000799a000 len:0x1000 key:0x1bf0ef 00:17:48.301 [2024-05-15 11:40:01.820376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.301 [2024-05-15 11:40:01.820387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007998000 len:0x1000 key:0x1bf0ef 00:17:48.301 [2024-05-15 11:40:01.820396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.301 [2024-05-15 11:40:01.820409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007996000 len:0x1000 key:0x1bf0ef 00:17:48.301 [2024-05-15 11:40:01.820418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.301 [2024-05-15 11:40:01.820429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007994000 len:0x1000 key:0x1bf0ef 00:17:48.301 [2024-05-15 11:40:01.820439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.301 [2024-05-15 11:40:01.820450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007992000 len:0x1000 key:0x1bf0ef 00:17:48.301 [2024-05-15 11:40:01.820459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.301 [2024-05-15 11:40:01.820471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007990000 len:0x1000 key:0x1bf0ef 00:17:48.301 [2024-05-15 11:40:01.820481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.301 [2024-05-15 11:40:01.820492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000798e000 len:0x1000 key:0x1bf0ef 00:17:48.302 [2024-05-15 11:40:01.820501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.302 [2024-05-15 11:40:01.820512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000798c000 len:0x1000 key:0x1bf0ef 00:17:48.302 [2024-05-15 11:40:01.820522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.302 [2024-05-15 11:40:01.820534] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000798a000 len:0x1000 key:0x1bf0ef 00:17:48.302 [2024-05-15 11:40:01.820544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.302 [2024-05-15 11:40:01.820556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007988000 len:0x1000 key:0x1bf0ef 00:17:48.302 [2024-05-15 11:40:01.820566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.302 [2024-05-15 11:40:01.820577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007986000 len:0x1000 key:0x1bf0ef 00:17:48.302 [2024-05-15 11:40:01.820586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.302 [2024-05-15 11:40:01.820597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007984000 len:0x1000 key:0x1bf0ef 00:17:48.302 [2024-05-15 11:40:01.820607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.302 [2024-05-15 11:40:01.820619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007982000 len:0x1000 key:0x1bf0ef 00:17:48.302 [2024-05-15 11:40:01.820628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.302 [2024-05-15 11:40:01.820640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007980000 len:0x1000 key:0x1bf0ef 00:17:48.302 [2024-05-15 11:40:01.820649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.302 [2024-05-15 11:40:01.820661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000797e000 len:0x1000 key:0x1bf0ef 00:17:48.302 [2024-05-15 11:40:01.820674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.302 [2024-05-15 11:40:01.820685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000797c000 len:0x1000 key:0x1bf0ef 00:17:48.302 [2024-05-15 11:40:01.820694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.302 [2024-05-15 11:40:01.820706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000797a000 len:0x1000 key:0x1bf0ef 00:17:48.302 [2024-05-15 11:40:01.820715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.302 [2024-05-15 11:40:01.820727] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007978000 len:0x1000 key:0x1bf0ef 00:17:48.302 [2024-05-15 11:40:01.820738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.302 [2024-05-15 11:40:01.820749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007976000 len:0x1000 key:0x1bf0ef 00:17:48.302 [2024-05-15 11:40:01.820758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.302 [2024-05-15 11:40:01.820774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007974000 len:0x1000 key:0x1bf0ef 00:17:48.302 [2024-05-15 11:40:01.820783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.302 [2024-05-15 11:40:01.820795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:24112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007972000 len:0x1000 key:0x1bf0ef 00:17:48.302 [2024-05-15 11:40:01.820804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.302 [2024-05-15 11:40:01.820815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007970000 len:0x1000 key:0x1bf0ef 00:17:48.302 [2024-05-15 11:40:01.820824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32689 cdw0:48c5c8d0 sqhd:2530 p:0 m:0 dnr:0 00:17:48.302 [2024-05-15 11:40:01.833828] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:48.302 [2024-05-15 11:40:01.833844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:48.302 [2024-05-15 11:40:01.833853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24128 len:8 PRP1 0x0 PRP2 0x0 00:17:48.302 [2024-05-15 11:40:01.833863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:48.302 [2024-05-15 11:40:01.833912] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:17:48.302 [2024-05-15 11:40:01.835696] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:17:48.302 [2024-05-15 11:40:01.835716] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:17:48.302 [2024-05-15 11:40:01.835725] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:17:48.302 [2024-05-15 11:40:01.835743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:48.302 [2024-05-15 11:40:01.835754] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 
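The abort storm condensed above is expected for this device-removal test: hot-removing the mlx5 port tears down the submission queue, so bdev_nvme completes every queued I/O with ABORTED - SQ DELETION before attempting a controller reset. Not part of the original log: when triaging such a run, a short shell pass can confirm the aborts all carry that one status. A minimal sketch, assuming the console output was saved to ./console.log (a hypothetical path):

```bash
#!/usr/bin/env bash
# Tally aborted commands in a captured SPDK device-removal console log.
log=./console.log   # placeholder; point it at the saved build log

echo "total aborted completions: $(grep -c 'ABORTED - SQ DELETION' "$log")"

# Per-opcode counts of the commands printed just before each abort.
grep -o '\*NOTICE\*: \(READ\|WRITE\) sqid:[0-9]*' "$log" |
    awk '{n[$2]++} END {for (op in n) printf "%s commands printed: %d\n", op, n[op]}'
```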
00:17:48.302 [2024-05-15 11:40:01.835696] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19)
00:17:48.302 [2024-05-15 11:40:01.835716] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:17:48.302 [2024-05-15 11:40:01.835725] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280
00:17:48.302 [2024-05-15 11:40:01.835743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:17:48.302 [2024-05-15 11:40:01.835754] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state.
00:17:48.302 [2024-05-15 11:40:01.835766] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state
00:17:48.302 [2024-05-15 11:40:01.835776] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed
00:17:48.302 [2024-05-15 11:40:01.835786] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state
00:17:48.302 [2024-05-15 11:40:01.835807] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:48.302 [2024-05-15 11:40:01.835817] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller
00:17:48.302 .. 00:17:48.303 [elided: the same reconnect attempt repeats at 11:40:02.838, 11:40:03.841, 11:40:05.847, 11:40:06.849, 11:40:08.856 and 11:40:10.863, each failing with RDMA address resolution error / Failed to connect rqpair=0x2000192e4280 / CQ transport error -6 and ending in "Resetting controller failed." before another reset is scheduled; the 11:40:05.847 attempt additionally logs bdev_nvme.c:2873:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Reset is already in progress. Defer failover until reset completes.]
00:17:48.303 [2024-05-15 11:40:11.928715] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
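The reset loop above retries for roughly ten seconds until the removed port comes back, at which point the reset succeeds; that final notice is the pass signal for this phase. A minimal wait-loop sketch keyed on exactly that message; the log path and the one-second polling are assumptions, not the harness's actual mechanism:

```bash
#!/usr/bin/env bash
# Wait for bdev_nvme to report a successful controller reset in a saved log.
log=./bdevperf.log   # hypothetical path to the captured target/bdevperf output
timeout=90           # seconds; the verify workload runs for 90s, so allow as much

for ((i = 0; i < timeout; i++)); do
    if grep -q 'Resetting controller successful' "$log"; then
        echo "controller reconnected after ~${i}s"
        exit 0
    fi
    sleep 1
done
echo "controller did not reconnect within ${timeout}s" >&2
exit 1
```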
00:17:48.303
00:17:48.303 Latency(us)
00:17:48.303 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:48.303 Job: Nvme_mlx_0_0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:17:48.303 Verification LBA range: start 0x0 length 0x8000
00:17:48.303 Nvme_mlx_0_0n1 : 90.01 10972.29 42.86 0.00 0.00 11642.09 1966.08 11087551.44
00:17:48.303 Job: Nvme_mlx_0_1n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:17:48.303 Verification LBA range: start 0x0 length 0x8000
00:17:48.303 Nvme_mlx_0_1n1 : 90.01 9421.91 36.80 0.00 0.00 13566.28 2308.01 12079595.52
00:17:48.303 ===================================================================================================================
00:17:48.303 Total : 20394.19 79.66 0.00 0.00 12531.04 1966.08 12079595.52
00:17:48.303 Received shutdown signal, test time was about 90.000000 seconds
00:17:48.303
00:17:48.303 Latency(us)
00:17:48.303 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:48.303 ===================================================================================================================
00:17:48.303 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:48.303 11:41:16 -- target/device_removal.sh@123 -- # trap - SIGINT SIGTERM EXIT
00:17:48.303 11:41:16 -- target/device_removal.sh@124 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt
00:17:48.303 11:41:16 -- target/device_removal.sh@202 -- # killprocess 3035691
00:17:48.303 11:41:16 -- common/autotest_common.sh@946 -- # '[' -z 3035691 ']'
00:17:48.303 11:41:16 -- common/autotest_common.sh@950 -- # kill -0 3035691
00:17:48.303 11:41:16 -- common/autotest_common.sh@951 -- # uname
00:17:48.303 11:41:16 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:17:48.303 11:41:16 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3035691
00:17:48.303 11:41:16 -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:17:48.303 11:41:16 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:17:48.303 11:41:16 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3035691'
00:17:48.303 killing process with pid 3035691
00:17:48.303 11:41:16 -- common/autotest_common.sh@965 -- # kill 3035691
00:17:48.303 [2024-05-15 11:41:16.211275] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:17:48.303 11:41:16 -- common/autotest_common.sh@970 -- # wait 3035691
00:17:48.303 [2024-05-15 11:41:16.265067] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048
00:17:48.303 11:41:16 -- target/device_removal.sh@203 -- # nvmfpid=
00:17:48.303 11:41:16 -- target/device_removal.sh@205 -- # return 0
00:17:48.303
00:17:48.303 real 1m33.386s
00:17:48.303 user 4m25.514s
00:17:48.303 sys 0m4.632s
00:17:48.303 11:41:16 -- common/autotest_common.sh@1122 -- # xtrace_disable
00:17:48.303 11:41:16 -- common/autotest_common.sh@10 -- # set +x
00:17:48.303 ************************************
00:17:48.303 END TEST nvmf_device_removal_pci_remove
00:17:48.303 ************************************
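The killprocess trace in the teardown above reduces to a small helper. Here is a reconstruction from the xtrace for readers following the flow; the real implementation lives in test/common/autotest_common.sh and may differ in detail:

```bash
# Sketch of the traced killprocess helper (a reconstruction, not the exact source).
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1            # the '[' -z ... ']' guard in the trace
    kill -0 "$pid" || return 0           # nothing to do if the pid is already gone
    if [ "$(uname)" = Linux ]; then
        # Refuse to kill a bare sudo wrapper, per the '[' ... = sudo ']' check.
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                  # pid is a child of the harness shell
}
```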
00:17:48.303 11:41:16 -- target/device_removal.sh@317 -- # nvmftestfini
00:17:48.303 11:41:16 -- nvmf/common.sh@477 -- # nvmfcleanup
00:17:48.303 11:41:16 -- nvmf/common.sh@117 -- # sync
00:17:48.303 11:41:16 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:17:48.303 11:41:16 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:17:48.303 11:41:16 -- nvmf/common.sh@120 -- # set +e
00:17:48.303 11:41:16 -- nvmf/common.sh@121 -- # for i in {1..20}
00:17:48.303 11:41:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:17:48.303 rmmod nvme_rdma
00:17:48.303 rmmod nvme_fabrics
00:17:48.303 11:41:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:17:48.303 11:41:16 -- nvmf/common.sh@124 -- # set -e
00:17:48.303 11:41:16 -- nvmf/common.sh@125 -- # return 0
00:17:48.303 11:41:16 -- nvmf/common.sh@478 -- # '[' -n '' ']'
00:17:48.303 11:41:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:17:48.303 11:41:16 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]]
00:17:48.303 11:41:16 -- target/device_removal.sh@318 -- # clean_bond_device
00:17:48.303 11:41:16 -- target/device_removal.sh@240 -- # ip link
00:17:48.303 11:41:16 -- target/device_removal.sh@240 -- # grep bond_nvmf
00:17:48.304
00:17:48.304 real 3m12.989s
00:17:48.304 user 8m52.723s
00:17:48.304 sys 0m13.906s
00:17:48.304 11:41:16 -- common/autotest_common.sh@1122 -- # xtrace_disable
00:17:48.304 11:41:16 -- common/autotest_common.sh@10 -- # set +x
00:17:48.304 ************************************
00:17:48.304 END TEST nvmf_device_removal
00:17:48.304 ************************************
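The module unload traced in nvmfcleanup above retries modprobe -r up to 20 times because nvme-rdma can stay busy while queues drain. A sketch of that loop as it appears in the trace; the pacing between attempts is an assumption, since the excerpt does not show the loop body's sleep:

```bash
# Sketch of the traced module-unload retry loop from nvmf/common.sh.
set +e                                 # rmmod may fail while queues drain
for i in {1..20}; do
    modprobe -v -r nvme-rdma &&        # emits "rmmod nvme_rdma" / "rmmod nvme_fabrics"
        modprobe -v -r nvme-fabrics && # then removes the fabrics core
        break
    sleep 1                            # assumed pacing; not visible in the log
done
set -e
```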
00:17:48.304 11:41:16 -- nvmf/nvmf.sh@79 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma
00:17:48.304 11:41:16 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:17:48.304 11:41:16 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:17:48.304 11:41:16 -- common/autotest_common.sh@10 -- # set +x
00:17:48.304 ************************************
00:17:48.304 START TEST nvmf_srq_overwhelm
00:17:48.304 ************************************
00:17:48.304 11:41:16 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma
00:17:48.304 * Looking for test storage...
00:17:48.304 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:17:48.304 11:41:16 -- target/srq_overwhelm.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:17:48.304 11:41:16 -- nvmf/common.sh@7 -- # uname -s
00:17:48.304 11:41:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:17:48.304 11:41:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:17:48.304 11:41:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:17:48.304 11:41:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:17:48.304 11:41:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:17:48.304 11:41:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:17:48.304 11:41:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:17:48.304 11:41:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:17:48.304 11:41:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:17:48.304 11:41:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:17:48.304 11:41:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562
00:17:48.304 11:41:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562
00:17:48.304 11:41:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:17:48.304 11:41:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:17:48.304 11:41:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:17:48.304 11:41:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:17:48.304 11:41:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:17:48.304 11:41:16 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:17:48.304 11:41:16 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:17:48.304 11:41:16 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:17:48.304 11:41:16 -- paths/export.sh@2-@6 -- # [elided: each step prepends /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin (segments already present several times over from earlier sourcing) to PATH, exports it, and echoes the result, which ends in /usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin]
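For reference, the host-identity setup traced while sourcing nvmf/common.sh above reduces to a few lines. This is a sketch of the effect, not the verbatim script:

```bash
# How the harness derives its NVMe-oF host identity (sketch of the sourced code).
NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}     # the trailing UUID: everything after the last ':'
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
```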
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.304 11:41:16 -- paths/export.sh@5 -- # export PATH 00:17:48.304 11:41:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.304 11:41:16 -- nvmf/common.sh@47 -- # : 0 00:17:48.304 11:41:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:48.304 11:41:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:48.304 11:41:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:48.304 11:41:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:48.304 11:41:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:48.304 11:41:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:48.304 11:41:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:48.304 11:41:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:48.304 11:41:16 -- target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:48.304 11:41:16 -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:48.304 11:41:16 -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:17:48.304 11:41:16 -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:17:48.304 11:41:16 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:17:48.304 11:41:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:48.304 11:41:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:48.304 11:41:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:48.304 11:41:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:48.304 11:41:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:48.304 11:41:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:48.304 11:41:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:48.304 11:41:16 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:48.304 11:41:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:48.304 11:41:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:48.304 11:41:16 -- common/autotest_common.sh@10 -- # set +x 00:17:51.597 11:41:22 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:51.597 11:41:22 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:51.597 11:41:22 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:51.597 11:41:22 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:51.597 11:41:22 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:51.597 11:41:22 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:51.597 11:41:22 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:51.597 11:41:22 -- nvmf/common.sh@295 -- # net_devs=() 
00:17:51.597 11:41:22 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:51.597 11:41:22 -- nvmf/common.sh@296 -- # e810=() 00:17:51.597 11:41:22 -- nvmf/common.sh@296 -- # local -ga e810 00:17:51.597 11:41:22 -- nvmf/common.sh@297 -- # x722=() 00:17:51.597 11:41:22 -- nvmf/common.sh@297 -- # local -ga x722 00:17:51.597 11:41:22 -- nvmf/common.sh@298 -- # mlx=() 00:17:51.597 11:41:22 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:51.597 11:41:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:51.597 11:41:22 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:51.597 11:41:22 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:51.597 11:41:22 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:51.597 11:41:22 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:51.597 11:41:22 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:51.597 11:41:22 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:51.597 11:41:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:51.597 11:41:22 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:51.597 11:41:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:51.597 11:41:22 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:51.597 11:41:22 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:51.597 11:41:22 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:51.597 11:41:22 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:51.597 11:41:22 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:51.597 11:41:22 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:17:51.597 11:41:22 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:17:51.597 11:41:22 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:51.597 11:41:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:51.597 11:41:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:17:51.597 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:17:51.597 11:41:22 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:51.597 11:41:22 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:51.597 11:41:22 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:51.597 11:41:22 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:51.597 11:41:22 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:51.597 11:41:22 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:51.597 11:41:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:51.597 11:41:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:17:51.597 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:17:51.597 11:41:22 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:51.597 11:41:22 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:51.597 11:41:22 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:51.597 11:41:22 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:51.597 11:41:22 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:51.597 11:41:22 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:51.597 11:41:22 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:51.597 11:41:22 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:17:51.597 11:41:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:51.597 11:41:22 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.597 11:41:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:51.597 11:41:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.598 11:41:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:17:51.598 Found net devices under 0000:18:00.0: mlx_0_0 00:17:51.598 11:41:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.598 11:41:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:51.598 11:41:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.598 11:41:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:51.598 11:41:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.598 11:41:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:17:51.598 Found net devices under 0000:18:00.1: mlx_0_1 00:17:51.598 11:41:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.598 11:41:22 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:51.598 11:41:22 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:51.598 11:41:22 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:51.598 11:41:22 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:17:51.598 11:41:22 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:17:51.598 11:41:22 -- nvmf/common.sh@409 -- # rdma_device_init 00:17:51.598 11:41:22 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:17:51.598 11:41:22 -- nvmf/common.sh@58 -- # uname 00:17:51.598 11:41:22 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:51.598 11:41:22 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:51.598 11:41:22 -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:51.598 11:41:22 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:51.598 11:41:22 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:51.598 11:41:22 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:51.598 11:41:22 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:51.598 11:41:22 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:51.598 11:41:22 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:17:51.598 11:41:22 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:51.598 11:41:22 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:51.598 11:41:22 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:51.598 11:41:22 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:51.598 11:41:22 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:51.598 11:41:22 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:51.598 11:41:22 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:51.598 11:41:22 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:51.598 11:41:22 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:51.598 11:41:22 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:51.598 11:41:22 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:51.598 11:41:22 -- nvmf/common.sh@105 -- # continue 2 00:17:51.598 11:41:22 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:51.598 11:41:22 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:51.598 11:41:22 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:51.598 11:41:22 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:51.598 11:41:22 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:51.598 11:41:22 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:51.598 11:41:22 -- 
nvmf/common.sh@105 -- # continue 2 00:17:51.598 11:41:22 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:51.598 11:41:22 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:17:51.598 11:41:22 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:51.598 11:41:22 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:51.598 11:41:22 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:51.598 11:41:22 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:51.598 11:41:22 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:51.598 11:41:22 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:51.598 11:41:22 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:17:51.598 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:51.598 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:17:51.598 altname enp24s0f0np0 00:17:51.598 altname ens785f0np0 00:17:51.598 inet 192.168.100.8/24 scope global mlx_0_0 00:17:51.598 valid_lft forever preferred_lft forever 00:17:51.598 11:41:22 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:51.598 11:41:22 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:17:51.598 11:41:22 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:51.598 11:41:22 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:51.598 11:41:22 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:51.598 11:41:22 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:51.598 11:41:22 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:51.598 11:41:22 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:51.598 11:41:22 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:17:51.598 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:51.598 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:17:51.598 altname enp24s0f1np1 00:17:51.598 altname ens785f1np1 00:17:51.598 inet 192.168.100.9/24 scope global mlx_0_1 00:17:51.598 valid_lft forever preferred_lft forever 00:17:51.598 11:41:22 -- nvmf/common.sh@411 -- # return 0 00:17:51.598 11:41:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:51.598 11:41:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:51.598 11:41:22 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:17:51.598 11:41:22 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:17:51.598 11:41:22 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:51.598 11:41:22 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:51.598 11:41:22 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:51.598 11:41:22 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:51.598 11:41:22 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:51.598 11:41:22 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:51.598 11:41:22 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:51.598 11:41:22 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:51.598 11:41:22 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:51.598 11:41:22 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:51.598 11:41:22 -- nvmf/common.sh@105 -- # continue 2 00:17:51.598 11:41:22 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:51.598 11:41:22 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:51.598 11:41:22 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:51.598 11:41:22 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:51.598 11:41:22 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
00:17:51.598 11:41:22 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:51.598 11:41:22 -- nvmf/common.sh@105 -- # continue 2 00:17:51.598 11:41:22 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:51.598 11:41:22 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:17:51.598 11:41:22 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:51.598 11:41:22 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:51.598 11:41:22 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:51.598 11:41:22 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:51.598 11:41:22 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:51.598 11:41:22 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:17:51.598 11:41:22 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:51.598 11:41:22 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:51.598 11:41:22 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:51.598 11:41:22 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:51.598 11:41:22 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:17:51.598 192.168.100.9' 00:17:51.598 11:41:22 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:51.598 192.168.100.9' 00:17:51.598 11:41:22 -- nvmf/common.sh@446 -- # head -n 1 00:17:51.598 11:41:22 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:51.598 11:41:22 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:17:51.598 192.168.100.9' 00:17:51.598 11:41:22 -- nvmf/common.sh@447 -- # tail -n +2 00:17:51.598 11:41:22 -- nvmf/common.sh@447 -- # head -n 1 00:17:51.599 11:41:22 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:51.599 11:41:22 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:17:51.599 11:41:22 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:51.599 11:41:22 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:17:51.599 11:41:22 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:17:51.599 11:41:22 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:17:51.599 11:41:22 -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:17:51.599 11:41:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:51.599 11:41:22 -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:51.599 11:41:22 -- common/autotest_common.sh@10 -- # set +x 00:17:51.599 11:41:22 -- nvmf/common.sh@470 -- # nvmfpid=3050882 00:17:51.599 11:41:22 -- nvmf/common.sh@471 -- # waitforlisten 3050882 00:17:51.599 11:41:22 -- common/autotest_common.sh@827 -- # '[' -z 3050882 ']' 00:17:51.599 11:41:22 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.599 11:41:22 -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:51.599 11:41:22 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.599 11:41:22 -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:51.599 11:41:22 -- common/autotest_common.sh@10 -- # set +x 00:17:51.599 11:41:22 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:51.599 [2024-05-15 11:41:22.334627] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
00:17:51.599 [2024-05-15 11:41:22.334682] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.858 EAL: No free 2048 kB hugepages reported on node 1 00:17:51.858 [2024-05-15 11:41:22.404220] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:51.858 [2024-05-15 11:41:22.487172] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.858 [2024-05-15 11:41:22.487213] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:51.858 [2024-05-15 11:41:22.487223] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:51.858 [2024-05-15 11:41:22.487232] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:51.858 [2024-05-15 11:41:22.487239] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:51.858 [2024-05-15 11:41:22.487294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.858 [2024-05-15 11:41:22.487313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:51.858 [2024-05-15 11:41:22.487392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:51.858 [2024-05-15 11:41:22.487393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.425 11:41:23 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:52.425 11:41:23 -- common/autotest_common.sh@860 -- # return 0 00:17:52.425 11:41:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:52.425 11:41:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:52.425 11:41:23 -- common/autotest_common.sh@10 -- # set +x 00:17:52.685 11:41:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:52.685 11:41:23 -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:17:52.685 11:41:23 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.685 11:41:23 -- common/autotest_common.sh@10 -- # set +x 00:17:52.685 [2024-05-15 11:41:23.233315] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18daf00/0x18df3f0) succeed. 00:17:52.685 [2024-05-15 11:41:23.243783] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18dc540/0x1920a80) succeed. 
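[editor's note] For orientation, the xtrace that follows condenses to the per-subsystem sequence sketched below in bash. This is a paraphrase of the trace, not the script source: rpc_cmd and waitforblk are helpers from SPDK's test common.sh (rpc_cmd drives the target over /var/tmp/spdk.sock, the RPC address visible in the waitforlisten trace above), and the serial-number formatting is an assumption inferred from the logged values.

    # Sketch of the srq_overwhelm.sh setup loop, reconstructed from the trace below.
    rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024
    for i in $(seq 0 5); do
        nqn=nqn.2016-06.io.spdk:cnode$i
        # Subsystem allowing any host (-a); serial format assumed from the log.
        rpc_cmd nvmf_create_subsystem $nqn -a -s "SPDK$(printf '%014d' "$i")"
        rpc_cmd bdev_malloc_create 64 512 -b Malloc$i      # 64 MiB bdev, 512 B blocks
        rpc_cmd nvmf_subsystem_add_ns $nqn Malloc$i
        rpc_cmd nvmf_subsystem_add_listener $nqn -t rdma -a 192.168.100.8 -s 4420
        nvme connect -i 15 --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID \
            -t rdma -n $nqn -a 192.168.100.8 -s 4420       # -i 15: 15 I/O queues
        waitforblk nvme${i}n1                              # poll lsblk until the namespace appears
    done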
00:17:52.685 11:41:23 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.685 11:41:23 -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:17:52.685 11:41:23 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:17:52.685 11:41:23 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:17:52.685 11:41:23 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.685 11:41:23 -- common/autotest_common.sh@10 -- # set +x 00:17:52.685 11:41:23 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.685 11:41:23 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:52.685 11:41:23 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.685 11:41:23 -- common/autotest_common.sh@10 -- # set +x 00:17:52.685 Malloc0 00:17:52.685 11:41:23 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.685 11:41:23 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:17:52.685 11:41:23 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.685 11:41:23 -- common/autotest_common.sh@10 -- # set +x 00:17:52.685 11:41:23 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.685 11:41:23 -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:17:52.685 11:41:23 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.685 11:41:23 -- common/autotest_common.sh@10 -- # set +x 00:17:52.685 [2024-05-15 11:41:23.348354] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:52.685 [2024-05-15 11:41:23.348744] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:52.685 11:41:23 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.685 11:41:23 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:17:53.622 11:41:24 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:17:53.622 11:41:24 -- common/autotest_common.sh@1231 -- # local i=0 00:17:53.622 11:41:24 -- common/autotest_common.sh@1232 -- # lsblk -l -o NAME 00:17:53.622 11:41:24 -- common/autotest_common.sh@1232 -- # grep -q -w nvme0n1 00:17:53.622 11:41:24 -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:17:53.622 11:41:24 -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:17:53.622 11:41:24 -- common/autotest_common.sh@1242 -- # return 0 00:17:53.622 11:41:24 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:17:53.622 11:41:24 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:53.622 11:41:24 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.622 11:41:24 -- common/autotest_common.sh@10 -- # set +x 00:17:53.882 11:41:24 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.882 11:41:24 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:53.882 11:41:24 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.882 11:41:24 -- common/autotest_common.sh@10 -- # set +x 00:17:53.882 Malloc1 00:17:53.882 11:41:24 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.882 
11:41:24 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:53.882 11:41:24 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.882 11:41:24 -- common/autotest_common.sh@10 -- # set +x 00:17:53.882 11:41:24 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.882 11:41:24 -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:53.882 11:41:24 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.882 11:41:24 -- common/autotest_common.sh@10 -- # set +x 00:17:53.882 11:41:24 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.882 11:41:24 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:54.819 11:41:25 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:17:54.819 11:41:25 -- common/autotest_common.sh@1231 -- # local i=0 00:17:54.819 11:41:25 -- common/autotest_common.sh@1232 -- # lsblk -l -o NAME 00:17:54.819 11:41:25 -- common/autotest_common.sh@1232 -- # grep -q -w nvme1n1 00:17:54.819 11:41:25 -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:17:54.819 11:41:25 -- common/autotest_common.sh@1238 -- # grep -q -w nvme1n1 00:17:54.819 11:41:25 -- common/autotest_common.sh@1242 -- # return 0 00:17:54.819 11:41:25 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:17:54.819 11:41:25 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:17:54.819 11:41:25 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.819 11:41:25 -- common/autotest_common.sh@10 -- # set +x 00:17:54.819 11:41:25 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.819 11:41:25 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:17:54.819 11:41:25 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.819 11:41:25 -- common/autotest_common.sh@10 -- # set +x 00:17:54.819 Malloc2 00:17:54.819 11:41:25 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.819 11:41:25 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:17:54.819 11:41:25 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.819 11:41:25 -- common/autotest_common.sh@10 -- # set +x 00:17:54.819 11:41:25 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.819 11:41:25 -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:17:54.819 11:41:25 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.819 11:41:25 -- common/autotest_common.sh@10 -- # set +x 00:17:54.819 11:41:25 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.819 11:41:25 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:17:55.757 11:41:26 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:17:55.757 11:41:26 -- common/autotest_common.sh@1231 -- # local i=0 00:17:55.757 11:41:26 -- common/autotest_common.sh@1232 -- # lsblk -l -o NAME 00:17:55.757 11:41:26 -- common/autotest_common.sh@1232 -- # grep -q -w nvme2n1 00:17:55.757 11:41:26 -- 
common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:17:55.757 11:41:26 -- common/autotest_common.sh@1238 -- # grep -q -w nvme2n1 00:17:55.757 11:41:26 -- common/autotest_common.sh@1242 -- # return 0 00:17:55.757 11:41:26 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:17:55.757 11:41:26 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:17:55.757 11:41:26 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.757 11:41:26 -- common/autotest_common.sh@10 -- # set +x 00:17:55.757 11:41:26 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.757 11:41:26 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:17:55.757 11:41:26 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.757 11:41:26 -- common/autotest_common.sh@10 -- # set +x 00:17:55.757 Malloc3 00:17:55.757 11:41:26 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.757 11:41:26 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:17:55.757 11:41:26 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.757 11:41:26 -- common/autotest_common.sh@10 -- # set +x 00:17:55.757 11:41:26 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.757 11:41:26 -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:17:55.757 11:41:26 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.757 11:41:26 -- common/autotest_common.sh@10 -- # set +x 00:17:55.757 11:41:26 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.757 11:41:26 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:17:57.136 11:41:27 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:17:57.136 11:41:27 -- common/autotest_common.sh@1231 -- # local i=0 00:17:57.136 11:41:27 -- common/autotest_common.sh@1232 -- # lsblk -l -o NAME 00:17:57.136 11:41:27 -- common/autotest_common.sh@1232 -- # grep -q -w nvme3n1 00:17:57.136 11:41:27 -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:17:57.136 11:41:27 -- common/autotest_common.sh@1238 -- # grep -q -w nvme3n1 00:17:57.136 11:41:27 -- common/autotest_common.sh@1242 -- # return 0 00:17:57.136 11:41:27 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:17:57.136 11:41:27 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:17:57.136 11:41:27 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.136 11:41:27 -- common/autotest_common.sh@10 -- # set +x 00:17:57.136 11:41:27 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.136 11:41:27 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:17:57.136 11:41:27 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.136 11:41:27 -- common/autotest_common.sh@10 -- # set +x 00:17:57.136 Malloc4 00:17:57.136 11:41:27 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.136 11:41:27 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:17:57.136 11:41:27 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.136 11:41:27 -- common/autotest_common.sh@10 -- # set +x 00:17:57.136 11:41:27 -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:17:57.136 11:41:27 -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:17:57.136 11:41:27 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.136 11:41:27 -- common/autotest_common.sh@10 -- # set +x 00:17:57.136 11:41:27 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.136 11:41:27 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:17:58.075 11:41:28 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:17:58.075 11:41:28 -- common/autotest_common.sh@1231 -- # local i=0 00:17:58.075 11:41:28 -- common/autotest_common.sh@1232 -- # lsblk -l -o NAME 00:17:58.075 11:41:28 -- common/autotest_common.sh@1232 -- # grep -q -w nvme4n1 00:17:58.075 11:41:28 -- common/autotest_common.sh@1238 -- # grep -q -w nvme4n1 00:17:58.075 11:41:28 -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:17:58.075 11:41:28 -- common/autotest_common.sh@1242 -- # return 0 00:17:58.075 11:41:28 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:17:58.075 11:41:28 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:17:58.075 11:41:28 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.075 11:41:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.075 11:41:28 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.075 11:41:28 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:17:58.075 11:41:28 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.075 11:41:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.075 Malloc5 00:17:58.075 11:41:28 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.075 11:41:28 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:17:58.075 11:41:28 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.075 11:41:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.075 11:41:28 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.075 11:41:28 -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:17:58.075 11:41:28 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.075 11:41:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.075 11:41:28 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.075 11:41:28 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:17:59.012 11:41:29 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:17:59.012 11:41:29 -- common/autotest_common.sh@1231 -- # local i=0 00:17:59.012 11:41:29 -- common/autotest_common.sh@1232 -- # lsblk -l -o NAME 00:17:59.012 11:41:29 -- common/autotest_common.sh@1232 -- # grep -q -w nvme5n1 00:17:59.012 11:41:29 -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:17:59.012 11:41:29 -- common/autotest_common.sh@1238 -- # grep -q -w nvme5n1 00:17:59.012 11:41:29 -- common/autotest_common.sh@1242 -- # return 0 00:17:59.012 11:41:29 -- target/srq_overwhelm.sh@36 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:17:59.012 [global] 00:17:59.012 thread=1 00:17:59.012 invalidate=1 00:17:59.012 rw=read 00:17:59.012 time_based=1 00:17:59.012 runtime=10 00:17:59.012 ioengine=libaio 00:17:59.012 direct=1 00:17:59.012 bs=1048576 00:17:59.012 iodepth=128 00:17:59.012 norandommap=1 00:17:59.012 numjobs=13 00:17:59.012 00:17:59.012 [job0] 00:17:59.012 filename=/dev/nvme0n1 00:17:59.012 [job1] 00:17:59.012 filename=/dev/nvme1n1 00:17:59.012 [job2] 00:17:59.012 filename=/dev/nvme2n1 00:17:59.012 [job3] 00:17:59.012 filename=/dev/nvme3n1 00:17:59.012 [job4] 00:17:59.012 filename=/dev/nvme4n1 00:17:59.012 [job5] 00:17:59.012 filename=/dev/nvme5n1 00:17:59.012 Could not set queue depth (nvme0n1) 00:17:59.012 Could not set queue depth (nvme1n1) 00:17:59.012 Could not set queue depth (nvme2n1) 00:17:59.012 Could not set queue depth (nvme3n1) 00:17:59.012 Could not set queue depth (nvme4n1) 00:17:59.012 Could not set queue depth (nvme5n1) 00:17:59.271 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:17:59.271 ... 00:17:59.271 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:17:59.271 ... 00:17:59.271 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:17:59.271 ... 00:17:59.271 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:17:59.271 ... 00:17:59.271 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:17:59.271 ... 00:17:59.271 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:17:59.271 ... 
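[editor's note] The job file printed above maps onto a plain fio invocation along these lines (one namespace shown; an illustrative equivalent, not the exact command the fio-wrapper builds). With numjobs=13 per device across the 6 connected namespaces, this accounts for the 78 threads fio reports starting below.

    # Rough single-device equivalent of the generated job file (illustrative).
    fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
        --rw=read --bs=1048576 --iodepth=128 --numjobs=13 --thread \
        --time_based --runtime=10 --invalidate=1 --norandommap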
00:17:59.271 fio-3.35 00:17:59.271 Starting 78 threads 00:18:14.158 00:18:14.158 job0: (groupid=0, jobs=1): err= 0: pid=3052034: Wed May 15 11:41:44 2024 00:18:14.158 read: IOPS=3, BW=3732KiB/s (3822kB/s)(47.0MiB/12895msec) 00:18:14.158 slat (usec): min=522, max=2106.0k, avg=229111.89, stdev=645254.42 00:18:14.158 clat (msec): min=2125, max=12870, avg=9752.34, stdev=3326.27 00:18:14.158 lat (msec): min=4177, max=12893, avg=9981.45, stdev=3156.02 00:18:14.158 clat percentiles (msec): 00:18:14.158 | 1.00th=[ 2123], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6342], 00:18:14.158 | 30.00th=[ 8490], 40.00th=[ 8490], 50.00th=[10671], 60.00th=[12684], 00:18:14.158 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:18:14.158 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:18:14.158 | 99.99th=[12818] 00:18:14.158 lat (msec) : >=2000=100.00% 00:18:14.158 cpu : usr=0.01%, sys=0.33%, ctx=50, majf=0, minf=12033 00:18:14.158 IO depths : 1=2.1%, 2=4.3%, 4=8.5%, 8=17.0%, 16=34.0%, 32=34.0%, >=64=0.0% 00:18:14.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.158 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:18:14.158 issued rwts: total=47,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.158 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:14.158 job0: (groupid=0, jobs=1): err= 0: pid=3052035: Wed May 15 11:41:44 2024 00:18:14.158 read: IOPS=4, BW=4246KiB/s (4348kB/s)(54.0MiB/13024msec) 00:18:14.158 slat (usec): min=866, max=3156.9k, avg=201820.86, stdev=641640.23 00:18:14.158 clat (msec): min=2124, max=13022, avg=11531.68, stdev=2982.30 00:18:14.158 lat (msec): min=4184, max=13023, avg=11733.50, stdev=2687.93 00:18:14.158 clat percentiles (msec): 00:18:14.158 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 6342], 20.00th=[11745], 00:18:14.158 | 30.00th=[12818], 40.00th=[12953], 50.00th=[12953], 60.00th=[12953], 00:18:14.158 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:18:14.158 | 99.00th=[13087], 99.50th=[13087], 99.90th=[13087], 99.95th=[13087], 00:18:14.158 | 99.99th=[13087] 00:18:14.158 lat (msec) : >=2000=100.00% 00:18:14.158 cpu : usr=0.00%, sys=0.47%, ctx=101, majf=0, minf=13825 00:18:14.158 IO depths : 1=1.9%, 2=3.7%, 4=7.4%, 8=14.8%, 16=29.6%, 32=42.6%, >=64=0.0% 00:18:14.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.158 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:18:14.158 issued rwts: total=54,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.158 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:14.158 job0: (groupid=0, jobs=1): err= 0: pid=3052036: Wed May 15 11:41:44 2024 00:18:14.158 read: IOPS=4, BW=4659KiB/s (4771kB/s)(59.0MiB/12967msec) 00:18:14.158 slat (usec): min=751, max=2109.0k, avg=183790.82, stdev=584060.48 00:18:14.158 clat (msec): min=2122, max=12963, avg=10480.50, stdev=3267.95 00:18:14.158 lat (msec): min=4180, max=12966, avg=10664.29, stdev=3089.89 00:18:14.158 clat percentiles (msec): 00:18:14.158 | 1.00th=[ 2123], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6409], 00:18:14.158 | 30.00th=[ 8490], 40.00th=[10671], 50.00th=[12818], 60.00th=[12818], 00:18:14.158 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:18:14.158 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:18:14.158 | 99.99th=[12953] 00:18:14.158 lat (msec) : >=2000=100.00% 00:18:14.158 cpu : usr=0.00%, sys=0.46%, ctx=74, majf=0, minf=15105 
00:18:14.158 IO depths : 1=1.7%, 2=3.4%, 4=6.8%, 8=13.6%, 16=27.1%, 32=47.5%, >=64=0.0% 00:18:14.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.158 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:18:14.158 issued rwts: total=59,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.158 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:14.158 job0: (groupid=0, jobs=1): err= 0: pid=3052037: Wed May 15 11:41:44 2024 00:18:14.158 read: IOPS=4, BW=4851KiB/s (4968kB/s)(61.0MiB/12876msec) 00:18:14.158 slat (usec): min=685, max=2080.1k, avg=176207.68, stdev=566106.13 00:18:14.158 clat (msec): min=2126, max=12873, avg=9328.00, stdev=3485.94 00:18:14.158 lat (msec): min=4176, max=12875, avg=9504.21, stdev=3386.11 00:18:14.158 clat percentiles (msec): 00:18:14.158 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 4212], 20.00th=[ 6342], 00:18:14.158 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[10671], 60.00th=[12684], 00:18:14.158 | 70.00th=[12684], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:18:14.158 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:18:14.158 | 99.99th=[12818] 00:18:14.158 lat (msec) : >=2000=100.00% 00:18:14.158 cpu : usr=0.00%, sys=0.45%, ctx=59, majf=0, minf=15617 00:18:14.158 IO depths : 1=1.6%, 2=3.3%, 4=6.6%, 8=13.1%, 16=26.2%, 32=49.2%, >=64=0.0% 00:18:14.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.158 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:18:14.158 issued rwts: total=61,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.158 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:14.158 job0: (groupid=0, jobs=1): err= 0: pid=3052038: Wed May 15 11:41:44 2024 00:18:14.158 read: IOPS=33, BW=33.5MiB/s (35.2MB/s)(472MiB/14077msec) 00:18:14.158 slat (usec): min=93, max=2107.2k, avg=25282.74, stdev=178176.16 00:18:14.158 clat (msec): min=739, max=10418, avg=3484.32, stdev=3851.60 00:18:14.158 lat (msec): min=742, max=10428, avg=3509.60, stdev=3860.12 00:18:14.158 clat percentiles (msec): 00:18:14.158 | 1.00th=[ 743], 5.00th=[ 768], 10.00th=[ 793], 20.00th=[ 869], 00:18:14.158 | 30.00th=[ 936], 40.00th=[ 995], 50.00th=[ 1036], 60.00th=[ 1318], 00:18:14.158 | 70.00th=[ 2299], 80.00th=[ 9731], 90.00th=[10000], 95.00th=[10268], 00:18:14.158 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:18:14.158 | 99.99th=[10402] 00:18:14.159 bw ( KiB/s): min= 2048, max=176128, per=3.03%, avg=78507.11, stdev=74202.40, samples=9 00:18:14.159 iops : min= 2, max= 172, avg=76.67, stdev=72.46, samples=9 00:18:14.159 lat (msec) : 750=1.69%, 1000=38.77%, 2000=21.19%, >=2000=38.35% 00:18:14.159 cpu : usr=0.01%, sys=0.93%, ctx=827, majf=0, minf=32769 00:18:14.159 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.4%, 32=6.8%, >=64=86.7% 00:18:14.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.159 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:18:14.159 issued rwts: total=472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.159 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:14.159 job0: (groupid=0, jobs=1): err= 0: pid=3052039: Wed May 15 11:41:44 2024 00:18:14.159 read: IOPS=1, BW=1979KiB/s (2026kB/s)(25.0MiB/12938msec) 00:18:14.159 slat (msec): min=6, max=3149, avg=432.23, stdev=899.80 00:18:14.159 clat (msec): min=2131, max=12929, avg=9743.67, stdev=3682.03 00:18:14.159 lat (msec): min=4183, max=12937, avg=10175.90, 
stdev=3372.40 00:18:14.159 clat percentiles (msec): 00:18:14.159 | 1.00th=[ 2140], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 4245], 00:18:14.159 | 30.00th=[ 6409], 40.00th=[ 9597], 50.00th=[11745], 60.00th=[11745], 00:18:14.159 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12953], 95.00th=[12953], 00:18:14.159 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:18:14.159 | 99.99th=[12953] 00:18:14.159 lat (msec) : >=2000=100.00% 00:18:14.159 cpu : usr=0.00%, sys=0.18%, ctx=67, majf=0, minf=6401 00:18:14.159 IO depths : 1=4.0%, 2=8.0%, 4=16.0%, 8=32.0%, 16=40.0%, 32=0.0%, >=64=0.0% 00:18:14.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.159 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:18:14.159 issued rwts: total=25,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.159 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:14.159 job0: (groupid=0, jobs=1): err= 0: pid=3052040: Wed May 15 11:41:44 2024 00:18:14.159 read: IOPS=2, BW=2636KiB/s (2699kB/s)(36.0MiB/13984msec) 00:18:14.159 slat (usec): min=624, max=4245.0k, avg=328959.94, stdev=894356.40 00:18:14.159 clat (msec): min=2140, max=13937, avg=11906.93, stdev=3490.46 00:18:14.159 lat (msec): min=4183, max=13983, avg=12235.89, stdev=3077.38 00:18:14.159 clat percentiles (msec): 00:18:14.159 | 1.00th=[ 2140], 5.00th=[ 4178], 10.00th=[ 6342], 20.00th=[10671], 00:18:14.159 | 30.00th=[12818], 40.00th=[13624], 50.00th=[13758], 60.00th=[13758], 00:18:14.159 | 70.00th=[13758], 80.00th=[13758], 90.00th=[13892], 95.00th=[13892], 00:18:14.159 | 99.00th=[13892], 99.50th=[13892], 99.90th=[13892], 99.95th=[13892], 00:18:14.159 | 99.99th=[13892] 00:18:14.159 lat (msec) : >=2000=100.00% 00:18:14.159 cpu : usr=0.01%, sys=0.21%, ctx=94, majf=0, minf=9217 00:18:14.159 IO depths : 1=2.8%, 2=5.6%, 4=11.1%, 8=22.2%, 16=44.4%, 32=13.9%, >=64=0.0% 00:18:14.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.159 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:18:14.159 issued rwts: total=36,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.159 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:14.159 job0: (groupid=0, jobs=1): err= 0: pid=3052041: Wed May 15 11:41:44 2024 00:18:14.159 read: IOPS=2, BW=2267KiB/s (2321kB/s)(24.0MiB/10842msec) 00:18:14.159 slat (usec): min=1989, max=2088.5k, avg=447730.11, stdev=846944.93 00:18:14.159 clat (msec): min=96, max=10825, avg=6386.72, stdev=3388.62 00:18:14.159 lat (msec): min=2098, max=10841, avg=6834.45, stdev=3227.38 00:18:14.159 clat percentiles (msec): 00:18:14.159 | 1.00th=[ 96], 5.00th=[ 2106], 10.00th=[ 2106], 20.00th=[ 2198], 00:18:14.159 | 30.00th=[ 4329], 40.00th=[ 4396], 50.00th=[ 6477], 60.00th=[ 6544], 00:18:14.159 | 70.00th=[ 8658], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805], 00:18:14.159 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:18:14.159 | 99.99th=[10805] 00:18:14.159 lat (msec) : 100=4.17%, >=2000=95.83% 00:18:14.159 cpu : usr=0.00%, sys=0.17%, ctx=60, majf=0, minf=6145 00:18:14.159 IO depths : 1=4.2%, 2=8.3%, 4=16.7%, 8=33.3%, 16=37.5%, 32=0.0%, >=64=0.0% 00:18:14.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.159 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:18:14.159 issued rwts: total=24,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.159 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:14.159 job0: (groupid=0, 
jobs=1): err= 0: pid=3052043: Wed May 15 11:41:44 2024 00:18:14.159 read: IOPS=30, BW=30.5MiB/s (32.0MB/s)(430MiB/14080msec) 00:18:14.159 slat (usec): min=484, max=2110.7k, avg=27767.54, stdev=188515.57 00:18:14.159 clat (msec): min=742, max=10743, avg=3854.81, stdev=4092.81 00:18:14.159 lat (msec): min=745, max=10745, avg=3882.58, stdev=4100.68 00:18:14.159 clat percentiles (msec): 00:18:14.159 | 1.00th=[ 751], 5.00th=[ 776], 10.00th=[ 793], 20.00th=[ 860], 00:18:14.159 | 30.00th=[ 961], 40.00th=[ 1003], 50.00th=[ 1183], 60.00th=[ 2089], 00:18:14.159 | 70.00th=[ 4463], 80.00th=[10134], 90.00th=[10537], 95.00th=[10671], 00:18:14.159 | 99.00th=[10671], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:18:14.159 | 99.99th=[10805] 00:18:14.159 bw ( KiB/s): min= 2048, max=178176, per=2.66%, avg=68948.33, stdev=68685.37, samples=9 00:18:14.159 iops : min= 2, max= 174, avg=67.22, stdev=67.19, samples=9 00:18:14.159 lat (msec) : 750=0.93%, 1000=37.44%, 2000=18.14%, >=2000=43.49% 00:18:14.159 cpu : usr=0.01%, sys=0.94%, ctx=799, majf=0, minf=32263 00:18:14.159 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.9%, 16=3.7%, 32=7.4%, >=64=85.3% 00:18:14.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.159 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:18:14.159 issued rwts: total=430,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.159 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:14.159 job0: (groupid=0, jobs=1): err= 0: pid=3052044: Wed May 15 11:41:44 2024 00:18:14.159 read: IOPS=5, BW=5505KiB/s (5637kB/s)(70.0MiB/13021msec) 00:18:14.159 slat (usec): min=861, max=2125.2k, avg=155697.97, stdev=537738.97 00:18:14.159 clat (msec): min=2121, max=13019, avg=10908.46, stdev=3271.22 00:18:14.159 lat (msec): min=4184, max=13020, avg=11064.15, stdev=3101.93 00:18:14.159 clat percentiles (msec): 00:18:14.159 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6409], 00:18:14.159 | 30.00th=[10671], 40.00th=[12818], 50.00th=[12953], 60.00th=[12953], 00:18:14.159 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:18:14.159 | 99.00th=[13087], 99.50th=[13087], 99.90th=[13087], 99.95th=[13087], 00:18:14.159 | 99.99th=[13087] 00:18:14.159 lat (msec) : >=2000=100.00% 00:18:14.159 cpu : usr=0.00%, sys=0.52%, ctx=101, majf=0, minf=17921 00:18:14.159 IO depths : 1=1.4%, 2=2.9%, 4=5.7%, 8=11.4%, 16=22.9%, 32=45.7%, >=64=10.0% 00:18:14.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.159 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:18:14.159 issued rwts: total=70,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.159 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:14.159 job0: (groupid=0, jobs=1): err= 0: pid=3052045: Wed May 15 11:41:44 2024 00:18:14.159 read: IOPS=5, BW=5826KiB/s (5966kB/s)(80.0MiB/14060msec) 00:18:14.159 slat (usec): min=404, max=2134.9k, avg=148970.42, stdev=518072.78 00:18:14.159 clat (msec): min=2141, max=14058, avg=11267.69, stdev=3695.12 00:18:14.159 lat (msec): min=4180, max=14059, avg=11416.66, stdev=3560.33 00:18:14.159 clat percentiles (msec): 00:18:14.159 | 1.00th=[ 2140], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6342], 00:18:14.159 | 30.00th=[10671], 40.00th=[12818], 50.00th=[12818], 60.00th=[13892], 00:18:14.159 | 70.00th=[14026], 80.00th=[14026], 90.00th=[14026], 95.00th=[14026], 00:18:14.159 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026], 00:18:14.159 | 99.99th=[14026] 
00:18:14.159 lat (msec) : >=2000=100.00% 00:18:14.159 cpu : usr=0.00%, sys=0.48%, ctx=98, majf=0, minf=20481 00:18:14.159 IO depths : 1=1.2%, 2=2.5%, 4=5.0%, 8=10.0%, 16=20.0%, 32=40.0%, >=64=21.3% 00:18:14.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.159 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:18:14.159 issued rwts: total=80,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.159 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:14.159 job0: (groupid=0, jobs=1): err= 0: pid=3052046: Wed May 15 11:41:44 2024 00:18:14.159 read: IOPS=29, BW=29.6MiB/s (31.0MB/s)(324MiB/10957msec) 00:18:14.159 slat (usec): min=97, max=2120.5k, avg=33499.22, stdev=243331.01 00:18:14.159 clat (msec): min=101, max=6839, avg=2490.52, stdev=2249.91 00:18:14.159 lat (msec): min=253, max=8595, avg=2524.02, stdev=2279.54 00:18:14.159 clat percentiles (msec): 00:18:14.159 | 1.00th=[ 253], 5.00th=[ 253], 10.00th=[ 253], 20.00th=[ 255], 00:18:14.159 | 30.00th=[ 255], 40.00th=[ 351], 50.00th=[ 2232], 60.00th=[ 3809], 00:18:14.159 | 70.00th=[ 3876], 80.00th=[ 3943], 90.00th=[ 6812], 95.00th=[ 6812], 00:18:14.159 | 99.00th=[ 6812], 99.50th=[ 6812], 99.90th=[ 6812], 99.95th=[ 6812], 00:18:14.159 | 99.99th=[ 6812] 00:18:14.159 bw ( KiB/s): min=38912, max=182272, per=5.17%, avg=133802.67, stdev=82184.11, samples=3 00:18:14.159 iops : min= 38, max= 178, avg=130.67, stdev=80.26, samples=3 00:18:14.159 lat (msec) : 250=0.31%, 500=41.98%, 2000=5.86%, >=2000=51.85% 00:18:14.159 cpu : usr=0.04%, sys=1.17%, ctx=271, majf=0, minf=32769 00:18:14.159 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.5%, 16=4.9%, 32=9.9%, >=64=80.6% 00:18:14.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.159 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:18:14.159 issued rwts: total=324,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.159 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:14.159 job0: (groupid=0, jobs=1): err= 0: pid=3052047: Wed May 15 11:41:44 2024 00:18:14.159 read: IOPS=3, BW=3702KiB/s (3790kB/s)(47.0MiB/13002msec) 00:18:14.159 slat (usec): min=934, max=2108.7k, avg=231471.93, stdev=613495.65 00:18:14.159 clat (msec): min=2121, max=12998, avg=11135.63, stdev=3162.24 00:18:14.159 lat (msec): min=4217, max=13000, avg=11367.10, stdev=2873.04 00:18:14.159 clat percentiles (msec): 00:18:14.159 | 1.00th=[ 2123], 5.00th=[ 4245], 10.00th=[ 4245], 20.00th=[ 8557], 00:18:14.159 | 30.00th=[12818], 40.00th=[12818], 50.00th=[12818], 60.00th=[12953], 00:18:14.159 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:18:14.159 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:18:14.159 | 99.99th=[12953] 00:18:14.159 lat (msec) : >=2000=100.00% 00:18:14.159 cpu : usr=0.01%, sys=0.38%, ctx=74, majf=0, minf=12033 00:18:14.159 IO depths : 1=2.1%, 2=4.3%, 4=8.5%, 8=17.0%, 16=34.0%, 32=34.0%, >=64=0.0% 00:18:14.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.159 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:18:14.159 issued rwts: total=47,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.159 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:14.159 job1: (groupid=0, jobs=1): err= 0: pid=3052048: Wed May 15 11:41:44 2024 00:18:14.159 read: IOPS=113, BW=113MiB/s (119MB/s)(1226MiB/10841msec) 00:18:14.160 slat (usec): min=54, max=2068.2k, avg=8737.97, stdev=101365.44 00:18:14.160 clat 
(msec): min=110, max=6683, avg=1080.88, stdev=1815.60
00:18:14.160 lat (msec): min=110, max=6685, avg=1089.62, stdev=1821.47
00:18:14.160 clat percentiles (msec):
00:18:14.160 | 1.00th=[ 127], 5.00th=[ 134], 10.00th=[ 186], 20.00th=[ 368],
00:18:14.160 | 30.00th=[ 397], 40.00th=[ 426], 50.00th=[ 472], 60.00th=[ 535],
00:18:14.160 | 70.00th=[ 575], 80.00th=[ 625], 90.00th=[ 4329], 95.00th=[ 6611],
00:18:14.160 | 99.00th=[ 6678], 99.50th=[ 6678], 99.90th=[ 6678], 99.95th=[ 6678],
00:18:14.160 | 99.99th=[ 6678]
00:18:14.160 bw ( KiB/s): min=10240, max=516096, per=7.89%, avg=204427.64, stdev=155153.59, samples=11
00:18:14.160 iops : min= 10, max= 504, avg=199.64, stdev=151.52, samples=11
00:18:14.160 lat (msec) : 250=12.64%, 500=42.58%, 750=30.91%, 1000=1.96%, >=2000=11.91%
00:18:14.160 cpu : usr=0.04%, sys=1.96%, ctx=1041, majf=0, minf=32769
00:18:14.160 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.6%, >=64=94.9%
00:18:14.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.160 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:18:14.160 issued rwts: total=1226,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.160 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.160 job1: (groupid=0, jobs=1): err= 0: pid=3052049: Wed May 15 11:41:44 2024
00:18:14.160 read: IOPS=9, BW=9.81MiB/s (10.3MB/s)(137MiB/13969msec)
00:18:14.160 slat (usec): min=531, max=2124.7k, avg=86408.15, stdev=396807.88
00:18:14.160 clat (msec): min=2129, max=12768, avg=6505.10, stdev=1395.26
00:18:14.160 lat (msec): min=4180, max=12775, avg=6591.51, stdev=1460.88
00:18:14.160 clat percentiles (msec):
00:18:14.160 | 1.00th=[ 4178], 5.00th=[ 4212], 10.00th=[ 6141], 20.00th=[ 6141],
00:18:14.160 | 30.00th=[ 6208], 40.00th=[ 6208], 50.00th=[ 6275], 60.00th=[ 6275],
00:18:14.160 | 70.00th=[ 6342], 80.00th=[ 6342], 90.00th=[ 8658], 95.00th=[10671],
00:18:14.160 | 99.00th=[10671], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818],
00:18:14.160 | 99.99th=[12818]
00:18:14.160 bw ( KiB/s): min= 2052, max=13473, per=0.30%, avg=7762.50, stdev=8075.87, samples=2
00:18:14.160 iops : min= 2, max= 13, avg= 7.50, stdev= 7.78, samples=2
00:18:14.160 lat (msec) : >=2000=100.00%
00:18:14.160 cpu : usr=0.00%, sys=0.72%, ctx=112, majf=0, minf=32769
00:18:14.160 IO depths : 1=0.7%, 2=1.5%, 4=2.9%, 8=5.8%, 16=11.7%, 32=23.4%, >=64=54.0%
00:18:14.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.160 complete : 0=0.0%, 4=90.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=9.1%
00:18:14.160 issued rwts: total=137,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.160 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.160 job1: (groupid=0, jobs=1): err= 0: pid=3052050: Wed May 15 11:41:44 2024
00:18:14.160 read: IOPS=2, BW=2785KiB/s (2852kB/s)(35.0MiB/12870msec)
00:18:14.160 slat (usec): min=706, max=2101.2k, avg=307661.10, stdev=737988.46
00:18:14.160 clat (msec): min=2101, max=12866, avg=8676.17, stdev=3343.96
00:18:14.160 lat (msec): min=4170, max=12869, avg=8983.83, stdev=3214.11
00:18:14.160 clat percentiles (msec):
00:18:14.160 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 4178], 20.00th=[ 4212],
00:18:14.160 | 30.00th=[ 6342], 40.00th=[ 6409], 50.00th=[ 8490], 60.00th=[10671],
00:18:14.160 | 70.00th=[10671], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818],
00:18:14.160 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818],
00:18:14.160 | 99.99th=[12818]
00:18:14.160 lat (msec) : >=2000=100.00%
00:18:14.160 cpu : usr=0.00%, sys=0.26%, ctx=50, majf=0, minf=8961
00:18:14.160 IO depths : 1=2.9%, 2=5.7%, 4=11.4%, 8=22.9%, 16=45.7%, 32=11.4%, >=64=0.0%
00:18:14.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.160 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:18:14.160 issued rwts: total=35,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.160 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.160 job1: (groupid=0, jobs=1): err= 0: pid=3052051: Wed May 15 11:41:44 2024
00:18:14.160 read: IOPS=2, BW=2069KiB/s (2118kB/s)(26.0MiB/12871msec)
00:18:14.160 slat (usec): min=1039, max=2089.8k, avg=414231.04, stdev=825284.21
00:18:14.160 clat (msec): min=2100, max=12867, avg=9077.19, stdev=3522.11
00:18:14.160 lat (msec): min=4166, max=12870, avg=9491.42, stdev=3294.80
00:18:14.160 clat percentiles (msec):
00:18:14.160 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 4178], 20.00th=[ 6275],
00:18:14.160 | 30.00th=[ 6342], 40.00th=[ 8490], 50.00th=[ 8557], 60.00th=[10671],
00:18:14.160 | 70.00th=[12684], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818],
00:18:14.160 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818],
00:18:14.160 | 99.99th=[12818]
00:18:14.160 lat (msec) : >=2000=100.00%
00:18:14.160 cpu : usr=0.00%, sys=0.19%, ctx=63, majf=0, minf=6657
00:18:14.160 IO depths : 1=3.8%, 2=7.7%, 4=15.4%, 8=30.8%, 16=42.3%, 32=0.0%, >=64=0.0%
00:18:14.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.160 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:18:14.160 issued rwts: total=26,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.160 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.160 job1: (groupid=0, jobs=1): err= 0: pid=3052052: Wed May 15 11:41:44 2024
00:18:14.160 read: IOPS=14, BW=14.8MiB/s (15.5MB/s)(191MiB/12932msec)
00:18:14.160 slat (usec): min=61, max=2093.3k, avg=56759.83, stdev=317988.09
00:18:14.160 clat (msec): min=400, max=12806, avg=8395.65, stdev=5158.37
00:18:14.160 lat (msec): min=403, max=12914, avg=8452.41, stdev=5145.67
00:18:14.160 clat percentiles (msec):
00:18:14.160 | 1.00th=[ 401], 5.00th=[ 405], 10.00th=[ 468], 20.00th=[ 542],
00:18:14.160 | 30.00th=[ 4212], 40.00th=[ 8557], 50.00th=[12281], 60.00th=[12281],
00:18:14.160 | 70.00th=[12416], 80.00th=[12416], 90.00th=[12550], 95.00th=[12550],
00:18:14.160 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818],
00:18:14.160 | 99.99th=[12818]
00:18:14.160 bw ( KiB/s): min= 2048, max=94208, per=0.84%, avg=21843.17, stdev=35607.30, samples=6
00:18:14.160 iops : min= 2, max= 92, avg=21.17, stdev=34.86, samples=6
00:18:14.160 lat (msec) : 500=15.71%, 750=8.38%, 2000=1.57%, >=2000=74.35%
00:18:14.160 cpu : usr=0.00%, sys=0.88%, ctx=124, majf=0, minf=32769
00:18:14.160 IO depths : 1=0.5%, 2=1.0%, 4=2.1%, 8=4.2%, 16=8.4%, 32=16.8%, >=64=67.0%
00:18:14.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.160 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.5%
00:18:14.160 issued rwts: total=191,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.160 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.160 job1: (groupid=0, jobs=1): err= 0: pid=3052053: Wed May 15 11:41:44 2024
00:18:14.160 read: IOPS=102, BW=103MiB/s (108MB/s)(1326MiB/12930msec)
00:18:14.160 slat (usec): min=41, max=2098.9k, avg=8162.77, stdev=79156.75
00:18:14.160 clat (msec): min=216, max=9586, avg=1106.89, stdev=1801.28
00:18:14.160 lat (msec): min=217, max=9912, avg=1115.05, stdev=1808.71
00:18:14.160 clat percentiles (msec):
00:18:14.160 | 1.00th=[ 218], 5.00th=[ 220], 10.00th=[ 222], 20.00th=[ 241],
00:18:14.160 | 30.00th=[ 249], 40.00th=[ 447], 50.00th=[ 584], 60.00th=[ 693],
00:18:14.160 | 70.00th=[ 735], 80.00th=[ 802], 90.00th=[ 2467], 95.00th=[ 6544],
00:18:14.160 | 99.00th=[ 6678], 99.50th=[ 6678], 99.90th=[ 6678], 99.95th=[ 9597],
00:18:14.160 | 99.99th=[ 9597]
00:18:14.160 bw ( KiB/s): min= 2048, max=557056, per=7.29%, avg=188925.54, stdev=182745.00, samples=13
00:18:14.160 iops : min= 2, max= 544, avg=184.46, stdev=178.45, samples=13
00:18:14.160 lat (msec) : 250=31.60%, 500=11.31%, 750=30.69%, 1000=13.88%, 2000=1.81%
00:18:14.160 lat (msec) : >=2000=10.71%
00:18:14.160 cpu : usr=0.06%, sys=1.52%, ctx=1624, majf=0, minf=32769
00:18:14.160 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.2%
00:18:14.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.160 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:18:14.160 issued rwts: total=1326,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.160 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.160 job1: (groupid=0, jobs=1): err= 0: pid=3052054: Wed May 15 11:41:44 2024
00:18:14.160 read: IOPS=21, BW=22.0MiB/s (23.1MB/s)(285MiB/12959msec)
00:18:14.160 slat (usec): min=72, max=2117.6k, avg=38114.35, stdev=259807.14
00:18:14.160 clat (msec): min=323, max=12308, avg=5639.60, stdev=5653.02
00:18:14.160 lat (msec): min=325, max=12311, avg=5677.71, stdev=5660.90
00:18:14.160 clat percentiles (msec):
00:18:14.160 | 1.00th=[ 326], 5.00th=[ 342], 10.00th=[ 359], 20.00th=[ 393],
00:18:14.160 | 30.00th=[ 409], 40.00th=[ 451], 50.00th=[ 567], 60.00th=[12013],
00:18:14.160 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12281], 95.00th=[12281],
00:18:14.160 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281],
00:18:14.160 | 99.99th=[12281]
00:18:14.160 bw ( KiB/s): min= 2048, max=298411, per=2.08%, avg=53829.00, stdev=119831.39, samples=6
00:18:14.160 iops : min= 2, max= 291, avg=52.33, stdev=116.93, samples=6
00:18:14.160 lat (msec) : 500=42.46%, 750=8.77%, 2000=1.05%, >=2000=47.72%
00:18:14.160 cpu : usr=0.01%, sys=0.87%, ctx=262, majf=0, minf=32769
00:18:14.160 IO depths : 1=0.4%, 2=0.7%, 4=1.4%, 8=2.8%, 16=5.6%, 32=11.2%, >=64=77.9%
00:18:14.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.160 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6%
00:18:14.160 issued rwts: total=285,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.160 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.160 job1: (groupid=0, jobs=1): err= 0: pid=3052055: Wed May 15 11:41:44 2024
00:18:14.160 read: IOPS=26, BW=27.0MiB/s (28.3MB/s)(376MiB/13942msec)
00:18:14.160 slat (usec): min=495, max=2091.9k, avg=31415.38, stdev=210985.80
00:18:14.160 clat (msec): min=911, max=9609, avg=3815.89, stdev=3594.77
00:18:14.160 lat (msec): min=932, max=9630, avg=3847.31, stdev=3599.80
00:18:14.160 clat percentiles (msec):
00:18:14.160 | 1.00th=[ 927], 5.00th=[ 953], 10.00th=[ 978], 20.00th=[ 1045],
00:18:14.160 | 30.00th=[ 1116], 40.00th=[ 1150], 50.00th=[ 1234], 60.00th=[ 1385],
00:18:14.160 | 70.00th=[ 7483], 80.00th=[ 8926], 90.00th=[ 9194], 95.00th=[ 9463],
00:18:14.160 | 99.00th=[ 9597], 99.50th=[ 9597], 99.90th=[ 9597], 99.95th=[ 9597],
00:18:14.160 | 99.99th=[ 9597]
00:18:14.160 bw ( KiB/s): min= 2052, max=141312, per=2.44%, avg=63316.25, stdev=58481.69, samples=8
00:18:14.160 iops : min= 2, max= 138, avg=61.75, stdev=57.20, samples=8
00:18:14.160 lat (msec) : 1000=14.36%, 2000=46.28%, >=2000=39.36%
00:18:14.160 cpu : usr=0.03%, sys=0.99%, ctx=648, majf=0, minf=32769
00:18:14.160 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.1%, 16=4.3%, 32=8.5%, >=64=83.2%
00:18:14.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.160 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:18:14.160 issued rwts: total=376,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.160 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.160 job1: (groupid=0, jobs=1): err= 0: pid=3052056: Wed May 15 11:41:44 2024
00:18:14.160 read: IOPS=2, BW=2304KiB/s (2359kB/s)(29.0MiB/12888msec)
00:18:14.160 slat (usec): min=864, max=2113.6k, avg=371916.48, stdev=802722.16
00:18:14.161 clat (msec): min=2101, max=12885, avg=10316.86, stdev=3473.92
00:18:14.161 lat (msec): min=4182, max=12887, avg=10688.78, stdev=3122.57
00:18:14.161 clat percentiles (msec):
00:18:14.161 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6342],
00:18:14.161 | 30.00th=[ 8490], 40.00th=[10671], 50.00th=[12818], 60.00th=[12818],
00:18:14.161 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818],
00:18:14.161 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953],
00:18:14.161 | 99.99th=[12953]
00:18:14.161 lat (msec) : >=2000=100.00%
00:18:14.161 cpu : usr=0.00%, sys=0.20%, ctx=43, majf=0, minf=7425
00:18:14.161 IO depths : 1=3.4%, 2=6.9%, 4=13.8%, 8=27.6%, 16=48.3%, 32=0.0%, >=64=0.0%
00:18:14.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.161 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:18:14.161 issued rwts: total=29,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.161 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.161 job1: (groupid=0, jobs=1): err= 0: pid=3052057: Wed May 15 11:41:44 2024
00:18:14.161 read: IOPS=40, BW=40.3MiB/s (42.3MB/s)(520MiB/12903msec)
00:18:14.161 slat (usec): min=109, max=2103.0k, avg=20762.78, stdev=157804.55
00:18:14.161 clat (msec): min=361, max=10012, avg=2765.19, stdev=2809.77
00:18:14.161 lat (msec): min=363, max=10037, avg=2785.95, stdev=2822.24
00:18:14.161 clat percentiles (msec):
00:18:14.161 | 1.00th=[ 363], 5.00th=[ 372], 10.00th=[ 384], 20.00th=[ 414],
00:18:14.161 | 30.00th=[ 447], 40.00th=[ 1028], 50.00th=[ 1989], 60.00th=[ 2165],
00:18:14.161 | 70.00th=[ 2601], 80.00th=[ 7483], 90.00th=[ 7684], 95.00th=[ 7819],
00:18:14.161 | 99.00th=[ 7886], 99.50th=[ 7886], 99.90th=[10000], 99.95th=[10000],
00:18:14.161 | 99.99th=[10000]
00:18:14.161 bw ( KiB/s): min= 2048, max=335872, per=3.45%, avg=89429.33, stdev=120148.85, samples=9
00:18:14.161 iops : min= 2, max= 328, avg=87.33, stdev=117.33, samples=9
00:18:14.161 lat (msec) : 500=32.88%, 1000=6.15%, 2000=12.12%, >=2000=48.85%
00:18:14.161 cpu : usr=0.02%, sys=1.05%, ctx=835, majf=0, minf=32769
00:18:14.161 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.1%, 32=6.2%, >=64=87.9%
00:18:14.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.161 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:18:14.161 issued rwts: total=520,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.161 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.161 job1: (groupid=0, jobs=1): err= 0: pid=3052058: Wed May 15 11:41:44 2024
00:18:14.161 read: IOPS=24, BW=24.2MiB/s (25.3MB/s)(337MiB/13945msec)
00:18:14.161 slat (usec): min=579, max=2097.1k, avg=35052.39, stdev=222886.50
00:18:14.161 clat (msec): min=898, max=9627, avg=4183.72, stdev=3705.08
00:18:14.161 lat (msec): min=912, max=9628, avg=4218.77, stdev=3707.58
00:18:14.161 clat percentiles (msec):
00:18:14.161 | 1.00th=[ 911], 5.00th=[ 927], 10.00th=[ 944], 20.00th=[ 1011],
00:18:14.161 | 30.00th=[ 1083], 40.00th=[ 1150], 50.00th=[ 1469], 60.00th=[ 3809],
00:18:14.161 | 70.00th=[ 8658], 80.00th=[ 8926], 90.00th=[ 9329], 95.00th=[ 9463],
00:18:14.161 | 99.00th=[ 9597], 99.50th=[ 9597], 99.90th=[ 9597], 99.95th=[ 9597],
00:18:14.161 | 99.99th=[ 9597]
00:18:14.161 bw ( KiB/s): min= 2052, max=131072, per=2.36%, avg=61155.43, stdev=53466.67, samples=7
00:18:14.161 iops : min= 2, max= 128, avg=59.71, stdev=52.22, samples=7
00:18:14.161 lat (msec) : 1000=18.99%, 2000=36.80%, >=2000=44.21%
00:18:14.161 cpu : usr=0.00%, sys=0.95%, ctx=651, majf=0, minf=32769
00:18:14.161 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.7%, 32=9.5%, >=64=81.3%
00:18:14.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.161 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5%
00:18:14.161 issued rwts: total=337,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.161 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.161 job1: (groupid=0, jobs=1): err= 0: pid=3052059: Wed May 15 11:41:44 2024
00:18:14.161 read: IOPS=3, BW=3891KiB/s (3985kB/s)(49.0MiB/12894msec)
00:18:14.161 slat (usec): min=917, max=2088.3k, avg=220367.16, stdev=629597.67
00:18:14.161 clat (msec): min=2095, max=12891, avg=10683.70, stdev=3304.12
00:18:14.161 lat (msec): min=4154, max=12893, avg=10904.07, stdev=3071.24
00:18:14.161 clat percentiles (msec):
00:18:14.161 | 1.00th=[ 2089], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6409],
00:18:14.161 | 30.00th=[10671], 40.00th=[12684], 50.00th=[12818], 60.00th=[12818],
00:18:14.161 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12953], 95.00th=[12953],
00:18:14.161 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953],
00:18:14.161 | 99.99th=[12953]
00:18:14.161 lat (msec) : >=2000=100.00%
00:18:14.161 cpu : usr=0.00%, sys=0.28%, ctx=80, majf=0, minf=12545
00:18:14.161 IO depths : 1=2.0%, 2=4.1%, 4=8.2%, 8=16.3%, 16=32.7%, 32=36.7%, >=64=0.0%
00:18:14.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.161 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:18:14.161 issued rwts: total=49,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.161 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.161 job1: (groupid=0, jobs=1): err= 0: pid=3052060: Wed May 15 11:41:44 2024
00:18:14.161 read: IOPS=2, BW=2225KiB/s (2279kB/s)(28.0MiB/12885msec)
00:18:14.161 slat (usec): min=708, max=2105.1k, avg=385305.31, stdev=807723.19
00:18:14.161 clat (msec): min=2095, max=12799, avg=9648.20, stdev=3656.06
00:18:14.161 lat (msec): min=4186, max=12883, avg=10033.50, stdev=3389.37
00:18:14.161 clat percentiles (msec):
00:18:14.161 | 1.00th=[ 2089], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6342],
00:18:14.161 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[12684],
00:18:14.161 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818],
00:18:14.161 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818],
00:18:14.161 | 99.99th=[12818]
00:18:14.161 lat (msec) : >=2000=100.00%
00:18:14.161 cpu : usr=0.01%, sys=0.18%, ctx=52, majf=0, minf=7169
00:18:14.161 IO depths : 1=3.6%, 2=7.1%, 4=14.3%, 8=28.6%, 16=46.4%, 32=0.0%, >=64=0.0%
00:18:14.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.161 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:18:14.161 issued rwts: total=28,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.161 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.161 job2: (groupid=0, jobs=1): err= 0: pid=3052061: Wed May 15 11:41:44 2024
00:18:14.161 read: IOPS=43, BW=43.5MiB/s (45.6MB/s)(474MiB/10907msec)
00:18:14.161 slat (usec): min=90, max=2182.9k, avg=22803.44, stdev=191952.41
00:18:14.161 clat (msec): min=95, max=9013, avg=2724.37, stdev=3611.79
00:18:14.161 lat (msec): min=254, max=9015, avg=2747.17, stdev=3619.09
00:18:14.161 clat percentiles (msec):
00:18:14.161 | 1.00th=[ 253], 5.00th=[ 268], 10.00th=[ 300], 20.00th=[ 321],
00:18:14.161 | 30.00th=[ 326], 40.00th=[ 405], 50.00th=[ 527], 60.00th=[ 802],
00:18:14.161 | 70.00th=[ 1183], 80.00th=[ 8792], 90.00th=[ 8926], 95.00th=[ 8926],
00:18:14.161 | 99.00th=[ 9060], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060],
00:18:14.161 | 99.99th=[ 9060]
00:18:14.161 bw ( KiB/s): min= 6144, max=397312, per=4.56%, avg=118005.67, stdev=152746.48, samples=6
00:18:14.161 iops : min= 6, max= 388, avg=115.00, stdev=149.12, samples=6
00:18:14.161 lat (msec) : 100=0.21%, 500=48.31%, 750=9.70%, 1000=5.70%, 2000=6.75%
00:18:14.161 lat (msec) : >=2000=29.32%
00:18:14.161 cpu : usr=0.00%, sys=1.05%, ctx=897, majf=0, minf=32769
00:18:14.161 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.4%, 32=6.8%, >=64=86.7%
00:18:14.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.161 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:18:14.161 issued rwts: total=474,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.161 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.161 job2: (groupid=0, jobs=1): err= 0: pid=3052062: Wed May 15 11:41:44 2024
00:18:14.161 read: IOPS=5, BW=5799KiB/s (5938kB/s)(73.0MiB/12890msec)
00:18:14.161 slat (usec): min=492, max=2146.2k, avg=147786.66, stdev=505080.98
00:18:14.161 clat (msec): min=2101, max=12880, avg=11508.19, stdev=2361.11
00:18:14.161 lat (msec): min=4190, max=12889, avg=11655.97, stdev=2085.70
00:18:14.161 clat percentiles (msec):
00:18:14.161 | 1.00th=[ 2106], 5.00th=[ 6275], 10.00th=[ 8490], 20.00th=[12147],
00:18:14.161 | 30.00th=[12281], 40.00th=[12281], 50.00th=[12416], 60.00th=[12550],
00:18:14.161 | 70.00th=[12550], 80.00th=[12684], 90.00th=[12684], 95.00th=[12818],
00:18:14.161 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818],
00:18:14.161 | 99.99th=[12818]
00:18:14.161 lat (msec) : >=2000=100.00%
00:18:14.161 cpu : usr=0.00%, sys=0.39%, ctx=188, majf=0, minf=18689
00:18:14.161 IO depths : 1=1.4%, 2=2.7%, 4=5.5%, 8=11.0%, 16=21.9%, 32=43.8%, >=64=13.7%
00:18:14.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.161 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:18:14.161 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.161 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.161 job2: (groupid=0, jobs=1): err= 0: pid=3052063: Wed May 15 11:41:44 2024
00:18:14.161 read: IOPS=85, BW=85.2MiB/s (89.4MB/s)(932MiB/10933msec)
00:18:14.161 slat (usec): min=44, max=2055.1k, avg=11625.66, stdev=120880.41
00:18:14.161 clat (msec): min=94, max=8715, avg=1427.58, stdev=1855.11
00:18:14.161 lat (msec): min=145, max=8717, avg=1439.20, stdev=1867.28
00:18:14.161 clat percentiles (msec):
00:18:14.161 | 1.00th=[ 144], 5.00th=[ 165], 10.00th=[ 205], 20.00th=[ 259],
00:18:14.161 | 30.00th=[ 321], 40.00th=[ 376], 50.00th=[ 430], 60.00th=[ 600],
00:18:14.161 | 70.00th=[ 818], 80.00th=[ 2769], 90.00th=[ 5403], 95.00th=[ 5671],
00:18:14.161 | 99.00th=[ 5873], 99.50th=[ 6477], 99.90th=[ 8658], 99.95th=[ 8658],
00:18:14.161 | 99.99th=[ 8658]
00:18:14.161 bw ( KiB/s): min= 6144, max=576641, per=6.36%, avg=164689.80, stdev=179275.61, samples=10
00:18:14.161 iops : min= 6, max= 563, avg=160.70, stdev=175.05, samples=10
00:18:14.161 lat (msec) : 100=0.11%, 250=15.67%, 500=36.80%, 750=17.17%, 1000=2.47%
00:18:14.161 lat (msec) : 2000=0.32%, >=2000=27.47%
00:18:14.161 cpu : usr=0.05%, sys=1.39%, ctx=1453, majf=0, minf=32769
00:18:14.161 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.4%, >=64=93.2%
00:18:14.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.161 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:18:14.161 issued rwts: total=932,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.161 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.161 job2: (groupid=0, jobs=1): err= 0: pid=3052064: Wed May 15 11:41:44 2024
00:18:14.161 read: IOPS=3, BW=3652KiB/s (3740kB/s)(46.0MiB/12898msec)
00:18:14.161 slat (usec): min=705, max=2217.3k, avg=234749.87, stdev=657851.36
00:18:14.161 clat (msec): min=2098, max=12896, avg=9834.82, stdev=3182.98
00:18:14.161 lat (msec): min=4166, max=12897, avg=10069.57, stdev=2992.23
00:18:14.161 clat percentiles (msec):
00:18:14.161 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6342],
00:18:14.161 | 30.00th=[ 8490], 40.00th=[10671], 50.00th=[10671], 60.00th=[10671],
00:18:14.161 | 70.00th=[12818], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953],
00:18:14.162 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953],
00:18:14.162 | 99.99th=[12953]
00:18:14.162 lat (msec) : >=2000=100.00%
00:18:14.162 cpu : usr=0.00%, sys=0.31%, ctx=51, majf=0, minf=11777
00:18:14.162 IO depths : 1=2.2%, 2=4.3%, 4=8.7%, 8=17.4%, 16=34.8%, 32=32.6%, >=64=0.0%
00:18:14.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.162 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:18:14.162 issued rwts: total=46,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.162 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.162 job2: (groupid=0, jobs=1): err= 0: pid=3052065: Wed May 15 11:41:44 2024
00:18:14.162 read: IOPS=2, BW=2537KiB/s (2597kB/s)(32.0MiB/12918msec)
00:18:14.162 slat (usec): min=921, max=2175.0k, avg=338378.34, stdev=764723.47
00:18:14.162 clat (msec): min=2089, max=12872, avg=7841.46, stdev=2808.55
00:18:14.162 lat (msec): min=4173, max=12917, avg=8179.84, stdev=2744.74
00:18:14.162 clat percentiles (msec):
00:18:14.162 | 1.00th=[ 2089], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6342],
00:18:14.162 | 30.00th=[ 6342], 40.00th=[ 6409], 50.00th=[ 8490], 60.00th=[ 8490],
00:18:14.162 | 70.00th=[ 8557], 80.00th=[10671], 90.00th=[10671], 95.00th=[12818],
00:18:14.162 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818],
00:18:14.162 | 99.99th=[12818]
00:18:14.162 lat (msec) : >=2000=100.00%
00:18:14.162 cpu : usr=0.00%, sys=0.19%, ctx=60, majf=0, minf=8193
00:18:14.162 IO depths : 1=3.1%, 2=6.2%, 4=12.5%, 8=25.0%, 16=50.0%, 32=3.1%, >=64=0.0%
00:18:14.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.162 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:18:14.162 issued rwts: total=32,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.162 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.162 job2: (groupid=0, jobs=1): err= 0: pid=3052066: Wed May 15 11:41:44 2024
00:18:14.162 read: IOPS=22, BW=22.6MiB/s (23.7MB/s)(291MiB/12902msec)
00:18:14.162 slat (usec): min=58, max=2095.8k, avg=37178.60, stdev=247229.22
00:18:14.162 clat (msec): min=387, max=9198, avg=2478.85, stdev=2386.85
00:18:14.162 lat (msec): min=389, max=9199, avg=2516.03, stdev=2426.86
00:18:14.162 clat percentiles (msec):
00:18:14.162 | 1.00th=[ 388], 5.00th=[ 388], 10.00th=[ 393], 20.00th=[ 393],
00:18:14.162 | 30.00th=[ 447], 40.00th=[ 567], 50.00th=[ 3239], 60.00th=[ 3339],
00:18:14.162 | 70.00th=[ 3440], 80.00th=[ 3540], 90.00th=[ 3641], 95.00th=[ 9194],
00:18:14.162 | 99.00th=[ 9194], 99.50th=[ 9194], 99.90th=[ 9194], 99.95th=[ 9194],
00:18:14.162 | 99.99th=[ 9194]
00:18:14.162 bw ( KiB/s): min= 2048, max=225280, per=3.95%, avg=102224.33, stdev=113361.06, samples=3
00:18:14.162 iops : min= 2, max= 220, avg=99.67, stdev=110.75, samples=3
00:18:14.162 lat (msec) : 500=37.80%, 750=8.93%, 2000=0.34%, >=2000=52.92%
00:18:14.162 cpu : usr=0.01%, sys=0.94%, ctx=269, majf=0, minf=32769
00:18:14.162 IO depths : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.7%, 16=5.5%, 32=11.0%, >=64=78.4%
00:18:14.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.162 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6%
00:18:14.162 issued rwts: total=291,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.162 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.162 job2: (groupid=0, jobs=1): err= 0: pid=3052067: Wed May 15 11:41:44 2024
00:18:14.162 read: IOPS=105, BW=105MiB/s (110MB/s)(1472MiB/13992msec)
00:18:14.162 slat (usec): min=41, max=2084.1k, avg=8052.83, stdev=92946.50
00:18:14.162 clat (msec): min=212, max=6599, avg=1011.07, stdev=1662.58
00:18:14.162 lat (msec): min=213, max=6614, avg=1019.12, stdev=1668.60
00:18:14.162 clat percentiles (msec):
00:18:14.162 | 1.00th=[ 230], 5.00th=[ 271], 10.00th=[ 279], 20.00th=[ 330],
00:18:14.162 | 30.00th=[ 368], 40.00th=[ 414], 50.00th=[ 447], 60.00th=[ 535],
00:18:14.162 | 70.00th=[ 609], 80.00th=[ 667], 90.00th=[ 2433], 95.00th=[ 6477],
00:18:14.162 | 99.00th=[ 6611], 99.50th=[ 6611], 99.90th=[ 6611], 99.95th=[ 6611],
00:18:14.162 | 99.99th=[ 6611]
00:18:14.162 bw ( KiB/s): min= 2052, max=474187, per=8.84%, avg=228857.17, stdev=142088.64, samples=12
00:18:14.162 iops : min= 2, max= 463, avg=223.42, stdev=138.76, samples=12
00:18:14.162 lat (msec) : 250=2.58%, 500=55.50%, 750=25.14%, 1000=6.11%, >=2000=10.67%
00:18:14.162 cpu : usr=0.04%, sys=1.39%, ctx=1330, majf=0, minf=32769
00:18:14.162 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.2%, >=64=95.7%
00:18:14.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.162 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:18:14.162 issued rwts: total=1472,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.162 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.162 job2: (groupid=0, jobs=1): err= 0: pid=3052068: Wed May 15 11:41:44 2024
00:18:14.162 read: IOPS=12, BW=12.6MiB/s (13.2MB/s)(162MiB/12906msec)
00:18:14.162 slat (usec): min=120, max=2146.5k, avg=66765.77, stdev=334610.15
00:18:14.162 clat (msec): min=874, max=10599, avg=8274.07, stdev=2969.67
00:18:14.162 lat (msec): min=880, max=10601, avg=8340.84, stdev=2904.44
00:18:14.162 clat percentiles (msec):
00:18:14.162 | 1.00th=[ 877], 5.00th=[ 2232], 10.00th=[ 2970], 20.00th=[ 4329],
00:18:14.162 | 30.00th=[ 6544], 40.00th=[ 9866], 50.00th=[10000], 60.00th=[10134],
00:18:14.162 | 70.00th=[10268], 80.00th=[10402], 90.00th=[10537], 95.00th=[10537],
00:18:14.162 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537],
00:18:14.162 | 99.99th=[10537]
00:18:14.162 bw ( KiB/s): min= 2048, max=24576, per=0.46%, avg=11943.83, stdev=9289.16, samples=6
00:18:14.162 iops : min= 2, max= 24, avg=11.50, stdev= 9.16, samples=6
00:18:14.162 lat (msec) : 1000=2.47%, >=2000=97.53%
00:18:14.162 cpu : usr=0.01%, sys=0.78%, ctx=148, majf=0, minf=32769
00:18:14.162 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=4.9%, 16=9.9%, 32=19.8%, >=64=61.1%
00:18:14.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.162 complete : 0=0.0%, 4=97.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.8%
00:18:14.162 issued rwts: total=162,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.162 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.162 job2: (groupid=0, jobs=1): err= 0: pid=3052069: Wed May 15 11:41:44 2024
00:18:14.162 read: IOPS=6, BW=6210KiB/s (6359kB/s)(66.0MiB/10883msec)
00:18:14.162 slat (usec): min=874, max=2063.3k, avg=163427.23, stdev=545605.05
00:18:14.162 clat (msec): min=95, max=10881, avg=7479.32, stdev=3432.16
00:18:14.162 lat (msec): min=2138, max=10882, avg=7642.74, stdev=3330.46
00:18:14.162 clat percentiles (msec):
00:18:14.162 | 1.00th=[ 96], 5.00th=[ 2165], 10.00th=[ 2198], 20.00th=[ 4329],
00:18:14.162 | 30.00th=[ 4396], 40.00th=[ 6544], 50.00th=[ 8658], 60.00th=[10671],
00:18:14.162 | 70.00th=[10805], 80.00th=[10805], 90.00th=[10939], 95.00th=[10939],
00:18:14.162 | 99.00th=[10939], 99.50th=[10939], 99.90th=[10939], 99.95th=[10939],
00:18:14.162 | 99.99th=[10939]
00:18:14.162 lat (msec) : 100=1.52%, >=2000=98.48%
00:18:14.162 cpu : usr=0.00%, sys=0.59%, ctx=76, majf=0, minf=16897
00:18:14.162 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.1%, 16=24.2%, 32=48.5%, >=64=4.5%
00:18:14.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.162 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:18:14.162 issued rwts: total=66,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.162 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.162 job2: (groupid=0, jobs=1): err= 0: pid=3052070: Wed May 15 11:41:44 2024
00:18:14.162 read: IOPS=5, BW=5298KiB/s (5425kB/s)(67.0MiB/12951msec)
00:18:14.162 slat (usec): min=702, max=2121.2k, avg=161853.86, stdev=538301.61
00:18:14.162 clat (msec): min=2106, max=12949, avg=11201.12, stdev=3098.63
00:18:14.162 lat (msec): min=4155, max=12950, avg=11362.98, stdev=2892.77
00:18:14.162 clat percentiles (msec):
00:18:14.162 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[10671],
00:18:14.162 | 30.00th=[12550], 40.00th=[12550], 50.00th=[12684], 60.00th=[12684],
00:18:14.162 | 70.00th=[12684], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953],
00:18:14.162 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953],
00:18:14.162 | 99.99th=[12953]
00:18:14.162 lat (msec) : >=2000=100.00%
00:18:14.162 cpu : usr=0.02%, sys=0.44%, ctx=111, majf=0, minf=17153
00:18:14.162 IO depths : 1=1.5%, 2=3.0%, 4=6.0%, 8=11.9%, 16=23.9%, 32=47.8%, >=64=6.0%
00:18:14.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.162 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:18:14.162 issued rwts: total=67,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.162 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.162 job2: (groupid=0, jobs=1): err= 0: pid=3052071: Wed May 15 11:41:44 2024
00:18:14.162 read: IOPS=4, BW=4109KiB/s (4207kB/s)(52.0MiB/12960msec)
00:18:14.162 slat (usec): min=946, max=2105.2k, avg=208816.55, stdev=613516.02
00:18:14.162 clat (msec): min=2100, max=12956, avg=10745.18, stdev=3455.77
00:18:14.162 lat (msec): min=4163, max=12959, avg=10954.00, stdev=3244.81
00:18:14.162 clat percentiles (msec):
00:18:14.162 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6342],
00:18:14.162 | 30.00th=[10671], 40.00th=[12818], 50.00th=[12818], 60.00th=[12818],
00:18:14.162 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953],
00:18:14.162 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953],
00:18:14.162 | 99.99th=[12953]
00:18:14.162 lat (msec) : >=2000=100.00%
00:18:14.162 cpu : usr=0.00%, sys=0.40%, ctx=81, majf=0, minf=13313
00:18:14.162 IO depths : 1=1.9%, 2=3.8%, 4=7.7%, 8=15.4%, 16=30.8%, 32=40.4%, >=64=0.0%
00:18:14.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.162 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:18:14.162 issued rwts: total=52,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.162 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.162 job2: (groupid=0, jobs=1): err= 0: pid=3052072: Wed May 15 11:41:44 2024
00:18:14.162 read: IOPS=43, BW=43.9MiB/s (46.0MB/s)(567MiB/12929msec)
00:18:14.162 slat (usec): min=47, max=2129.9k, avg=19090.82, stdev=177277.24
00:18:14.162 clat (msec): min=129, max=11601, avg=2800.39, stdev=4469.90
00:18:14.162 lat (msec): min=129, max=11601, avg=2819.49, stdev=4483.84
00:18:14.162 clat percentiles (msec):
00:18:14.162 | 1.00th=[ 130], 5.00th=[ 130], 10.00th=[ 131], 20.00th=[ 133],
00:18:14.162 | 30.00th=[ 163], 40.00th=[ 222], 50.00th=[ 334], 60.00th=[ 485],
00:18:14.163 | 70.00th=[ 667], 80.00th=[ 8490], 90.00th=[11610], 95.00th=[11610],
00:18:14.163 | 99.00th=[11610], 99.50th=[11610], 99.90th=[11610], 99.95th=[11610],
00:18:14.163 | 99.99th=[11610]
00:18:14.163 bw ( KiB/s): min= 2048, max=704512, per=4.97%, avg=128731.43, stdev=259462.09, samples=7
00:18:14.163 iops : min= 2, max= 688, avg=125.71, stdev=253.38, samples=7
00:18:14.163 lat (msec) : 250=43.92%, 500=16.93%, 750=12.35%, 1000=0.71%, >=2000=26.10%
00:18:14.163 cpu : usr=0.02%, sys=0.92%, ctx=787, majf=0, minf=32769
00:18:14.163 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.6%, >=64=88.9%
00:18:14.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.163 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:18:14.163 issued rwts: total=567,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.163 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.163 job2: (groupid=0, jobs=1): err= 0: pid=3052073: Wed May 15 11:41:44 2024
00:18:14.163 read: IOPS=38, BW=38.8MiB/s (40.6MB/s)(504MiB/13005msec)
00:18:14.163 slat (usec): min=57, max=2099.0k, avg=21624.08, stdev=177412.63
00:18:14.163 clat (msec): min=517, max=10711, avg=3170.76, stdev=2474.44
00:18:14.163 lat (msec): min=519, max=10711, avg=3192.38, stdev=2483.43
00:18:14.163 clat percentiles (msec):
00:18:14.163 | 1.00th=[ 518], 5.00th=[ 531], 10.00th=[ 567], 20.00th=[ 760],
00:18:14.163 | 30.00th=[ 1020], 40.00th=[ 1167], 50.00th=[ 2601], 60.00th=[ 4665],
00:18:14.163 | 70.00th=[ 4732], 80.00th=[ 6074], 90.00th=[ 6275], 95.00th=[ 6409],
00:18:14.163 | 99.00th=[ 8557], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671],
00:18:14.163 | 99.99th=[10671]
00:18:14.163 bw ( KiB/s): min= 2048, max=223232, per=3.72%, avg=96457.75, stdev=94721.66, samples=8
00:18:14.163 iops : min= 2, max= 218, avg=94.12, stdev=92.40, samples=8
00:18:14.163 lat (msec) : 750=19.64%, 1000=9.92%, 2000=18.25%, >=2000=52.18%
00:18:14.163 cpu : usr=0.02%, sys=1.11%, ctx=628, majf=0, minf=32769
00:18:14.163 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.3%, >=64=87.5%
00:18:14.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.163 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:18:14.163 issued rwts: total=504,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.163 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.163 job3: (groupid=0, jobs=1): err= 0: pid=3052074: Wed May 15 11:41:44 2024
00:18:14.163 read: IOPS=2, BW=2923KiB/s (2993kB/s)(31.0MiB/10861msec)
00:18:14.163 slat (usec): min=894, max=2188.6k, avg=346555.10, stdev=774885.60
00:18:14.163 clat (msec): min=116, max=10856, avg=5397.65, stdev=3332.74
00:18:14.163 lat (msec): min=2140, max=10860, avg=5744.21, stdev=3323.87
00:18:14.163 clat percentiles (msec):
00:18:14.163 | 1.00th=[ 117], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 2232],
00:18:14.163 | 30.00th=[ 2232], 40.00th=[ 4329], 50.00th=[ 4396], 60.00th=[ 4396],
00:18:14.163 | 70.00th=[ 6477], 80.00th=[ 8658], 90.00th=[10805], 95.00th=[10805],
00:18:14.163 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805],
00:18:14.163 | 99.99th=[10805]
00:18:14.163 lat (msec) : 250=3.23%, >=2000=96.77%
00:18:14.163 cpu : usr=0.00%, sys=0.23%, ctx=63, majf=0, minf=7937
00:18:14.163 IO depths : 1=3.2%, 2=6.5%, 4=12.9%, 8=25.8%, 16=51.6%, 32=0.0%, >=64=0.0%
00:18:14.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.163 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:18:14.163 issued rwts: total=31,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.163 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.163 job3: (groupid=0, jobs=1): err= 0: pid=3052075: Wed May 15 11:41:44 2024
00:18:14.163 read: IOPS=4, BW=4100KiB/s (4198kB/s)(52.0MiB/12988msec)
00:18:14.163 slat (usec): min=805, max=2096.8k, avg=209188.55, stdev=614610.42
00:18:14.163 clat (msec): min=2109, max=12986, avg=11057.17, stdev=3042.89
00:18:14.163 lat (msec): min=4161, max=12987, avg=11266.36, stdev=2778.09
00:18:14.163 clat percentiles (msec):
00:18:14.163 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 6342], 20.00th=[ 8490],
00:18:14.163 | 30.00th=[10671], 40.00th=[12818], 50.00th=[12953], 60.00th=[12953],
00:18:14.163 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953],
00:18:14.163 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953],
00:18:14.163 | 99.99th=[12953]
00:18:14.163 lat (msec) : >=2000=100.00%
00:18:14.163 cpu : usr=0.00%, sys=0.42%, ctx=92, majf=0, minf=13313
00:18:14.163 IO depths : 1=1.9%, 2=3.8%, 4=7.7%, 8=15.4%, 16=30.8%, 32=40.4%, >=64=0.0%
00:18:14.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.163 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:18:14.163 issued rwts: total=52,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.163 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.163 job3: (groupid=0, jobs=1): err= 0: pid=3052076: Wed May 15 11:41:44 2024
00:18:14.163 read: IOPS=4, BW=5002KiB/s (5122kB/s)(53.0MiB/10851msec)
00:18:14.163 slat (usec): min=896, max=2061.9k, avg=202262.96, stdev=595888.99
00:18:14.163 clat (msec): min=130, max=10837, avg=7880.61, stdev=3232.43
00:18:14.163 lat (msec): min=2138, max=10850, avg=8082.87, stdev=3069.44
00:18:14.163 clat percentiles (msec):
00:18:14.163 | 1.00th=[ 131], 5.00th=[ 2165], 10.00th=[ 2232], 20.00th=[ 4396],
00:18:14.163 | 30.00th=[ 6477], 40.00th=[ 8658], 50.00th=[ 8658], 60.00th=[10671],
00:18:14.163 | 70.00th=[10805], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805],
00:18:14.163 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805],
00:18:14.163 | 99.99th=[10805]
00:18:14.163 lat (msec) : 250=1.89%, >=2000=98.11%
00:18:14.163 cpu : usr=0.00%, sys=0.46%, ctx=67, majf=0, minf=13569
00:18:14.163 IO depths : 1=1.9%, 2=3.8%, 4=7.5%, 8=15.1%, 16=30.2%, 32=41.5%, >=64=0.0%
00:18:14.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.163 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:18:14.163 issued rwts: total=53,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.163 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.163 job3: (groupid=0, jobs=1): err= 0: pid=3052077: Wed May 15 11:41:44 2024
00:18:14.163 read: IOPS=11, BW=11.8MiB/s (12.4MB/s)(129MiB/10941msec)
00:18:14.163 slat (usec): min=405, max=2064.8k, avg=83903.73, stdev=367367.74
00:18:14.163 clat (msec): min=116, max=10935, avg=9254.33, stdev=2035.42
00:18:14.163 lat (msec): min=2140, max=10936, avg=9338.24, stdev=1872.34
00:18:14.163 clat percentiles (msec):
00:18:14.163 | 1.00th=[ 2140], 5.00th=[ 4329], 10.00th=[ 6544], 20.00th=[ 9329],
00:18:14.163 | 30.00th=[ 9329], 40.00th=[ 9463], 50.00th=[ 9597], 60.00th=[ 9597],
00:18:14.163 | 70.00th=[ 9731], 80.00th=[10805], 90.00th=[10939], 95.00th=[10939],
00:18:14.163 | 99.00th=[10939], 99.50th=[10939], 99.90th=[10939], 99.95th=[10939],
00:18:14.163 | 99.99th=[10939]
00:18:14.163 bw ( KiB/s): min= 2048, max= 2048, per=0.08%, avg=2048.00, stdev= 0.00, samples=1
00:18:14.163 iops : min= 2, max= 2, avg= 2.00, stdev= 0.00, samples=1
00:18:14.163 lat (msec) : 250=0.78%, >=2000=99.22%
00:18:14.163 cpu : usr=0.00%, sys=1.01%, ctx=150, majf=0, minf=32769
00:18:14.163 IO depths : 1=0.8%, 2=1.6%, 4=3.1%, 8=6.2%, 16=12.4%, 32=24.8%, >=64=51.2%
00:18:14.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.163 complete : 0=0.0%, 4=66.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=33.3%
00:18:14.163 issued rwts: total=129,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.163 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.163 job3: (groupid=0, jobs=1): err= 0: pid=3052078: Wed May 15 11:41:44 2024
00:18:14.163 read: IOPS=4, BW=4891KiB/s (5008kB/s)(52.0MiB/10888msec)
00:18:14.163 slat (usec): min=899, max=2069.2k, avg=206863.36, stdev=605181.57
00:18:14.163 clat (msec): min=130, max=10885, avg=7833.89, stdev=3471.09
00:18:14.163 lat (msec): min=2138, max=10887, avg=8040.75, stdev=3320.27
00:18:14.163 clat percentiles (msec):
00:18:14.163 | 1.00th=[ 131], 5.00th=[ 2165], 10.00th=[ 2232], 20.00th=[ 4329],
00:18:14.163 | 30.00th=[ 6477], 40.00th=[ 6544], 50.00th=[ 8658], 60.00th=[10805],
00:18:14.163 | 70.00th=[10805], 80.00th=[10939], 90.00th=[10939], 95.00th=[10939],
00:18:14.163 | 99.00th=[10939], 99.50th=[10939], 99.90th=[10939], 99.95th=[10939],
00:18:14.163 | 99.99th=[10939]
00:18:14.163 lat (msec) : 250=1.92%, >=2000=98.08%
00:18:14.163 cpu : usr=0.01%, sys=0.43%, ctx=74, majf=0, minf=13313
00:18:14.163 IO depths : 1=1.9%, 2=3.8%, 4=7.7%, 8=15.4%, 16=30.8%, 32=40.4%, >=64=0.0%
00:18:14.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.163 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:18:14.163 issued rwts: total=52,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.163 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.163 job3: (groupid=0, jobs=1): err= 0: pid=3052079: Wed May 15 11:41:44 2024
00:18:14.163 read: IOPS=2, BW=2072KiB/s (2121kB/s)(26.0MiB/12852msec)
00:18:14.163 slat (usec): min=496, max=2098.8k, avg=412652.47, stdev=823554.01
00:18:14.163 clat (msec): min=2122, max=12845, avg=9170.32, stdev=3678.48
00:18:14.163 lat (msec): min=4164, max=12851, avg=9582.97, stdev=3450.93
00:18:14.163 clat percentiles (msec):
00:18:14.163 | 1.00th=[ 2123], 5.00th=[ 4178], 10.00th=[ 4178], 20.00th=[ 4245],
00:18:14.163 | 30.00th=[ 6342], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[10671],
00:18:14.163 | 70.00th=[12684], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818],
00:18:14.163 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818],
00:18:14.163 | 99.99th=[12818]
00:18:14.163 lat (msec) : >=2000=100.00%
00:18:14.163 cpu : usr=0.00%, sys=0.15%, ctx=64, majf=0, minf=6657
00:18:14.163 IO depths : 1=3.8%, 2=7.7%, 4=15.4%, 8=30.8%, 16=42.3%, 32=0.0%, >=64=0.0%
00:18:14.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.164 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:18:14.164 issued rwts: total=26,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.164 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.164 job3: (groupid=0, jobs=1): err= 0: pid=3052080: Wed May 15 11:41:44 2024
00:18:14.164 read: IOPS=2, BW=2612KiB/s (2675kB/s)(33.0MiB/12936msec)
00:18:14.164 slat (usec): min=905, max=2069.3k, avg=327871.85, stdev=742244.76
00:18:14.164 clat (msec): min=2115, max=12932, avg=9181.90, stdev=3633.06
00:18:14.164 lat (msec): min=4184, max=12935, avg=9509.77, stdev=3459.51
00:18:14.164 clat percentiles (msec):
00:18:14.164 | 1.00th=[ 2123], 5.00th=[ 4178], 10.00th=[ 4245], 20.00th=[ 4245],
00:18:14.164 | 30.00th=[ 6342], 40.00th=[ 8490], 50.00th=[10671], 60.00th=[10671],
00:18:14.164 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12953], 95.00th=[12953],
00:18:14.164 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953],
00:18:14.164 | 99.99th=[12953]
00:18:14.164 lat (msec) : >=2000=100.00%
00:18:14.164 cpu : usr=0.00%, sys=0.26%, ctx=74, majf=0, minf=8449
00:18:14.164 IO depths : 1=3.0%, 2=6.1%, 4=12.1%, 8=24.2%, 16=48.5%, 32=6.1%, >=64=0.0%
00:18:14.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.164 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:18:14.164 issued rwts: total=33,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.164 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.164 job3: (groupid=0, jobs=1): err= 0: pid=3052081: Wed May 15 11:41:44 2024
00:18:14.164 read: IOPS=5, BW=5931KiB/s (6073kB/s)(63.0MiB/10878msec)
00:18:14.164 slat (usec): min=641, max=2101.3k, avg=171061.16, stdev=558617.59
00:18:14.164 clat (msec): min=100, max=10866, avg=7657.49, stdev=2663.10
00:18:14.164 lat (msec): min=2142, max=10877, avg=7828.55, stdev=2511.66
00:18:14.164 clat percentiles (msec):
00:18:14.164 | 1.00th=[ 101], 5.00th=[ 2198], 10.00th=[ 4329], 20.00th=[ 4396],
00:18:14.164 | 30.00th=[ 6477], 40.00th=[ 8490], 50.00th=[ 8557], 60.00th=[ 8658],
00:18:14.164 | 70.00th=[ 8658], 80.00th=[10671], 90.00th=[10805], 95.00th=[10805],
00:18:14.164 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805],
00:18:14.164 | 99.99th=[10805]
00:18:14.164 lat (msec) : 250=1.59%, >=2000=98.41%
00:18:14.164 cpu : usr=0.02%, sys=0.48%, ctx=68, majf=0, minf=16129
00:18:14.164 IO depths : 1=1.6%, 2=3.2%, 4=6.3%, 8=12.7%, 16=25.4%, 32=50.8%, >=64=0.0%
00:18:14.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.164 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:18:14.164 issued rwts: total=63,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.164 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.164 job3: (groupid=0, jobs=1): err= 0: pid=3052082: Wed May 15 11:41:44 2024
00:18:14.164 read: IOPS=7, BW=7936KiB/s (8126kB/s)(85.0MiB/10968msec)
00:18:14.164 slat (usec): min=545, max=2082.8k, avg=127842.16, stdev=482761.13
00:18:14.164 clat (msec): min=100, max=10965, avg=8472.53, stdev=3102.37
00:18:14.164 lat (msec): min=2142, max=10967, avg=8600.37, stdev=2974.52
00:18:14.164 clat percentiles (msec):
00:18:14.164 | 1.00th=[ 101], 5.00th=[ 2198], 10.00th=[ 4329], 20.00th=[ 4396],
00:18:14.164 | 30.00th=[ 6544], 40.00th=[ 8658], 50.00th=[10805], 60.00th=[10805],
00:18:14.164 | 70.00th=[10939], 80.00th=[10939], 90.00th=[10939], 95.00th=[10939],
00:18:14.164 | 99.00th=[10939], 99.50th=[10939], 99.90th=[10939], 99.95th=[10939],
00:18:14.164 | 99.99th=[10939]
00:18:14.164 lat (msec) : 250=1.18%, >=2000=98.82%
00:18:14.164 cpu : usr=0.00%, sys=0.72%, ctx=107, majf=0, minf=21761
00:18:14.164 IO depths : 1=1.2%, 2=2.4%, 4=4.7%, 8=9.4%, 16=18.8%, 32=37.6%, >=64=25.9%
00:18:14.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.164 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:18:14.164 issued rwts: total=85,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.164 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.164 job3: (groupid=0, jobs=1): err= 0: pid=3052083: Wed May 15 11:41:44 2024
00:18:14.164 read: IOPS=3, BW=3866KiB/s (3958kB/s)(49.0MiB/12980msec)
00:18:14.164 slat (usec): min=990, max=2146.1k, avg=221826.50, stdev=637918.46
00:18:14.164 clat (msec): min=2109, max=12977, avg=11136.67, stdev=3085.75
00:18:14.164 lat (msec): min=4161, max=12979, avg=11358.50, stdev=2800.85
00:18:14.164 clat percentiles (msec):
00:18:14.164 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 6342], 20.00th=[ 8490],
00:18:14.164 | 30.00th=[10671], 40.00th=[12818], 50.00th=[12953], 60.00th=[12953],
00:18:14.164 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953],
00:18:14.164 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953],
00:18:14.164 | 99.99th=[12953]
00:18:14.164 lat (msec) : >=2000=100.00%
00:18:14.164 cpu : usr=0.00%, sys=0.42%, ctx=83, majf=0, minf=12545
00:18:14.164 IO depths : 1=2.0%, 2=4.1%, 4=8.2%, 8=16.3%, 16=32.7%, 32=36.7%, >=64=0.0%
00:18:14.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.164 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:18:14.164 issued rwts: total=49,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.164 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.164 job3: (groupid=0, jobs=1): err= 0: pid=3052084: Wed May 15 11:41:44 2024
00:18:14.164 read: IOPS=7, BW=7723KiB/s (7909kB/s)(82.0MiB/10872msec)
00:18:14.164 slat (usec): min=522, max=2049.1k, avg=131030.15, stdev=486135.64
00:18:14.164 clat (msec): min=126, max=10870, avg=7883.22, stdev=3279.12
00:18:14.164 lat (msec): min=2144, max=10871, avg=8014.25, stdev=3178.49
00:18:14.164 clat percentiles (msec):
00:18:14.164 | 1.00th=[ 128], 5.00th=[ 2165], 10.00th=[ 2232], 20.00th=[ 4329],
00:18:14.164 | 30.00th=[ 6477], 40.00th=[ 6544], 50.00th=[ 8658], 60.00th=[10671],
00:18:14.164 | 70.00th=[10805], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805],
00:18:14.164 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805],
00:18:14.164 | 99.99th=[10805]
00:18:14.164 lat (msec) : 250=1.22%, >=2000=98.78%
00:18:14.164 cpu : usr=0.01%, sys=0.60%, ctx=83, majf=0, minf=20993
00:18:14.164 IO depths : 1=1.2%, 2=2.4%, 4=4.9%, 8=9.8%, 16=19.5%, 32=39.0%, >=64=23.2%
00:18:14.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.164 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:18:14.164 issued rwts: total=82,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.164 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.164 job3: (groupid=0, jobs=1): err= 0: pid=3052085: Wed May 15 11:41:44 2024
00:18:14.164 read: IOPS=1, BW=1666KiB/s (1706kB/s)(21.0MiB/12910msec)
00:18:14.164 slat (msec): min=9, max=2188, avg=513.97, stdev=901.05
00:18:14.164 clat (msec): min=2115, max=12864, avg=7493.52, stdev=2771.35
00:18:14.164 lat (msec): min=4192, max=12908, avg=8007.49, stdev=2724.56
00:18:14.164 clat percentiles (msec):
00:18:14.164 | 1.00th=[ 2123], 5.00th=[ 4178], 10.00th=[ 4245], 20.00th=[ 6275],
00:18:14.164 | 30.00th=[ 6342], 40.00th=[ 6342], 50.00th=[ 6409], 60.00th=[ 8490],
00:18:14.164 | 70.00th=[ 8490], 80.00th=[ 8557], 90.00th=[10671], 95.00th=[12818],
00:18:14.164 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818],
00:18:14.164 | 99.99th=[12818]
00:18:14.164 lat (msec) : >=2000=100.00%
00:18:14.164 cpu : usr=0.00%, sys=0.12%, ctx=61, majf=0, minf=5377
00:18:14.164 IO depths : 1=4.8%, 2=9.5%, 4=19.0%, 8=38.1%, 16=28.6%, 32=0.0%, >=64=0.0%
00:18:14.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.164 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:18:14.164 issued rwts: total=21,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.164 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.164 job3: (groupid=0, jobs=1): err= 0: pid=3052086: Wed May 15 11:41:44 2024
00:18:14.164 read: IOPS=14, BW=15.0MiB/s (15.7MB/s)(150MiB/10030msec)
00:18:14.164 slat (usec): min=62, max=2118.5k, avg=66679.67, stdev=346408.92
00:18:14.164 clat (msec): min=26, max=9971, avg=4722.46, stdev=4478.03
00:18:14.164 lat (msec): min=41, max=9975, avg=4789.14, stdev=4482.10
00:18:14.164 clat percentiles (msec):
00:18:14.164 | 1.00th=[ 42], 5.00th=[ 48], 10.00th=[ 55], 20.00th=[ 85],
00:18:14.164 | 30.00th=[ 112], 40.00th=[ 201], 50.00th=[ 2400], 60.00th=[ 8792],
00:18:14.164 | 70.00th=[ 9866], 80.00th=[10000], 90.00th=[10000], 95.00th=[10000],
00:18:14.164 | 99.00th=[10000], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000],
00:18:14.164 | 99.99th=[10000]
00:18:14.164 bw ( KiB/s): min=47104, max=47104, per=1.82%, avg=47104.00, stdev= 0.00, samples=1
00:18:14.164 iops : min= 46, max= 46, avg=46.00, stdev= 0.00, samples=1
00:18:14.164 lat (msec) : 50=6.00%, 100=15.33%, 250=20.00%, >=2000=58.67%
00:18:14.164 cpu : usr=0.01%, sys=1.15%, ctx=137, majf=0, minf=32769
00:18:14.164 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=5.3%, 16=10.7%, 32=21.3%, >=64=58.0%
00:18:14.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.164 complete : 0=0.0%, 4=95.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=4.2%
00:18:14.164 issued rwts: total=150,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.164 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.164 job4: (groupid=0, jobs=1): err= 0: pid=3052087: Wed May 15 11:41:44 2024
00:18:14.164 read: IOPS=137, BW=137MiB/s (144MB/s)(1500MiB/10923msec)
00:18:14.164 slat (usec): min=42, max=2066.7k, avg=7196.57, stdev=76589.29
00:18:14.164 clat (msec): min=96, max=5054, avg=894.74, stdev=1266.27
00:18:14.164 lat (msec): min=96, max=5055, avg=901.94, stdev=1270.03
00:18:14.164 clat percentiles (msec):
00:18:14.164 | 1.00th=[ 113], 5.00th=[ 114], 10.00th=[ 118], 20.00th=[ 120],
00:18:14.164 | 30.00th=[ 338], 40.00th=[ 481], 50.00th=[ 535], 60.00th=[ 634],
00:18:14.164 | 70.00th=[ 818], 80.00th=[ 953], 90.00th=[ 1150], 95.00th=[ 5000],
00:18:14.164 | 99.00th=[ 5067], 99.50th=[ 5067], 99.90th=[ 5067], 99.95th=[ 5067],
00:18:14.164 | 99.99th=[ 5067]
00:18:14.164 bw ( KiB/s): min= 6144, max=960512, per=8.34%, avg=216107.15, stdev=245264.38, samples=13
00:18:14.164 iops : min= 6, max= 938, avg=210.92, stdev=239.57, samples=13
00:18:14.164 lat (msec) : 100=0.27%, 250=25.80%, 500=18.67%, 750=22.40%, 1000=16.93%
00:18:14.164 lat (msec) : 2000=7.13%, >=2000=8.80%
00:18:14.164 cpu : usr=0.04%, sys=1.82%, ctx=1782, majf=0, minf=32769
00:18:14.164 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.1%, >=64=95.8%
00:18:14.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.164 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:18:14.164 issued rwts: total=1500,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.164 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.164 job4: (groupid=0, jobs=1): err= 0: pid=3052088: Wed May 15 11:41:44 2024
00:18:14.164 read: IOPS=33, BW=33.6MiB/s (35.2MB/s)(364MiB/10843msec)
00:18:14.164 slat (usec): min=52, max=2118.6k, avg=29444.23, stdev=193684.92
00:18:14.164 clat (msec): min=123, max=5819, avg=3549.04, stdev=1649.72
00:18:14.164 lat (msec): min=648, max=6462, avg=3578.48, stdev=1649.26
00:18:14.164 clat percentiles (msec):
00:18:14.164 | 1.00th=[ 651], 5.00th=[ 1351], 10.00th=[ 1401], 20.00th=[ 1519],
00:18:14.164 | 30.00th=[ 2123], 40.00th=[ 3675], 50.00th=[ 3876], 60.00th=[ 4044],
00:18:14.164 | 70.00th=[ 5000], 80.00th=[ 5269], 90.00th=[ 5671], 95.00th=[ 5738],
00:18:14.165 | 99.00th=[ 5805], 99.50th=[ 5805], 99.90th=[ 5805], 99.95th=[ 5805],
00:18:14.165 | 99.99th=[ 5805]
00:18:14.165 bw ( KiB/s): min= 6144, max=184320, per=2.33%, avg=60416.00, stdev=56755.84, samples=8
00:18:14.165 iops : min= 6, max= 180, avg=59.00, stdev=55.43, samples=8
00:18:14.165 lat (msec) : 250=0.27%, 750=4.12%, 2000=20.60%, >=2000=75.00%
00:18:14.165 cpu : usr=0.00%, sys=1.05%, ctx=681, majf=0, minf=32769
00:18:14.165 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.2%, 16=4.4%, 32=8.8%, >=64=82.7%
00:18:14.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.165 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:18:14.165 issued rwts: total=364,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.165 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.165 job4: (groupid=0, jobs=1): err= 0: pid=3052089: Wed May 15 11:41:44 2024
00:18:14.165 read: IOPS=58, BW=58.2MiB/s (61.0MB/s)(584MiB/10036msec)
00:18:14.165 slat (usec): min=46, max=2117.6k, avg=17122.12, stdev=155652.80
00:18:14.165 clat (msec): min=33, max=8178, avg=859.65, stdev=1606.33
00:18:14.165 lat (msec): min=50, max=8185, avg=876.77, stdev=1634.39
00:18:14.165 clat percentiles (msec):
00:18:14.165 | 1.00th=[ 54], 5.00th=[ 112], 10.00th=[ 199], 20.00th=[ 321],
00:18:14.165 | 30.00th=[ 397], 40.00th=[ 485], 50.00th=[ 567], 60.00th=[ 600],
00:18:14.165 | 70.00th=[ 625], 80.00th=[ 634], 90.00th=[ 651], 95.00th=[ 4933],
00:18:14.165 | 99.00th=[ 8154], 99.50th=[ 8154], 99.90th=[ 8154], 99.95th=[ 8154],
00:18:14.165 | 99.99th=[ 8154]
00:18:14.165 bw ( KiB/s): min=165888, max=221184, per=7.51%, avg=194560.00, stdev=27704.83, samples=3
00:18:14.165 iops : min= 162, max= 216, avg=190.00, stdev=27.06, samples=3
00:18:14.165 lat (msec) : 50=0.17%, 100=4.11%, 250=8.90%, 500=26.88%, 750=54.11%
00:18:14.165 lat (msec) : >=2000=5.82%
00:18:14.165 cpu : usr=0.00%, sys=1.61%, ctx=565, majf=0, minf=32769
00:18:14.165 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.5%, >=64=89.2%
00:18:14.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.165 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:18:14.165 issued rwts: total=584,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.165 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.165 job4: (groupid=0, jobs=1): err= 0: pid=3052090: Wed May 15 11:41:44 2024
00:18:14.165 read: IOPS=18, BW=18.4MiB/s (19.3MB/s)(201MiB/10924msec)
00:18:14.165 slat (usec): min=55, max=2042.5k, avg=53705.62, stdev=280060.15
00:18:14.165 clat (msec): min=127, max=10344, avg=6585.45, stdev=3500.38
00:18:14.165 lat (msec): min=1169, max=10348, avg=6639.15, stdev=3478.98
00:18:14.165 clat percentiles (msec):
00:18:14.165 | 1.00th=[ 1167], 5.00th=[ 1284], 10.00th=[ 1385], 20.00th=[ 2072],
00:18:14.165 | 30.00th=[ 4144], 40.00th=[ 6275], 50.00th=[ 8423], 60.00th=[ 8557],
00:18:14.165 | 70.00th=[ 9597], 80.00th=[10000], 90.00th=[10134], 95.00th=[10268],
00:18:14.165 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402],
00:18:14.165 | 99.99th=[10402]
00:18:14.165 bw ( KiB/s): min= 8192, max=55296, per=0.96%, avg=24911.17, stdev=18835.99, samples=6
00:18:14.165 iops : min= 8, max= 54, avg=24.17, stdev=18.47, samples=6
00:18:14.165 lat (msec) : 250=0.50%, 2000=15.42%, >=2000=84.08%
00:18:14.165 cpu : usr=0.01%, sys=0.86%, ctx=435, majf=0, minf=32769
00:18:14.165 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.0%, 16=8.0%, 32=15.9%, >=64=68.7%
00:18:14.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.165 complete : 0=0.0%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.3%
00:18:14.165 issued rwts: total=201,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.165 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.165 job4: (groupid=0, jobs=1): err= 0: pid=3052091: Wed May 15 11:41:44 2024
00:18:14.165 read: IOPS=328, BW=328MiB/s (344MB/s)(3928MiB/11960msec)
00:18:14.165 slat (usec): min=41, max=2006.4k, avg=3021.84, stdev=32568.51
00:18:14.165 clat (msec): min=76, max=2426, avg=372.00, stdev=382.85
00:18:14.165 lat (msec): min=103, max=2428, avg=375.03, stdev=384.24
00:18:14.165 clat percentiles (msec):
00:18:14.165 | 1.00th=[ 113], 5.00th=[ 114], 10.00th=[ 114], 20.00th=[ 134],
00:18:14.165 | 30.00th=[ 255], 40.00th=[ 268], 50.00th=[ 284], 60.00th=[ 330],
00:18:14.165 | 70.00th=[ 376], 80.00th=[ 456], 90.00th=[ 527], 95.00th=[ 760],
00:18:14.165 | 99.00th=[ 2333], 99.50th=[ 2366], 99.90th=[ 2433], 99.95th=[ 2433],
00:18:14.165 | 99.99th=[ 2433]
00:18:14.165 bw ( KiB/s): min=143360, max=993280, per=15.81%, avg=409502.37, stdev=223581.45, samples=19
00:18:14.165 iops : min= 140, max= 970, avg=399.84, stdev=218.37, samples=19
00:18:14.165 lat (msec) : 100=0.03%, 250=28.13%, 500=57.10%, 750=9.67%, 1000=1.83%
00:18:14.165 lat (msec) : >=2000=3.23%
00:18:14.165 cpu : usr=0.08%, sys=2.99%, ctx=4293, majf=0, minf=32769
00:18:14.165 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4%
00:18:14.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.165 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:18:14.165 issued rwts: total=3928,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.165 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.165 job4: (groupid=0, jobs=1): err= 0: pid=3052092: Wed May 15 11:41:44 2024
00:18:14.165 read: IOPS=40, BW=40.1MiB/s (42.0MB/s)(401MiB/10009msec)
00:18:14.165 slat (usec): min=47, max=2093.2k, avg=24936.51, stdev=203906.69
00:18:14.165 clat (msec): min=8, max=9187, avg=1276.33, stdev=2310.98
00:18:14.165 lat (msec): min=9, max=9188, avg=1301.27, stdev=2344.47
00:18:14.165 clat percentiles (msec):
00:18:14.165 | 1.00th=[ 12], 5.00th=[ 22], 10.00th=[ 37], 20.00th=[ 129],
00:18:14.165 | 30.00th=[ 253], 40.00th=[ 321], 50.00th=[ 418], 60.00th=[ 489],
00:18:14.165 | 70.00th=[ 542], 80.00th=[ 776], 90.00th=[ 5000], 95.00th=[ 7080],
00:18:14.165 | 99.00th=[ 9194], 99.50th=[ 9194], 99.90th=[ 9194], 99.95th=[ 9194],
00:18:14.165 | 99.99th=[ 9194]
00:18:14.165 bw ( KiB/s): min=161469, max=161469, per=6.23%, avg=161469.00, stdev= 0.00, samples=1
00:18:14.165 iops : min= 157, max= 157, avg=157.00, stdev= 0.00, samples=1
00:18:14.165 lat (msec) : 10=0.75%, 20=3.99%, 50=8.48%, 100=4.74%, 250=11.22%
00:18:14.165 lat (msec) : 500=34.16%, 750=15.71%, 1000=2.99%, >=2000=17.96%
00:18:14.165 cpu : usr=0.00%, sys=1.08%, ctx=405, majf=0, minf=32769
00:18:14.165 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.0%, 32=8.0%, >=64=84.3%
00:18:14.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.165 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:18:14.165 issued rwts: total=401,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.165 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.165 job4: (groupid=0, jobs=1): err= 0: pid=3052093: Wed May 15 11:41:44 2024
00:18:14.165 read: IOPS=46, BW=46.1MiB/s (48.4MB/s)(501MiB/10865msec)
00:18:14.165 slat (usec): min=57, max=2077.9k, avg=21431.91, stdev=182938.01
00:18:14.165 clat (msec): min=122, max=6954, avg=1506.14, stdev=1554.76
00:18:14.165 lat (msec): min=543, max=6957, avg=1527.57, stdev=1571.89
00:18:14.165 clat percentiles (msec):
00:18:14.165 | 1.00th=[ 542], 5.00th=[ 550], 10.00th=[ 558], 20.00th=[ 575],
00:18:14.165 | 30.00th=[ 600], 40.00th=[ 617], 50.00th=[ 625], 60.00th=[ 642],
00:18:14.165 | 70.00th=[ 2299], 80.00th=[ 2567], 90.00th=[ 2735], 95.00th=[ 4799],
00:18:14.165 | 99.00th=[ 6879], 99.50th=[ 6946], 99.90th=[ 6946], 99.95th=[ 6946],
00:18:14.165 | 99.99th=[ 6946]
00:18:14.165 bw ( KiB/s): min=51200, max=233472, per=5.90%, avg=152777.80, stdev=83813.87, samples=5
00:18:14.165 iops : min= 50, max= 228, avg=149.00, stdev=81.70, samples=5
00:18:14.165 lat (msec) : 250=0.20%, 750=65.67%, >=2000=34.13%
00:18:14.165 cpu : usr=0.03%, sys=1.58%, ctx=412, majf=0, minf=32769
00:18:14.165 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.4%, >=64=87.4%
00:18:14.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.165 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:18:14.165 issued rwts: total=501,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.165 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.165 job4: (groupid=0, jobs=1): err= 0: pid=3052094: Wed May 15 11:41:44 2024
00:18:14.165 read: IOPS=21, BW=21.3MiB/s (22.3MB/s)(252MiB/11829msec)
00:18:14.165 slat (usec): min=48, max=2781.2k, avg=46933.63, stdev=292499.39
00:18:14.165 clat (usec): min=297, max=8289.1k, avg=2317431.20, stdev=1678459.13
00:18:14.165 lat (msec): min=622, max=8356, avg=2364.36, stdev=1711.67
00:18:14.165 clat percentiles (msec):
00:18:14.165 | 1.00th=[ 625], 5.00th=[ 642], 10.00th=[ 651], 20.00th=[ 701],
00:18:14.165 | 30.00th=[ 768], 40.00th=[ 852], 50.00th=[ 2937], 60.00th=[ 3037],
00:18:14.165 | 70.00th=[ 3071], 80.00th=[ 3205], 90.00th=[ 3406], 95.00th=[ 5134],
00:18:14.165 | 99.00th=[ 8221], 99.50th=[ 8288], 99.90th=[ 8288], 99.95th=[ 8288],
00:18:14.165 | 99.99th=[ 8288]
00:18:14.165 bw ( KiB/s): min= 2048, max=190464, per=3.27%, avg=84650.67, stdev=96328.60, samples=3
00:18:14.165 iops : min= 2, max= 186, avg=82.67, stdev=94.07, samples=3
00:18:14.165 lat (usec) : 500=0.40%
00:18:14.165 lat (msec) : 750=27.78%, 1000=14.68%, >=2000=57.14%
00:18:14.165 cpu : usr=0.01%, sys=1.04%, ctx=284, majf=0, minf=32769
00:18:14.165 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.2%, 16=6.3%, 32=12.7%, >=64=75.0%
00:18:14.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.165 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8%
00:18:14.165 issued rwts: total=252,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.165 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.165 job4: (groupid=0, jobs=1): err= 0: pid=3052095: Wed May 15 11:41:44 2024
00:18:14.165 read: IOPS=25, BW=25.5MiB/s (26.8MB/s)(277MiB/10854msec)
00:18:14.165 slat (usec): min=52, max=2148.5k, avg=38730.96, stdev=246659.45
00:18:14.165 clat (msec): min=123, max=10042, avg=4761.50, stdev=4206.73
00:18:14.165 lat (msec): min=444, max=10043, avg=4800.23, stdev=4206.65
00:18:14.165 clat percentiles (msec):
00:18:14.165 | 1.00th=[ 443], 5.00th=[ 477], 10.00th=[ 510], 20.00th=[ 617],
00:18:14.165 | 30.00th=[ 869], 40.00th=[ 1116], 50.00th=[ 1435], 60.00th=[ 8792],
00:18:14.165 | 70.00th=[ 9060], 80.00th=[ 9463], 90.00th=[10000], 95.00th=[10000],
00:18:14.165 | 99.00th=[10000], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000],
00:18:14.165 | 99.99th=[10000]
00:18:14.165 bw ( KiB/s): min= 2048, max=124928, per=1.47%, avg=38124.50, stdev=44311.29, samples=8
00:18:14.165 iops : min= 2, max= 122, avg=37.12, stdev=43.17, samples=8
00:18:14.165 lat (msec) : 250=0.36%, 500=8.66%, 750=18.41%, 1000=7.94%, 2000=14.80%
00:18:14.165 lat (msec) : >=2000=49.82%
00:18:14.165 cpu : usr=0.00%, sys=1.12%, ctx=510, majf=0, minf=32769
00:18:14.165 IO depths : 1=0.4%, 2=0.7%, 4=1.4%, 8=2.9%, 16=5.8%, 32=11.6%, >=64=77.3%
00:18:14.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.165 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7%
00:18:14.165 issued rwts: total=277,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.165 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.165 job4: (groupid=0, jobs=1): err= 0: pid=3052096: Wed May 15 11:41:44 2024
00:18:14.166 read: IOPS=112, BW=113MiB/s (118MB/s)(1355MiB/11993msec)
00:18:14.166 slat (usec): min=42, max=2132.5k, avg=8795.29, stdev=98409.02
00:18:14.166 clat (msec): min=70, max=5087, avg=1097.50, stdev=1337.70
00:18:14.166 lat (msec): min=221, max=5089, avg=1106.29, stdev=1341.80
00:18:14.166 clat percentiles (msec):
00:18:14.166 | 1.00th=[ 241], 5.00th=[ 247], 10.00th=[ 251], 20.00th=[ 275],
00:18:14.166 | 30.00th=[ 309], 40.00th=[ 468], 50.00th=[ 550], 60.00th=[ 659],
00:18:14.166 | 70.00th=[ 902], 80.00th=[ 1083], 90.00th=[ 2702], 95.00th=[ 5000],
00:18:14.166 | 99.00th=[ 5067], 99.50th=[ 5067], 99.90th=[ 5067], 99.95th=[ 5067],
00:18:14.166 | 99.99th=[ 5067]
00:18:14.166 bw ( KiB/s): min= 4096, max=471983, per=7.47%, avg=193372.23, stdev=138168.13, samples=13
00:18:14.166 iops : min= 4, max= 460, avg=188.77, stdev=134.78, samples=13
00:18:14.166 lat (msec) : 100=0.07%, 250=8.71%, 500=34.02%, 750=20.81%, 1000=10.04%
00:18:14.166 lat (msec) : 2000=6.72%, >=2000=19.63%
00:18:14.166 cpu : usr=0.00%, sys=1.79%, ctx=1810, majf=0, minf=32769
00:18:14.166 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.4%
00:18:14.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.166 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:18:14.166 issued rwts: total=1355,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.166 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.166 job4: (groupid=0, jobs=1): err= 0: pid=3052097: Wed May 15 11:41:44 2024
00:18:14.166 read: IOPS=11, BW=11.0MiB/s (11.6MB/s)(120MiB/10867msec)
00:18:14.166 slat (usec): min=463, max=2046.2k, avg=89527.00, stdev=390145.78
00:18:14.166 clat (msec): min=122, max=10865, avg=4111.48, stdev=3533.69
00:18:14.166 lat (msec): min=1704, max=10866, avg=4201.01, stdev=3567.72
00:18:14.166 clat percentiles (msec):
00:18:14.166 | 1.00th=[ 1703], 5.00th=[ 1720], 10.00th=[ 1720], 20.00th=[ 1754],
00:18:14.166 | 30.00th=[ 1804], 40.00th=[ 1905], 50.00th=[ 1989], 60.00th=[ 2089],
00:18:14.166 | 70.00th=[ 4329], 80.00th=[ 8658], 90.00th=[10805], 95.00th=[10805],
00:18:14.166 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805],
00:18:14.166 | 99.99th=[10805]
00:18:14.166 lat (msec) : 250=0.83%, 2000=50.00%, >=2000=49.17%
00:18:14.166 cpu : usr=0.02%, sys=0.71%, ctx=243, majf=0, minf=30721
00:18:14.166 IO depths : 1=0.8%, 2=1.7%, 4=3.3%, 8=6.7%, 16=13.3%, 32=26.7%, >=64=47.5%
00:18:14.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:14.166 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:18:14.166 issued rwts: total=120,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:14.166 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:14.166 job4: (groupid=0, jobs=1): err= 0: pid=3052098: Wed May 15 11:41:44 2024
00:18:14.166 read: IOPS=32, BW=32.9MiB/s (34.5MB/s)(358MiB/10892msec)
00:18:14.166 slat (usec): min=48, max=2073.6k, avg=30072.84, stdev=203835.53
00:18:14.166 clat (msec): min=123, max=8984, avg=3626.52, stdev=3291.90
00:18:14.166 lat (msec): min=587, max=8989, avg=3656.60, stdev=3297.86
00:18:14.166 clat percentiles (msec):
00:18:14.166 | 1.00th=[ 592], 5.00th=[ 634], 10.00th=[ 693], 20.00th=[ 785],
00:18:14.166 | 30.00th=[ 986], 40.00th=[ 1250], 50.00th=[ 1905], 60.00th=[ 2265],
00:18:14.166 | 70.00th=[ 7215], 80.00th=[ 7349], 90.00th=[ 8658], 95.00th=[ 8792],
00:18:14.166 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926],
00:18:14.166 | 99.99th=[ 8926]
00:18:14.166 bw ( KiB/s): min= 2048, max=141312, per=2.27%, avg=58848.25, stdev=60284.53, samples=8 00:18:14.166 iops : min= 2, max= 138, avg=57.38, stdev=58.75, samples=8 00:18:14.166 lat (msec) : 250=0.28%, 750=15.92%, 1000=13.97%, 2000=23.74%, >=2000=46.09% 00:18:14.166 cpu : usr=0.01%, sys=1.10%, ctx=586, majf=0, minf=32769 00:18:14.166 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.2%, 16=4.5%, 32=8.9%, >=64=82.4% 00:18:14.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.166 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:18:14.166 issued rwts: total=358,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.166 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:14.166 job4: (groupid=0, jobs=1): err= 0: pid=3052099: Wed May 15 11:41:44 2024 00:18:14.166 read: IOPS=22, BW=22.8MiB/s (23.9MB/s)(249MiB/10904msec) 00:18:14.166 slat (usec): min=70, max=2112.4k, avg=43276.94, stdev=248854.12 00:18:14.166 clat (msec): min=126, max=9685, avg=5247.32, stdev=3329.31 00:18:14.166 lat (msec): min=1278, max=9690, avg=5290.60, stdev=3328.72 00:18:14.166 clat percentiles (msec): 00:18:14.166 | 1.00th=[ 1552], 5.00th=[ 1586], 10.00th=[ 1636], 20.00th=[ 1703], 00:18:14.166 | 30.00th=[ 1804], 40.00th=[ 2039], 50.00th=[ 6409], 60.00th=[ 8356], 00:18:14.166 | 70.00th=[ 8490], 80.00th=[ 8557], 90.00th=[ 8658], 95.00th=[ 8658], 00:18:14.166 | 99.00th=[ 9731], 99.50th=[ 9731], 99.90th=[ 9731], 99.95th=[ 9731], 00:18:14.166 | 99.99th=[ 9731] 00:18:14.166 bw ( KiB/s): min= 2043, max=176128, per=1.37%, avg=35402.14, stdev=64283.18, samples=7 00:18:14.166 iops : min= 1, max= 172, avg=34.43, stdev=62.86, samples=7 00:18:14.166 lat (msec) : 250=0.40%, 2000=38.55%, >=2000=61.04% 00:18:14.166 cpu : usr=0.00%, sys=0.78%, ctx=480, majf=0, minf=32769 00:18:14.166 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.2%, 16=6.4%, 32=12.9%, >=64=74.7% 00:18:14.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.166 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:18:14.166 issued rwts: total=249,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.166 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:14.166 job5: (groupid=0, jobs=1): err= 0: pid=3052100: Wed May 15 11:41:44 2024 00:18:14.166 read: IOPS=128, BW=129MiB/s (135MB/s)(1540MiB/11946msec) 00:18:14.166 slat (usec): min=50, max=2031.6k, avg=7704.87, stdev=80289.08 00:18:14.166 clat (msec): min=76, max=3826, avg=869.77, stdev=1006.65 00:18:14.166 lat (msec): min=138, max=3828, avg=877.48, stdev=1011.35 00:18:14.166 clat percentiles (msec): 00:18:14.166 | 1.00th=[ 140], 5.00th=[ 140], 10.00th=[ 140], 20.00th=[ 284], 00:18:14.166 | 30.00th=[ 355], 40.00th=[ 401], 50.00th=[ 460], 60.00th=[ 502], 00:18:14.166 | 70.00th=[ 558], 80.00th=[ 1070], 90.00th=[ 2299], 95.00th=[ 3641], 00:18:14.166 | 99.00th=[ 3775], 99.50th=[ 3809], 99.90th=[ 3842], 99.95th=[ 3842], 00:18:14.166 | 99.99th=[ 3842] 00:18:14.166 bw ( KiB/s): min= 2048, max=444416, per=7.57%, avg=196153.71, stdev=145567.23, samples=14 00:18:14.166 iops : min= 2, max= 434, avg=191.43, stdev=142.10, samples=14 00:18:14.166 lat (msec) : 100=0.06%, 250=16.17%, 500=43.38%, 750=13.64%, 1000=4.55% 00:18:14.166 lat (msec) : 2000=5.45%, >=2000=16.75% 00:18:14.166 cpu : usr=0.03%, sys=1.36%, ctx=2657, majf=0, minf=32769 00:18:14.166 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.1%, >=64=95.9% 00:18:14.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.166 complete 
: 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:14.166 issued rwts: total=1540,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.166 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:14.166 job5: (groupid=0, jobs=1): err= 0: pid=3052101: Wed May 15 11:41:44 2024 00:18:14.166 read: IOPS=61, BW=61.3MiB/s (64.3MB/s)(670MiB/10927msec) 00:18:14.166 slat (usec): min=56, max=2073.8k, avg=16114.34, stdev=134600.30 00:18:14.166 clat (msec): min=126, max=5428, avg=1991.30, stdev=1623.79 00:18:14.166 lat (msec): min=509, max=5448, avg=2007.41, stdev=1627.08 00:18:14.166 clat percentiles (msec): 00:18:14.166 | 1.00th=[ 527], 5.00th=[ 558], 10.00th=[ 584], 20.00th=[ 676], 00:18:14.166 | 30.00th=[ 760], 40.00th=[ 860], 50.00th=[ 1150], 60.00th=[ 1318], 00:18:14.166 | 70.00th=[ 2668], 80.00th=[ 3104], 90.00th=[ 5000], 95.00th=[ 5269], 00:18:14.166 | 99.00th=[ 5403], 99.50th=[ 5403], 99.90th=[ 5403], 99.95th=[ 5403], 00:18:14.166 | 99.99th=[ 5403] 00:18:14.166 bw ( KiB/s): min=28672, max=206435, per=4.29%, avg=110988.10, stdev=47490.06, samples=10 00:18:14.166 iops : min= 28, max= 201, avg=108.20, stdev=46.26, samples=10 00:18:14.166 lat (msec) : 250=0.15%, 500=0.15%, 750=27.31%, 1000=14.33%, 2000=18.06% 00:18:14.166 lat (msec) : >=2000=40.00% 00:18:14.166 cpu : usr=0.05%, sys=1.35%, ctx=934, majf=0, minf=32769 00:18:14.166 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.6% 00:18:14.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.166 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:18:14.166 issued rwts: total=670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.166 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:14.166 job5: (groupid=0, jobs=1): err= 0: pid=3052102: Wed May 15 11:41:44 2024 00:18:14.166 read: IOPS=136, BW=136MiB/s (143MB/s)(1621MiB/11887msec) 00:18:14.166 slat (usec): min=53, max=2017.9k, avg=7280.58, stdev=75162.88 00:18:14.166 clat (msec): min=78, max=2515, avg=772.87, stdev=698.50 00:18:14.166 lat (msec): min=131, max=2523, avg=780.15, stdev=702.06 00:18:14.166 clat percentiles (msec): 00:18:14.166 | 1.00th=[ 132], 5.00th=[ 133], 10.00th=[ 133], 20.00th=[ 134], 00:18:14.166 | 30.00th=[ 372], 40.00th=[ 468], 50.00th=[ 550], 60.00th=[ 600], 00:18:14.166 | 70.00th=[ 768], 80.00th=[ 1234], 90.00th=[ 2089], 95.00th=[ 2232], 00:18:14.166 | 99.00th=[ 2433], 99.50th=[ 2467], 99.90th=[ 2500], 99.95th=[ 2500], 00:18:14.166 | 99.99th=[ 2500] 00:18:14.166 bw ( KiB/s): min=10240, max=755712, per=7.66%, avg=198332.33, stdev=173287.89, samples=15 00:18:14.166 iops : min= 10, max= 738, avg=193.60, stdev=169.26, samples=15 00:18:14.166 lat (msec) : 100=0.06%, 250=29.18%, 500=13.63%, 750=25.85%, 1000=3.39% 00:18:14.166 lat (msec) : 2000=14.62%, >=2000=13.26% 00:18:14.166 cpu : usr=0.03%, sys=1.86%, ctx=1912, majf=0, minf=32769 00:18:14.166 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1% 00:18:14.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.166 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:14.166 issued rwts: total=1621,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.166 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:14.166 job5: (groupid=0, jobs=1): err= 0: pid=3052103: Wed May 15 11:41:44 2024 00:18:14.166 read: IOPS=138, BW=138MiB/s (145MB/s)(1779MiB/12851msec) 00:18:14.166 slat (usec): min=50, max=2043.2k, avg=6012.28, stdev=68963.00 00:18:14.166 clat 
(msec): min=218, max=4871, avg=761.78, stdev=1102.33 00:18:14.166 lat (msec): min=220, max=4875, avg=767.80, stdev=1107.01 00:18:14.166 clat percentiles (msec): 00:18:14.166 | 1.00th=[ 220], 5.00th=[ 222], 10.00th=[ 226], 20.00th=[ 239], 00:18:14.166 | 30.00th=[ 251], 40.00th=[ 414], 50.00th=[ 451], 60.00th=[ 502], 00:18:14.166 | 70.00th=[ 567], 80.00th=[ 625], 90.00th=[ 1083], 95.00th=[ 4396], 00:18:14.166 | 99.00th=[ 4799], 99.50th=[ 4799], 99.90th=[ 4866], 99.95th=[ 4866], 00:18:14.166 | 99.99th=[ 4866] 00:18:14.166 bw ( KiB/s): min= 2048, max=534528, per=9.33%, avg=241664.00, stdev=171893.16, samples=14 00:18:14.166 iops : min= 2, max= 522, avg=236.00, stdev=167.86, samples=14 00:18:14.166 lat (msec) : 250=30.07%, 500=29.57%, 750=25.91%, 1000=2.75%, 2000=3.49% 00:18:14.166 lat (msec) : >=2000=8.21% 00:18:14.166 cpu : usr=0.05%, sys=2.08%, ctx=1729, majf=0, minf=32769 00:18:14.166 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.8%, >=64=96.5% 00:18:14.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.167 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:14.167 issued rwts: total=1779,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.167 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:14.167 job5: (groupid=0, jobs=1): err= 0: pid=3052104: Wed May 15 11:41:44 2024 00:18:14.167 read: IOPS=138, BW=138MiB/s (145MB/s)(1382MiB/10013msec) 00:18:14.167 slat (usec): min=46, max=2046.2k, avg=7232.52, stdev=80540.81 00:18:14.167 clat (msec): min=12, max=5189, avg=526.40, stdev=608.72 00:18:14.167 lat (msec): min=12, max=5214, avg=533.63, stdev=627.68 00:18:14.167 clat percentiles (msec): 00:18:14.167 | 1.00th=[ 25], 5.00th=[ 74], 10.00th=[ 124], 20.00th=[ 131], 00:18:14.167 | 30.00th=[ 279], 40.00th=[ 321], 50.00th=[ 388], 60.00th=[ 418], 00:18:14.167 | 70.00th=[ 498], 80.00th=[ 550], 90.00th=[ 1368], 95.00th=[ 1838], 00:18:14.167 | 99.00th=[ 3104], 99.50th=[ 5134], 99.90th=[ 5201], 99.95th=[ 5201], 00:18:14.167 | 99.99th=[ 5201] 00:18:14.167 bw ( KiB/s): min=14336, max=439441, per=7.99%, avg=206977.89, stdev=145392.30, samples=9 00:18:14.167 iops : min= 14, max= 429, avg=202.11, stdev=141.96, samples=9 00:18:14.167 lat (msec) : 20=0.65%, 50=2.97%, 100=3.18%, 250=17.44%, 500=46.02% 00:18:14.167 lat (msec) : 750=16.86%, 1000=0.14%, 2000=11.65%, >=2000=1.09% 00:18:14.167 cpu : usr=0.04%, sys=1.65%, ctx=2123, majf=0, minf=32769 00:18:14.167 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.3%, >=64=95.4% 00:18:14.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.167 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:14.167 issued rwts: total=1382,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.167 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:14.167 job5: (groupid=0, jobs=1): err= 0: pid=3052105: Wed May 15 11:41:44 2024 00:18:14.167 read: IOPS=159, BW=159MiB/s (167MB/s)(1733MiB/10892msec) 00:18:14.167 slat (usec): min=43, max=2029.5k, avg=6204.59, stdev=69187.66 00:18:14.167 clat (msec): min=129, max=2740, avg=755.94, stdev=770.93 00:18:14.167 lat (msec): min=179, max=2745, avg=762.14, stdev=773.23 00:18:14.167 clat percentiles (msec): 00:18:14.167 | 1.00th=[ 184], 5.00th=[ 203], 10.00th=[ 249], 20.00th=[ 253], 00:18:14.167 | 30.00th=[ 259], 40.00th=[ 271], 50.00th=[ 284], 60.00th=[ 518], 00:18:14.167 | 70.00th=[ 667], 80.00th=[ 1687], 90.00th=[ 2022], 95.00th=[ 2601], 00:18:14.167 | 99.00th=[ 2702], 99.50th=[ 2735], 99.90th=[ 
2735], 99.95th=[ 2735], 00:18:14.167 | 99.99th=[ 2735] 00:18:14.167 bw ( KiB/s): min=10240, max=583680, per=9.76%, avg=252862.92, stdev=181051.93, samples=13 00:18:14.167 iops : min= 10, max= 570, avg=246.77, stdev=176.76, samples=13 00:18:14.167 lat (msec) : 250=10.33%, 500=47.61%, 750=17.48%, 1000=2.60%, 2000=9.17% 00:18:14.167 lat (msec) : >=2000=12.81% 00:18:14.167 cpu : usr=0.02%, sys=2.21%, ctx=1848, majf=0, minf=32769 00:18:14.167 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:18:14.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.167 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:14.167 issued rwts: total=1733,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.167 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:14.167 job5: (groupid=0, jobs=1): err= 0: pid=3052106: Wed May 15 11:41:44 2024 00:18:14.167 read: IOPS=48, BW=48.3MiB/s (50.6MB/s)(530MiB/10973msec) 00:18:14.167 slat (usec): min=47, max=2112.3k, avg=20456.77, stdev=175248.21 00:18:14.167 clat (msec): min=127, max=7039, avg=2560.08, stdev=2503.92 00:18:14.167 lat (msec): min=492, max=7043, avg=2580.53, stdev=2507.59 00:18:14.167 clat percentiles (msec): 00:18:14.167 | 1.00th=[ 493], 5.00th=[ 510], 10.00th=[ 527], 20.00th=[ 600], 00:18:14.167 | 30.00th=[ 676], 40.00th=[ 760], 50.00th=[ 885], 60.00th=[ 2265], 00:18:14.167 | 70.00th=[ 2534], 80.00th=[ 6946], 90.00th=[ 7013], 95.00th=[ 7013], 00:18:14.167 | 99.00th=[ 7013], 99.50th=[ 7013], 99.90th=[ 7013], 99.95th=[ 7013], 00:18:14.167 | 99.99th=[ 7013] 00:18:14.167 bw ( KiB/s): min= 2048, max=256000, per=3.97%, avg=102912.00, stdev=91430.56, samples=8 00:18:14.167 iops : min= 2, max= 250, avg=100.50, stdev=89.29, samples=8 00:18:14.167 lat (msec) : 250=0.19%, 500=2.45%, 750=36.98%, 1000=10.38%, 2000=0.94% 00:18:14.167 lat (msec) : >=2000=49.06% 00:18:14.167 cpu : usr=0.03%, sys=1.32%, ctx=525, majf=0, minf=32769 00:18:14.167 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.0%, 32=6.0%, >=64=88.1% 00:18:14.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.167 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:18:14.167 issued rwts: total=530,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.167 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:14.167 job5: (groupid=0, jobs=1): err= 0: pid=3052107: Wed May 15 11:41:44 2024 00:18:14.167 read: IOPS=91, BW=91.3MiB/s (95.7MB/s)(998MiB/10932msec) 00:18:14.167 slat (usec): min=40, max=2022.1k, avg=10821.66, stdev=88932.92 00:18:14.167 clat (msec): min=126, max=3264, avg=1214.81, stdev=1003.67 00:18:14.167 lat (msec): min=321, max=4465, avg=1225.64, stdev=1009.37 00:18:14.167 clat percentiles (msec): 00:18:14.167 | 1.00th=[ 321], 5.00th=[ 326], 10.00th=[ 334], 20.00th=[ 426], 00:18:14.167 | 30.00th=[ 575], 40.00th=[ 617], 50.00th=[ 693], 60.00th=[ 802], 00:18:14.167 | 70.00th=[ 1150], 80.00th=[ 2534], 90.00th=[ 2970], 95.00th=[ 3071], 00:18:14.167 | 99.00th=[ 3205], 99.50th=[ 3205], 99.90th=[ 3272], 99.95th=[ 3272], 00:18:14.167 | 99.99th=[ 3272] 00:18:14.167 bw ( KiB/s): min=22528, max=394004, per=5.73%, avg=148508.75, stdev=106878.19, samples=12 00:18:14.167 iops : min= 22, max= 384, avg=144.83, stdev=104.22, samples=12 00:18:14.167 lat (msec) : 250=0.10%, 500=22.44%, 750=32.26%, 1000=12.12%, 2000=4.81% 00:18:14.167 lat (msec) : >=2000=28.26% 00:18:14.167 cpu : usr=0.08%, sys=1.39%, ctx=1481, majf=0, minf=32769 00:18:14.167 IO depths : 1=0.1%, 2=0.2%, 
4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.7% 00:18:14.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.167 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:14.167 issued rwts: total=998,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.167 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:14.167 job5: (groupid=0, jobs=1): err= 0: pid=3052108: Wed May 15 11:41:44 2024 00:18:14.167 read: IOPS=103, BW=103MiB/s (108MB/s)(1134MiB/10961msec) 00:18:14.167 slat (usec): min=46, max=2040.9k, avg=9547.42, stdev=120024.83 00:18:14.167 clat (msec): min=130, max=7064, avg=683.51, stdev=1303.38 00:18:14.167 lat (msec): min=135, max=7081, avg=693.06, stdev=1317.89 00:18:14.167 clat percentiles (msec): 00:18:14.167 | 1.00th=[ 136], 5.00th=[ 136], 10.00th=[ 138], 20.00th=[ 138], 00:18:14.167 | 30.00th=[ 144], 40.00th=[ 192], 50.00th=[ 230], 60.00th=[ 262], 00:18:14.167 | 70.00th=[ 275], 80.00th=[ 464], 90.00th=[ 2333], 95.00th=[ 2400], 00:18:14.167 | 99.00th=[ 7080], 99.50th=[ 7080], 99.90th=[ 7080], 99.95th=[ 7080], 00:18:14.167 | 99.99th=[ 7080] 00:18:14.167 bw ( KiB/s): min=350208, max=776192, per=19.89%, avg=515072.00, stdev=201506.97, samples=4 00:18:14.167 iops : min= 342, max= 758, avg=503.00, stdev=196.78, samples=4 00:18:14.167 lat (msec) : 250=54.32%, 500=26.81%, 750=3.97%, >=2000=14.90% 00:18:14.167 cpu : usr=0.03%, sys=1.45%, ctx=1288, majf=0, minf=32769 00:18:14.167 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.4% 00:18:14.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.167 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:14.167 issued rwts: total=1134,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.167 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:14.167 job5: (groupid=0, jobs=1): err= 0: pid=3052109: Wed May 15 11:41:44 2024 00:18:14.167 read: IOPS=55, BW=55.8MiB/s (58.5MB/s)(662MiB/11866msec) 00:18:14.167 slat (usec): min=83, max=3089.9k, avg=17920.12, stdev=166918.32 00:18:14.167 clat (usec): min=962, max=6706.0k, avg=1880556.56, stdev=2021003.77 00:18:14.167 lat (msec): min=271, max=6716, avg=1898.48, stdev=2028.60 00:18:14.167 clat percentiles (msec): 00:18:14.167 | 1.00th=[ 271], 5.00th=[ 284], 10.00th=[ 305], 20.00th=[ 368], 00:18:14.167 | 30.00th=[ 451], 40.00th=[ 510], 50.00th=[ 558], 60.00th=[ 575], 00:18:14.167 | 70.00th=[ 3440], 80.00th=[ 3675], 90.00th=[ 5336], 95.00th=[ 5470], 00:18:14.167 | 99.00th=[ 6678], 99.50th=[ 6678], 99.90th=[ 6678], 99.95th=[ 6678], 00:18:14.167 | 99.99th=[ 6678] 00:18:14.167 bw ( KiB/s): min=55296, max=390387, per=8.43%, avg=218400.00, stdev=126271.46, samples=5 00:18:14.167 iops : min= 54, max= 381, avg=213.00, stdev=123.27, samples=5 00:18:14.167 lat (usec) : 1000=0.15% 00:18:14.167 lat (msec) : 500=38.67%, 750=25.08%, >=2000=36.10% 00:18:14.167 cpu : usr=0.03%, sys=1.03%, ctx=1709, majf=0, minf=32769 00:18:14.167 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.5% 00:18:14.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.168 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:18:14.168 issued rwts: total=662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.168 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:14.168 job5: (groupid=0, jobs=1): err= 0: pid=3052110: Wed May 15 11:41:44 2024 00:18:14.168 read: IOPS=43, BW=43.6MiB/s (45.7MB/s)(520MiB/11931msec) 00:18:14.168 slat 
(usec): min=59, max=2119.4k, avg=22787.34, stdev=177024.55 00:18:14.168 clat (msec): min=78, max=7610, avg=2611.14, stdev=2461.95 00:18:14.168 lat (msec): min=472, max=7628, avg=2633.93, stdev=2468.85 00:18:14.168 clat percentiles (msec): 00:18:14.168 | 1.00th=[ 472], 5.00th=[ 523], 10.00th=[ 550], 20.00th=[ 617], 00:18:14.168 | 30.00th=[ 667], 40.00th=[ 802], 50.00th=[ 1804], 60.00th=[ 2366], 00:18:14.168 | 70.00th=[ 2567], 80.00th=[ 6611], 90.00th=[ 6745], 95.00th=[ 6812], 00:18:14.168 | 99.00th=[ 7550], 99.50th=[ 7550], 99.90th=[ 7617], 99.95th=[ 7617], 00:18:14.168 | 99.99th=[ 7617] 00:18:14.168 bw ( KiB/s): min= 4096, max=243712, per=3.67%, avg=95127.62, stdev=88865.73, samples=8 00:18:14.168 iops : min= 4, max= 238, avg=92.88, stdev=86.78, samples=8 00:18:14.168 lat (msec) : 100=0.19%, 500=2.50%, 750=35.00%, 1000=10.38%, 2000=1.92% 00:18:14.168 lat (msec) : >=2000=50.00% 00:18:14.168 cpu : usr=0.02%, sys=0.79%, ctx=575, majf=0, minf=32769 00:18:14.168 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.1%, 32=6.2%, >=64=87.9% 00:18:14.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.168 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:18:14.168 issued rwts: total=520,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.168 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:14.168 job5: (groupid=0, jobs=1): err= 0: pid=3052111: Wed May 15 11:41:44 2024 00:18:14.168 read: IOPS=90, BW=90.5MiB/s (94.9MB/s)(906MiB/10015msec) 00:18:14.168 slat (usec): min=56, max=2186.3k, avg=11029.79, stdev=121553.36 00:18:14.168 clat (msec): min=14, max=6862, avg=1358.22, stdev=2107.09 00:18:14.168 lat (msec): min=29, max=6865, avg=1369.25, stdev=2114.58 00:18:14.168 clat percentiles (msec): 00:18:14.168 | 1.00th=[ 37], 5.00th=[ 102], 10.00th=[ 321], 20.00th=[ 414], 00:18:14.168 | 30.00th=[ 430], 40.00th=[ 468], 50.00th=[ 498], 60.00th=[ 535], 00:18:14.168 | 70.00th=[ 625], 80.00th=[ 676], 90.00th=[ 6812], 95.00th=[ 6879], 00:18:14.168 | 99.00th=[ 6879], 99.50th=[ 6879], 99.90th=[ 6879], 99.95th=[ 6879], 00:18:14.168 | 99.99th=[ 6879] 00:18:14.168 bw ( KiB/s): min=24576, max=303104, per=6.84%, avg=177265.78, stdev=105696.00, samples=9 00:18:14.168 iops : min= 24, max= 296, avg=173.11, stdev=103.22, samples=9 00:18:14.168 lat (msec) : 20=0.11%, 50=1.66%, 100=3.09%, 250=3.64%, 500=43.27% 00:18:14.168 lat (msec) : 750=31.57%, >=2000=16.67% 00:18:14.168 cpu : usr=0.10%, sys=2.31%, ctx=780, majf=0, minf=32769 00:18:14.168 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.5%, >=64=93.0% 00:18:14.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.168 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:14.168 issued rwts: total=906,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.168 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:14.168 job5: (groupid=0, jobs=1): err= 0: pid=3052112: Wed May 15 11:41:44 2024 00:18:14.168 read: IOPS=17, BW=17.2MiB/s (18.0MB/s)(188MiB/10947msec) 00:18:14.168 slat (usec): min=162, max=2059.6k, avg=57530.21, stdev=315273.88 00:18:14.168 clat (msec): min=130, max=10800, avg=7165.30, stdev=3107.70 00:18:14.168 lat (msec): min=2115, max=10828, avg=7222.83, stdev=3074.31 00:18:14.168 clat percentiles (msec): 00:18:14.168 | 1.00th=[ 2123], 5.00th=[ 2232], 10.00th=[ 2265], 20.00th=[ 4329], 00:18:14.168 | 30.00th=[ 4396], 40.00th=[ 6342], 50.00th=[ 8356], 60.00th=[ 8658], 00:18:14.168 | 70.00th=[10402], 80.00th=[10537], 90.00th=[10537], 
95.00th=[10537], 00:18:14.168 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:18:14.168 | 99.99th=[10805] 00:18:14.168 bw ( KiB/s): min= 2048, max=32768, per=0.95%, avg=24576.00, stdev=12871.48, samples=5 00:18:14.168 iops : min= 2, max= 32, avg=24.00, stdev=12.57, samples=5 00:18:14.168 lat (msec) : 250=0.53%, >=2000=99.47% 00:18:14.168 cpu : usr=0.02%, sys=1.11%, ctx=155, majf=0, minf=32769 00:18:14.168 IO depths : 1=0.5%, 2=1.1%, 4=2.1%, 8=4.3%, 16=8.5%, 32=17.0%, >=64=66.5% 00:18:14.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.168 complete : 0=0.0%, 4=98.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.6% 00:18:14.168 issued rwts: total=188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.168 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:14.168 00:18:14.168 Run status group 0 (all jobs): 00:18:14.168 READ: bw=2529MiB/s (2652MB/s), 1666KiB/s-328MiB/s (1706kB/s-344MB/s), io=34.8GiB (37.3GB), run=10009-14080msec 00:18:14.168 00:18:14.168 Disk stats (read/write): 00:18:14.168 nvme0n1: ios=13798/0, merge=0/0, ticks=10753434/0, in_queue=10753434, util=98.84% 00:18:14.168 nvme1n1: ios=36452/0, merge=0/0, ticks=11451246/0, in_queue=11451246, util=98.51% 00:18:14.168 nvme2n1: ios=37884/0, merge=0/0, ticks=10178365/0, in_queue=10178365, util=99.03% 00:18:14.168 nvme3n1: ios=6574/0, merge=0/0, ticks=10838475/0, in_queue=10838475, util=98.96% 00:18:14.168 nvme4n1: ios=80465/0, merge=0/0, ticks=9318511/0, in_queue=9318511, util=99.02% 00:18:14.168 nvme5n1: ios=109302/0, merge=0/0, ticks=11173653/0, in_queue=11173653, util=99.32% 00:18:14.168 11:41:44 -- target/srq_overwhelm.sh@38 -- # sync 00:18:14.168 11:41:44 -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:18:14.168 11:41:44 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:18:14.168 11:41:44 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:18:15.183 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.183 11:41:45 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:18:15.183 11:41:45 -- common/autotest_common.sh@1215 -- # local i=0 00:18:15.183 11:41:45 -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:15.183 11:41:45 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK00000000000000 00:18:15.183 11:41:45 -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:15.183 11:41:45 -- common/autotest_common.sh@1223 -- # grep -q -w SPDK00000000000000 00:18:15.183 11:41:45 -- common/autotest_common.sh@1227 -- # return 0 00:18:15.183 11:41:45 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:15.183 11:41:45 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.183 11:41:45 -- common/autotest_common.sh@10 -- # set +x 00:18:15.183 11:41:45 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.183 11:41:45 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:18:15.183 11:41:45 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:16.122 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:16.122 11:41:46 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:18:16.122 11:41:46 -- common/autotest_common.sh@1215 -- # local i=0 00:18:16.122 11:41:46 -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:16.122 11:41:46 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK00000000000001 00:18:16.122 11:41:46 -- 
common/autotest_common.sh@1223 -- # grep -q -w SPDK00000000000001 00:18:16.122 11:41:46 -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:16.122 11:41:46 -- common/autotest_common.sh@1227 -- # return 0 00:18:16.122 11:41:46 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:16.122 11:41:46 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.122 11:41:46 -- common/autotest_common.sh@10 -- # set +x 00:18:16.122 11:41:46 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.122 11:41:46 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:18:16.122 11:41:46 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:18:17.058 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:18:17.058 11:41:47 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:18:17.058 11:41:47 -- common/autotest_common.sh@1215 -- # local i=0 00:18:17.058 11:41:47 -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:17.058 11:41:47 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK00000000000002 00:18:17.316 11:41:47 -- common/autotest_common.sh@1223 -- # grep -q -w SPDK00000000000002 00:18:17.316 11:41:47 -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:17.316 11:41:47 -- common/autotest_common.sh@1227 -- # return 0 00:18:17.316 11:41:47 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:17.316 11:41:47 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.316 11:41:47 -- common/autotest_common.sh@10 -- # set +x 00:18:17.316 11:41:47 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.316 11:41:47 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:18:17.316 11:41:47 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:18:18.254 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:18:18.254 11:41:48 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:18:18.254 11:41:48 -- common/autotest_common.sh@1215 -- # local i=0 00:18:18.254 11:41:48 -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:18.254 11:41:48 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK00000000000003 00:18:18.254 11:41:48 -- common/autotest_common.sh@1223 -- # grep -q -w SPDK00000000000003 00:18:18.254 11:41:48 -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:18.254 11:41:48 -- common/autotest_common.sh@1227 -- # return 0 00:18:18.254 11:41:48 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:18:18.254 11:41:48 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.254 11:41:48 -- common/autotest_common.sh@10 -- # set +x 00:18:18.254 11:41:48 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.254 11:41:48 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:18:18.254 11:41:48 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:18:19.190 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:18:19.190 11:41:49 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:18:19.190 11:41:49 -- common/autotest_common.sh@1215 -- # local i=0 00:18:19.190 11:41:49 -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:19.190 11:41:49 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK00000000000004 00:18:19.190 11:41:49 -- common/autotest_common.sh@1223 -- # 
lsblk -l -o NAME,SERIAL 00:18:19.190 11:41:49 -- common/autotest_common.sh@1223 -- # grep -q -w SPDK00000000000004 00:18:19.190 11:41:49 -- common/autotest_common.sh@1227 -- # return 0 00:18:19.190 11:41:49 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:18:19.190 11:41:49 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.190 11:41:49 -- common/autotest_common.sh@10 -- # set +x 00:18:19.190 11:41:49 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.190 11:41:49 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:18:19.190 11:41:49 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:18:20.126 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:18:20.126 11:41:50 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:18:20.126 11:41:50 -- common/autotest_common.sh@1215 -- # local i=0 00:18:20.126 11:41:50 -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:20.126 11:41:50 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK00000000000005 00:18:20.126 11:41:50 -- common/autotest_common.sh@1223 -- # grep -q -w SPDK00000000000005 00:18:20.126 11:41:50 -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:20.126 11:41:50 -- common/autotest_common.sh@1227 -- # return 0 00:18:20.126 11:41:50 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:18:20.126 11:41:50 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.126 11:41:50 -- common/autotest_common.sh@10 -- # set +x 00:18:20.126 11:41:50 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.126 11:41:50 -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:18:20.126 11:41:50 -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:18:20.126 11:41:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:20.126 11:41:50 -- nvmf/common.sh@117 -- # sync 00:18:20.126 11:41:50 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:18:20.126 11:41:50 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:18:20.126 11:41:50 -- nvmf/common.sh@120 -- # set +e 00:18:20.127 11:41:50 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:20.127 11:41:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:18:20.127 rmmod nvme_rdma 00:18:20.127 rmmod nvme_fabrics 00:18:20.387 11:41:50 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:20.387 11:41:50 -- nvmf/common.sh@124 -- # set -e 00:18:20.387 11:41:50 -- nvmf/common.sh@125 -- # return 0 00:18:20.387 11:41:50 -- nvmf/common.sh@478 -- # '[' -n 3050882 ']' 00:18:20.387 11:41:50 -- nvmf/common.sh@479 -- # killprocess 3050882 00:18:20.387 11:41:50 -- common/autotest_common.sh@946 -- # '[' -z 3050882 ']' 00:18:20.387 11:41:50 -- common/autotest_common.sh@950 -- # kill -0 3050882 00:18:20.387 11:41:50 -- common/autotest_common.sh@951 -- # uname 00:18:20.387 11:41:50 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:20.387 11:41:50 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3050882 00:18:20.387 11:41:50 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:20.387 11:41:50 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:20.387 11:41:50 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3050882' 00:18:20.387 killing process with pid 3050882 00:18:20.387 11:41:50 -- common/autotest_common.sh@965 -- # kill 3050882 00:18:20.387 [2024-05-15 11:41:50.961314] app.c: 937:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:20.387 11:41:50 -- common/autotest_common.sh@970 -- # wait 3050882 00:18:20.387 [2024-05-15 11:41:51.013068] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:18:20.646 11:41:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:20.646 11:41:51 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:18:20.646 00:18:20.646 real 0m34.638s 00:18:20.646 user 1m58.307s 00:18:20.646 sys 0m15.242s 00:18:20.646 11:41:51 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:20.646 11:41:51 -- common/autotest_common.sh@10 -- # set +x 00:18:20.646 ************************************ 00:18:20.646 END TEST nvmf_srq_overwhelm 00:18:20.646 ************************************ 00:18:20.905 11:41:51 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:18:20.905 11:41:51 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:20.905 11:41:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:20.905 11:41:51 -- common/autotest_common.sh@10 -- # set +x 00:18:20.905 ************************************ 00:18:20.905 START TEST nvmf_shutdown 00:18:20.905 ************************************ 00:18:20.905 11:41:51 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:18:20.905 * Looking for test storage... 00:18:20.905 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:20.905 11:41:51 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:20.905 11:41:51 -- nvmf/common.sh@7 -- # uname -s 00:18:20.905 11:41:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:20.905 11:41:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:20.905 11:41:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:20.905 11:41:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:20.905 11:41:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:20.905 11:41:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:20.905 11:41:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:20.905 11:41:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:20.905 11:41:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:20.905 11:41:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:20.905 11:41:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:18:20.905 11:41:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:18:20.905 11:41:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:20.905 11:41:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:20.905 11:41:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:20.905 11:41:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:20.905 11:41:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:20.905 11:41:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:20.905 11:41:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:20.905 11:41:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:20.906 11:41:51 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.906 11:41:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.906 11:41:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.906 11:41:51 -- paths/export.sh@5 -- # export PATH 00:18:20.906 11:41:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.906 11:41:51 -- nvmf/common.sh@47 -- # : 0 00:18:20.906 11:41:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:20.906 11:41:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:20.906 11:41:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:20.906 11:41:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:20.906 11:41:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:20.906 11:41:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:20.906 11:41:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:20.906 11:41:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:20.906 11:41:51 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:20.906 11:41:51 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:20.906 11:41:51 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:18:20.906 11:41:51 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:18:20.906 11:41:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:20.906 11:41:51 -- common/autotest_common.sh@10 -- # set +x 00:18:20.906 ************************************ 00:18:20.906 START TEST nvmf_shutdown_tc1 00:18:20.906 ************************************ 
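For reference, the nvmftestinit bring-up traced just below (the module loads at nvmf/common.sh@62-68 and the address assignment behind allocate_nic_ips) can be approximated by hand. A minimal sketch, assuming the mlx5 netdev names (mlx_0_0/mlx_0_1) and the 192.168.100.0/24 test subnet that this run reports; the ip commands are an assumed inverse of the "ip addr show" output seen further down, not commands the test itself echoes:

# Load the kernel RDMA stack (the same modules the trace below loads one by one).
modprobe ib_cm
modprobe ib_core
modprobe ib_umad
modprobe ib_uverbs
modprobe iw_cm
modprobe rdma_cm
modprobe rdma_ucm
# Give each mlx5 port its test address (assumption: mirrors the addresses
# that "ip addr show mlx_0_0" / "ip addr show mlx_0_1" print later in the trace).
ip addr add 192.168.100.8/24 dev mlx_0_0
ip addr add 192.168.100.9/24 dev mlx_0_1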
00:18:20.906 11:41:51 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:18:20.906 11:41:51 -- target/shutdown.sh@74 -- # starttarget 00:18:20.906 11:41:51 -- target/shutdown.sh@15 -- # nvmftestinit 00:18:20.906 11:41:51 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:18:20.906 11:41:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:20.906 11:41:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:20.906 11:41:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:20.906 11:41:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:20.906 11:41:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.906 11:41:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:20.906 11:41:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:20.906 11:41:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:20.906 11:41:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:20.906 11:41:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:20.906 11:41:51 -- common/autotest_common.sh@10 -- # set +x 00:18:27.478 11:41:57 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:27.478 11:41:57 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:27.478 11:41:57 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:27.478 11:41:57 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:27.478 11:41:57 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:27.478 11:41:57 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:27.478 11:41:57 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:27.478 11:41:57 -- nvmf/common.sh@295 -- # net_devs=() 00:18:27.478 11:41:57 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:27.478 11:41:57 -- nvmf/common.sh@296 -- # e810=() 00:18:27.478 11:41:57 -- nvmf/common.sh@296 -- # local -ga e810 00:18:27.478 11:41:57 -- nvmf/common.sh@297 -- # x722=() 00:18:27.478 11:41:57 -- nvmf/common.sh@297 -- # local -ga x722 00:18:27.478 11:41:57 -- nvmf/common.sh@298 -- # mlx=() 00:18:27.478 11:41:57 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:27.478 11:41:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:27.478 11:41:57 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:27.478 11:41:57 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:27.478 11:41:57 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:27.478 11:41:57 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:27.478 11:41:57 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:27.478 11:41:57 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:27.478 11:41:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:27.478 11:41:57 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:27.478 11:41:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:27.478 11:41:57 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:27.478 11:41:57 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:27.478 11:41:57 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:18:27.478 11:41:57 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:18:27.478 11:41:57 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:18:27.478 11:41:57 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:18:27.478 11:41:57 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:18:27.478 11:41:57 -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:27.478 11:41:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:27.478 11:41:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:18:27.478 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:18:27.478 11:41:57 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:27.478 11:41:57 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:27.478 11:41:57 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:27.478 11:41:57 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:27.478 11:41:57 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:27.478 11:41:57 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:27.478 11:41:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:27.478 11:41:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:18:27.478 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:18:27.479 11:41:57 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:27.479 11:41:57 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:27.479 11:41:57 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:27.479 11:41:57 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:27.479 11:41:57 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:27.479 11:41:57 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:27.479 11:41:57 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:27.479 11:41:57 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:18:27.479 11:41:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:27.479 11:41:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.479 11:41:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:27.479 11:41:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.479 11:41:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:18:27.479 Found net devices under 0000:18:00.0: mlx_0_0 00:18:27.479 11:41:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:27.479 11:41:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:27.479 11:41:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.479 11:41:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:27.479 11:41:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.479 11:41:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:18:27.479 Found net devices under 0000:18:00.1: mlx_0_1 00:18:27.479 11:41:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:27.479 11:41:57 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:27.479 11:41:57 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:27.479 11:41:57 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:27.479 11:41:57 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:18:27.479 11:41:57 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:18:27.479 11:41:57 -- nvmf/common.sh@409 -- # rdma_device_init 00:18:27.479 11:41:57 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:18:27.479 11:41:57 -- nvmf/common.sh@58 -- # uname 00:18:27.479 11:41:57 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:18:27.479 11:41:57 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:18:27.479 11:41:57 -- nvmf/common.sh@63 -- # modprobe ib_core 00:18:27.479 11:41:57 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:18:27.479 11:41:57 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:18:27.479 11:41:57 -- nvmf/common.sh@66 -- # 
modprobe iw_cm 00:18:27.479 11:41:57 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:18:27.479 11:41:57 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:18:27.479 11:41:57 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:18:27.479 11:41:57 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:27.479 11:41:57 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:18:27.479 11:41:57 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:27.479 11:41:57 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:27.479 11:41:57 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:27.479 11:41:57 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:27.479 11:41:57 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:27.479 11:41:57 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:27.479 11:41:57 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:27.479 11:41:57 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:27.479 11:41:57 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:27.479 11:41:57 -- nvmf/common.sh@105 -- # continue 2 00:18:27.479 11:41:57 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:27.479 11:41:57 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:27.479 11:41:57 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:27.479 11:41:57 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:27.479 11:41:57 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:27.479 11:41:57 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:27.479 11:41:57 -- nvmf/common.sh@105 -- # continue 2 00:18:27.479 11:41:57 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:27.479 11:41:57 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:18:27.479 11:41:57 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:27.479 11:41:57 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:27.479 11:41:57 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:27.479 11:41:57 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:27.479 11:41:57 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:18:27.479 11:41:57 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:18:27.479 11:41:57 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:18:27.479 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:27.479 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:18:27.479 altname enp24s0f0np0 00:18:27.479 altname ens785f0np0 00:18:27.479 inet 192.168.100.8/24 scope global mlx_0_0 00:18:27.479 valid_lft forever preferred_lft forever 00:18:27.479 11:41:57 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:27.479 11:41:57 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:18:27.479 11:41:57 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:27.479 11:41:57 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:27.479 11:41:57 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:27.479 11:41:57 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:27.479 11:41:57 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:18:27.479 11:41:57 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:18:27.479 11:41:57 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:18:27.479 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:27.479 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:18:27.479 altname enp24s0f1np1 00:18:27.479 altname ens785f1np1 00:18:27.479 inet 192.168.100.9/24 scope global mlx_0_1 00:18:27.479 valid_lft 
forever preferred_lft forever 00:18:27.479 11:41:57 -- nvmf/common.sh@411 -- # return 0 00:18:27.479 11:41:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:27.479 11:41:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:27.479 11:41:57 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:18:27.479 11:41:57 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:18:27.479 11:41:57 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:18:27.479 11:41:57 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:27.479 11:41:57 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:27.479 11:41:57 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:27.479 11:41:57 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:27.479 11:41:57 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:27.479 11:41:57 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:27.479 11:41:57 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:27.479 11:41:57 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:27.479 11:41:57 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:27.479 11:41:57 -- nvmf/common.sh@105 -- # continue 2 00:18:27.479 11:41:57 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:27.479 11:41:57 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:27.479 11:41:57 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:27.479 11:41:57 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:27.479 11:41:57 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:27.479 11:41:57 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:27.479 11:41:57 -- nvmf/common.sh@105 -- # continue 2 00:18:27.479 11:41:57 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:27.479 11:41:57 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:18:27.479 11:41:57 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:27.479 11:41:57 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:27.479 11:41:57 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:27.479 11:41:57 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:27.479 11:41:57 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:27.479 11:41:57 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:18:27.479 11:41:57 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:27.479 11:41:57 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:27.479 11:41:57 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:27.479 11:41:57 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:27.479 11:41:57 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:18:27.479 192.168.100.9' 00:18:27.479 11:41:57 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:27.479 192.168.100.9' 00:18:27.479 11:41:57 -- nvmf/common.sh@446 -- # head -n 1 00:18:27.479 11:41:57 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:27.479 11:41:57 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:18:27.479 192.168.100.9' 00:18:27.479 11:41:57 -- nvmf/common.sh@447 -- # head -n 1 00:18:27.479 11:41:57 -- nvmf/common.sh@447 -- # tail -n +2 00:18:27.479 11:41:57 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:27.479 11:41:57 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:18:27.479 11:41:57 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:27.479 11:41:57 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:18:27.479 11:41:57 -- 
nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:18:27.479 11:41:57 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:18:27.479 11:41:57 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:18:27.479 11:41:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:27.479 11:41:57 -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:27.480 11:41:57 -- common/autotest_common.sh@10 -- # set +x 00:18:27.480 11:41:57 -- nvmf/common.sh@470 -- # nvmfpid=3057644 00:18:27.480 11:41:57 -- nvmf/common.sh@471 -- # waitforlisten 3057644 00:18:27.480 11:41:57 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:18:27.480 11:41:57 -- common/autotest_common.sh@827 -- # '[' -z 3057644 ']' 00:18:27.480 11:41:57 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.480 11:41:57 -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:27.480 11:41:57 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.480 11:41:57 -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:27.480 11:41:57 -- common/autotest_common.sh@10 -- # set +x 00:18:27.480 [2024-05-15 11:41:57.789906] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:18:27.480 [2024-05-15 11:41:57.789972] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.480 EAL: No free 2048 kB hugepages reported on node 1 00:18:27.480 [2024-05-15 11:41:57.862539] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:27.480 [2024-05-15 11:41:57.950775] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:27.480 [2024-05-15 11:41:57.950820] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:27.480 [2024-05-15 11:41:57.950829] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:27.480 [2024-05-15 11:41:57.950838] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:27.480 [2024-05-15 11:41:57.950845] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
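The allocate_nic_ips trace earlier in this block (nvmf/common.sh@72-113) reduces to one small pipeline per RDMA interface. A condensed sketch of what the traced helpers do, with the interface names hard-coded here for illustration (the real get_rdma_if_list derives them via rxe_cfg):

get_ip_address() {
    local interface=$1
    # "ip -o" prints one record per line; field 4 is "ADDR/PREFIX",
    # and cut strips the prefix length to leave the bare address.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

for nic_name in mlx_0_0 mlx_0_1; do   # stand-in for get_rdma_if_list
    ip=$(get_ip_address "$nic_name")
    [[ -z $ip ]] && exit 1            # the [[ -z ... ]] guard in the trace
    echo "$nic_name -> $ip"           # yields 192.168.100.8 and .9 above
done

This is what ultimately populates NVMF_FIRST_TARGET_IP (192.168.100.8) and NVMF_SECOND_TARGET_IP (192.168.100.9) via the head -n 1 / tail -n +2 split of RDMA_IP_LIST seen in the trace.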
00:18:27.480 [2024-05-15 11:41:57.950948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:27.480 [2024-05-15 11:41:57.951024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:27.480 [2024-05-15 11:41:57.951132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.480 [2024-05-15 11:41:57.951132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:28.048 11:41:58 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:28.048 11:41:58 -- common/autotest_common.sh@860 -- # return 0 00:18:28.048 11:41:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:28.048 11:41:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:28.048 11:41:58 -- common/autotest_common.sh@10 -- # set +x 00:18:28.048 11:41:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.048 11:41:58 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:28.048 11:41:58 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.048 11:41:58 -- common/autotest_common.sh@10 -- # set +x 00:18:28.048 [2024-05-15 11:41:58.681990] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6e41f0/0x6e86e0) succeed. 00:18:28.048 [2024-05-15 11:41:58.692752] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6e5830/0x729d70) succeed. 00:18:28.307 11:41:58 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.307 11:41:58 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:18:28.307 11:41:58 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:18:28.307 11:41:58 -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:28.307 11:41:58 -- common/autotest_common.sh@10 -- # set +x 00:18:28.307 11:41:58 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:28.307 11:41:58 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:28.307 11:41:58 -- target/shutdown.sh@28 -- # cat 00:18:28.307 11:41:58 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:28.307 11:41:58 -- target/shutdown.sh@28 -- # cat 00:18:28.307 11:41:58 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:28.307 11:41:58 -- target/shutdown.sh@28 -- # cat 00:18:28.307 11:41:58 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:28.307 11:41:58 -- target/shutdown.sh@28 -- # cat 00:18:28.307 11:41:58 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:28.307 11:41:58 -- target/shutdown.sh@28 -- # cat 00:18:28.307 11:41:58 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:28.307 11:41:58 -- target/shutdown.sh@28 -- # cat 00:18:28.307 11:41:58 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:28.307 11:41:58 -- target/shutdown.sh@28 -- # cat 00:18:28.307 11:41:58 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:28.307 11:41:58 -- target/shutdown.sh@28 -- # cat 00:18:28.307 11:41:58 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:28.307 11:41:58 -- target/shutdown.sh@28 -- # cat 00:18:28.307 11:41:58 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:28.307 11:41:58 -- target/shutdown.sh@28 -- # cat 00:18:28.307 11:41:58 -- target/shutdown.sh@35 -- # rpc_cmd 00:18:28.307 11:41:58 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.308 11:41:58 -- common/autotest_common.sh@10 -- # set +x 
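The shutdown.sh@22-35 records above show the subsystem setup pattern: the loop appends one group of RPC lines per index into rpcs.txt (the ten "# cat" steps), then a single argument-less rpc_cmd replays the whole file over stdin; the Malloc1..Malloc10 lines that follow are the bdevs created by that batch. A minimal sketch of the pattern, assuming the standard SPDK RPC names and an illustrative 64 MiB / 512 B malloc geometry (the actual RPC lines and sizes are not visible in this trace; echo stands in for the heredoc cat to keep the sketch indentation-safe):

rm -rf rpcs.txt
for i in {1..10}; do
    {
        echo "bdev_malloc_create -b Malloc$i 64 512"
        echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
        echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
        echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420"
    } >> rpcs.txt
done
./scripts/rpc.py < rpcs.txt   # one rpc.py process, all RPCs batched

Batching through a file keeps the setup to a single rpc.py invocation instead of forty separate interpreter startups, which matters at CI scale.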
00:18:28.308 Malloc1 00:18:28.308 [2024-05-15 11:41:58.935997] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:28.308 [2024-05-15 11:41:58.936419] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:28.308 Malloc2 00:18:28.308 Malloc3 00:18:28.308 Malloc4 00:18:28.567 Malloc5 00:18:28.567 Malloc6 00:18:28.567 Malloc7 00:18:28.567 Malloc8 00:18:28.567 Malloc9 00:18:28.567 Malloc10 00:18:28.827 11:41:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.827 11:41:59 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:18:28.827 11:41:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:28.827 11:41:59 -- common/autotest_common.sh@10 -- # set +x 00:18:28.827 11:41:59 -- target/shutdown.sh@78 -- # perfpid=3057945 00:18:28.827 11:41:59 -- target/shutdown.sh@79 -- # waitforlisten 3057945 /var/tmp/bdevperf.sock 00:18:28.827 11:41:59 -- common/autotest_common.sh@827 -- # '[' -z 3057945 ']' 00:18:28.827 11:41:59 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:28.827 11:41:59 -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:28.827 11:41:59 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:18:28.827 11:41:59 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:28.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:28.827 11:41:59 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:18:28.827 11:41:59 -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:28.827 11:41:59 -- common/autotest_common.sh@10 -- # set +x 00:18:28.827 11:41:59 -- nvmf/common.sh@521 -- # config=() 00:18:28.827 11:41:59 -- nvmf/common.sh@521 -- # local subsystem config 00:18:28.827 11:41:59 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:28.827 11:41:59 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:28.827 { 00:18:28.827 "params": { 00:18:28.827 "name": "Nvme$subsystem", 00:18:28.827 "trtype": "$TEST_TRANSPORT", 00:18:28.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:28.827 "adrfam": "ipv4", 00:18:28.827 "trsvcid": "$NVMF_PORT", 00:18:28.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:28.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:28.827 "hdgst": ${hdgst:-false}, 00:18:28.827 "ddgst": ${ddgst:-false} 00:18:28.827 }, 00:18:28.827 "method": "bdev_nvme_attach_controller" 00:18:28.827 } 00:18:28.827 EOF 00:18:28.827 )") 00:18:28.827 11:41:59 -- nvmf/common.sh@543 -- # cat 00:18:28.827 11:41:59 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:28.827 11:41:59 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:28.827 { 00:18:28.827 "params": { 00:18:28.827 "name": "Nvme$subsystem", 00:18:28.827 "trtype": "$TEST_TRANSPORT", 00:18:28.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:28.827 "adrfam": "ipv4", 00:18:28.827 "trsvcid": "$NVMF_PORT", 00:18:28.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:28.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:28.827 "hdgst": ${hdgst:-false}, 00:18:28.827 "ddgst": ${ddgst:-false} 00:18:28.827 }, 00:18:28.827 "method": "bdev_nvme_attach_controller" 00:18:28.827 } 00:18:28.827 
EOF 00:18:28.827 )") 00:18:28.827 11:41:59 -- nvmf/common.sh@543 -- # cat 00:18:28.827 11:41:59 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:28.827 11:41:59 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:28.827 { 00:18:28.827 "params": { 00:18:28.827 "name": "Nvme$subsystem", 00:18:28.827 "trtype": "$TEST_TRANSPORT", 00:18:28.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:28.827 "adrfam": "ipv4", 00:18:28.827 "trsvcid": "$NVMF_PORT", 00:18:28.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:28.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:28.827 "hdgst": ${hdgst:-false}, 00:18:28.827 "ddgst": ${ddgst:-false} 00:18:28.827 }, 00:18:28.827 "method": "bdev_nvme_attach_controller" 00:18:28.827 } 00:18:28.827 EOF 00:18:28.827 )") 00:18:28.827 11:41:59 -- nvmf/common.sh@543 -- # cat 00:18:28.827 11:41:59 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:28.827 11:41:59 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:28.827 { 00:18:28.827 "params": { 00:18:28.827 "name": "Nvme$subsystem", 00:18:28.827 "trtype": "$TEST_TRANSPORT", 00:18:28.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:28.827 "adrfam": "ipv4", 00:18:28.827 "trsvcid": "$NVMF_PORT", 00:18:28.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:28.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:28.827 "hdgst": ${hdgst:-false}, 00:18:28.827 "ddgst": ${ddgst:-false} 00:18:28.827 }, 00:18:28.827 "method": "bdev_nvme_attach_controller" 00:18:28.827 } 00:18:28.827 EOF 00:18:28.827 )") 00:18:28.827 11:41:59 -- nvmf/common.sh@543 -- # cat 00:18:28.827 11:41:59 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:28.827 11:41:59 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:28.827 { 00:18:28.827 "params": { 00:18:28.827 "name": "Nvme$subsystem", 00:18:28.827 "trtype": "$TEST_TRANSPORT", 00:18:28.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:28.827 "adrfam": "ipv4", 00:18:28.827 "trsvcid": "$NVMF_PORT", 00:18:28.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:28.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:28.828 "hdgst": ${hdgst:-false}, 00:18:28.828 "ddgst": ${ddgst:-false} 00:18:28.828 }, 00:18:28.828 "method": "bdev_nvme_attach_controller" 00:18:28.828 } 00:18:28.828 EOF 00:18:28.828 )") 00:18:28.828 11:41:59 -- nvmf/common.sh@543 -- # cat 00:18:28.828 [2024-05-15 11:41:59.445966] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
00:18:28.828 [2024-05-15 11:41:59.446031] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:28.828 11:41:59 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:28.828 11:41:59 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:28.828 { 00:18:28.828 "params": { 00:18:28.828 "name": "Nvme$subsystem", 00:18:28.828 "trtype": "$TEST_TRANSPORT", 00:18:28.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:28.828 "adrfam": "ipv4", 00:18:28.828 "trsvcid": "$NVMF_PORT", 00:18:28.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:28.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:28.828 "hdgst": ${hdgst:-false}, 00:18:28.828 "ddgst": ${ddgst:-false} 00:18:28.828 }, 00:18:28.828 "method": "bdev_nvme_attach_controller" 00:18:28.828 } 00:18:28.828 EOF 00:18:28.828 )") 00:18:28.828 11:41:59 -- nvmf/common.sh@543 -- # cat 00:18:28.828 11:41:59 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:28.828 11:41:59 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:28.828 { 00:18:28.828 "params": { 00:18:28.828 "name": "Nvme$subsystem", 00:18:28.828 "trtype": "$TEST_TRANSPORT", 00:18:28.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:28.828 "adrfam": "ipv4", 00:18:28.828 "trsvcid": "$NVMF_PORT", 00:18:28.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:28.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:28.828 "hdgst": ${hdgst:-false}, 00:18:28.828 "ddgst": ${ddgst:-false} 00:18:28.828 }, 00:18:28.828 "method": "bdev_nvme_attach_controller" 00:18:28.828 } 00:18:28.828 EOF 00:18:28.828 )") 00:18:28.828 11:41:59 -- nvmf/common.sh@543 -- # cat 00:18:28.828 11:41:59 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:28.828 11:41:59 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:28.828 { 00:18:28.828 "params": { 00:18:28.828 "name": "Nvme$subsystem", 00:18:28.828 "trtype": "$TEST_TRANSPORT", 00:18:28.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:28.828 "adrfam": "ipv4", 00:18:28.828 "trsvcid": "$NVMF_PORT", 00:18:28.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:28.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:28.828 "hdgst": ${hdgst:-false}, 00:18:28.828 "ddgst": ${ddgst:-false} 00:18:28.828 }, 00:18:28.828 "method": "bdev_nvme_attach_controller" 00:18:28.828 } 00:18:28.828 EOF 00:18:28.828 )") 00:18:28.828 11:41:59 -- nvmf/common.sh@543 -- # cat 00:18:28.828 11:41:59 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:28.828 11:41:59 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:28.828 { 00:18:28.828 "params": { 00:18:28.828 "name": "Nvme$subsystem", 00:18:28.828 "trtype": "$TEST_TRANSPORT", 00:18:28.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:28.828 "adrfam": "ipv4", 00:18:28.828 "trsvcid": "$NVMF_PORT", 00:18:28.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:28.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:28.828 "hdgst": ${hdgst:-false}, 00:18:28.828 "ddgst": ${ddgst:-false} 00:18:28.828 }, 00:18:28.828 "method": "bdev_nvme_attach_controller" 00:18:28.828 } 00:18:28.828 EOF 00:18:28.828 )") 00:18:28.828 11:41:59 -- nvmf/common.sh@543 -- # cat 00:18:28.828 11:41:59 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:28.828 11:41:59 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:28.828 { 00:18:28.828 "params": { 00:18:28.828 "name": "Nvme$subsystem", 00:18:28.828 "trtype": 
"$TEST_TRANSPORT", 00:18:28.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:28.828 "adrfam": "ipv4", 00:18:28.828 "trsvcid": "$NVMF_PORT", 00:18:28.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:28.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:28.828 "hdgst": ${hdgst:-false}, 00:18:28.828 "ddgst": ${ddgst:-false} 00:18:28.828 }, 00:18:28.828 "method": "bdev_nvme_attach_controller" 00:18:28.828 } 00:18:28.828 EOF 00:18:28.828 )") 00:18:28.828 11:41:59 -- nvmf/common.sh@543 -- # cat 00:18:28.828 EAL: No free 2048 kB hugepages reported on node 1 00:18:28.828 11:41:59 -- nvmf/common.sh@545 -- # jq . 00:18:28.828 11:41:59 -- nvmf/common.sh@546 -- # IFS=, 00:18:28.828 11:41:59 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:28.828 "params": { 00:18:28.828 "name": "Nvme1", 00:18:28.828 "trtype": "rdma", 00:18:28.828 "traddr": "192.168.100.8", 00:18:28.828 "adrfam": "ipv4", 00:18:28.828 "trsvcid": "4420", 00:18:28.828 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.828 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:28.828 "hdgst": false, 00:18:28.828 "ddgst": false 00:18:28.828 }, 00:18:28.828 "method": "bdev_nvme_attach_controller" 00:18:28.828 },{ 00:18:28.828 "params": { 00:18:28.828 "name": "Nvme2", 00:18:28.828 "trtype": "rdma", 00:18:28.828 "traddr": "192.168.100.8", 00:18:28.828 "adrfam": "ipv4", 00:18:28.828 "trsvcid": "4420", 00:18:28.828 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:28.828 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:28.828 "hdgst": false, 00:18:28.828 "ddgst": false 00:18:28.828 }, 00:18:28.828 "method": "bdev_nvme_attach_controller" 00:18:28.828 },{ 00:18:28.828 "params": { 00:18:28.828 "name": "Nvme3", 00:18:28.828 "trtype": "rdma", 00:18:28.828 "traddr": "192.168.100.8", 00:18:28.828 "adrfam": "ipv4", 00:18:28.828 "trsvcid": "4420", 00:18:28.828 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:18:28.828 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:18:28.828 "hdgst": false, 00:18:28.828 "ddgst": false 00:18:28.828 }, 00:18:28.828 "method": "bdev_nvme_attach_controller" 00:18:28.828 },{ 00:18:28.828 "params": { 00:18:28.828 "name": "Nvme4", 00:18:28.828 "trtype": "rdma", 00:18:28.828 "traddr": "192.168.100.8", 00:18:28.828 "adrfam": "ipv4", 00:18:28.828 "trsvcid": "4420", 00:18:28.828 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:18:28.828 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:18:28.828 "hdgst": false, 00:18:28.828 "ddgst": false 00:18:28.828 }, 00:18:28.828 "method": "bdev_nvme_attach_controller" 00:18:28.828 },{ 00:18:28.828 "params": { 00:18:28.828 "name": "Nvme5", 00:18:28.828 "trtype": "rdma", 00:18:28.828 "traddr": "192.168.100.8", 00:18:28.828 "adrfam": "ipv4", 00:18:28.828 "trsvcid": "4420", 00:18:28.828 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:18:28.828 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:18:28.828 "hdgst": false, 00:18:28.828 "ddgst": false 00:18:28.828 }, 00:18:28.828 "method": "bdev_nvme_attach_controller" 00:18:28.828 },{ 00:18:28.828 "params": { 00:18:28.828 "name": "Nvme6", 00:18:28.828 "trtype": "rdma", 00:18:28.828 "traddr": "192.168.100.8", 00:18:28.828 "adrfam": "ipv4", 00:18:28.828 "trsvcid": "4420", 00:18:28.828 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:18:28.828 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:18:28.828 "hdgst": false, 00:18:28.828 "ddgst": false 00:18:28.828 }, 00:18:28.828 "method": "bdev_nvme_attach_controller" 00:18:28.828 },{ 00:18:28.828 "params": { 00:18:28.828 "name": "Nvme7", 00:18:28.828 "trtype": "rdma", 00:18:28.828 "traddr": "192.168.100.8", 00:18:28.828 "adrfam": "ipv4", 00:18:28.828 "trsvcid": 
"4420", 00:18:28.828 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:18:28.828 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:18:28.828 "hdgst": false, 00:18:28.828 "ddgst": false 00:18:28.828 }, 00:18:28.828 "method": "bdev_nvme_attach_controller" 00:18:28.828 },{ 00:18:28.828 "params": { 00:18:28.828 "name": "Nvme8", 00:18:28.828 "trtype": "rdma", 00:18:28.829 "traddr": "192.168.100.8", 00:18:28.829 "adrfam": "ipv4", 00:18:28.829 "trsvcid": "4420", 00:18:28.829 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:18:28.829 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:18:28.829 "hdgst": false, 00:18:28.829 "ddgst": false 00:18:28.829 }, 00:18:28.829 "method": "bdev_nvme_attach_controller" 00:18:28.829 },{ 00:18:28.829 "params": { 00:18:28.829 "name": "Nvme9", 00:18:28.829 "trtype": "rdma", 00:18:28.829 "traddr": "192.168.100.8", 00:18:28.829 "adrfam": "ipv4", 00:18:28.829 "trsvcid": "4420", 00:18:28.829 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:18:28.829 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:18:28.829 "hdgst": false, 00:18:28.829 "ddgst": false 00:18:28.829 }, 00:18:28.829 "method": "bdev_nvme_attach_controller" 00:18:28.829 },{ 00:18:28.829 "params": { 00:18:28.829 "name": "Nvme10", 00:18:28.829 "trtype": "rdma", 00:18:28.829 "traddr": "192.168.100.8", 00:18:28.829 "adrfam": "ipv4", 00:18:28.829 "trsvcid": "4420", 00:18:28.829 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:18:28.829 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:18:28.829 "hdgst": false, 00:18:28.829 "ddgst": false 00:18:28.829 }, 00:18:28.829 "method": "bdev_nvme_attach_controller" 00:18:28.829 }' 00:18:28.829 [2024-05-15 11:41:59.524280] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.088 [2024-05-15 11:41:59.607588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.025 11:42:00 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:30.025 11:42:00 -- common/autotest_common.sh@860 -- # return 0 00:18:30.025 11:42:00 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:18:30.025 11:42:00 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.025 11:42:00 -- common/autotest_common.sh@10 -- # set +x 00:18:30.025 11:42:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.025 11:42:00 -- target/shutdown.sh@83 -- # kill -9 3057945 00:18:30.025 11:42:00 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:18:30.025 11:42:00 -- target/shutdown.sh@87 -- # sleep 1 00:18:30.964 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3057945 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:18:30.964 11:42:01 -- target/shutdown.sh@88 -- # kill -0 3057644 00:18:30.964 11:42:01 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:18:30.964 11:42:01 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:18:30.964 11:42:01 -- nvmf/common.sh@521 -- # config=() 00:18:30.964 11:42:01 -- nvmf/common.sh@521 -- # local subsystem config 00:18:30.964 11:42:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:30.964 11:42:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:30.964 { 00:18:30.964 "params": { 00:18:30.964 "name": "Nvme$subsystem", 00:18:30.964 "trtype": "$TEST_TRANSPORT", 00:18:30.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:30.964 "adrfam": "ipv4", 00:18:30.964 "trsvcid": "$NVMF_PORT", 00:18:30.964 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:30.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:30.964 "hdgst": ${hdgst:-false}, 00:18:30.964 "ddgst": ${ddgst:-false} 00:18:30.964 }, 00:18:30.964 "method": "bdev_nvme_attach_controller" 00:18:30.964 } 00:18:30.964 EOF 00:18:30.964 )") 00:18:30.964 11:42:01 -- nvmf/common.sh@543 -- # cat 00:18:30.964 11:42:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:30.964 11:42:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:30.964 { 00:18:30.964 "params": { 00:18:30.964 "name": "Nvme$subsystem", 00:18:30.964 "trtype": "$TEST_TRANSPORT", 00:18:30.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:30.964 "adrfam": "ipv4", 00:18:30.964 "trsvcid": "$NVMF_PORT", 00:18:30.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:30.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:30.964 "hdgst": ${hdgst:-false}, 00:18:30.964 "ddgst": ${ddgst:-false} 00:18:30.964 }, 00:18:30.964 "method": "bdev_nvme_attach_controller" 00:18:30.964 } 00:18:30.964 EOF 00:18:30.964 )") 00:18:30.964 11:42:01 -- nvmf/common.sh@543 -- # cat 00:18:30.964 11:42:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:30.964 11:42:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:30.964 { 00:18:30.964 "params": { 00:18:30.964 "name": "Nvme$subsystem", 00:18:30.964 "trtype": "$TEST_TRANSPORT", 00:18:30.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:30.964 "adrfam": "ipv4", 00:18:30.964 "trsvcid": "$NVMF_PORT", 00:18:30.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:30.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:30.965 "hdgst": ${hdgst:-false}, 00:18:30.965 "ddgst": ${ddgst:-false} 00:18:30.965 }, 00:18:30.965 "method": "bdev_nvme_attach_controller" 00:18:30.965 } 00:18:30.965 EOF 00:18:30.965 )") 00:18:30.965 11:42:01 -- nvmf/common.sh@543 -- # cat 00:18:30.965 11:42:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:30.965 11:42:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:30.965 { 00:18:30.965 "params": { 00:18:30.965 "name": "Nvme$subsystem", 00:18:30.965 "trtype": "$TEST_TRANSPORT", 00:18:30.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:30.965 "adrfam": "ipv4", 00:18:30.965 "trsvcid": "$NVMF_PORT", 00:18:30.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:30.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:30.965 "hdgst": ${hdgst:-false}, 00:18:30.965 "ddgst": ${ddgst:-false} 00:18:30.965 }, 00:18:30.965 "method": "bdev_nvme_attach_controller" 00:18:30.965 } 00:18:30.965 EOF 00:18:30.965 )") 00:18:30.965 11:42:01 -- nvmf/common.sh@543 -- # cat 00:18:30.965 11:42:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:30.965 11:42:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:30.965 { 00:18:30.965 "params": { 00:18:30.965 "name": "Nvme$subsystem", 00:18:30.965 "trtype": "$TEST_TRANSPORT", 00:18:30.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:30.965 "adrfam": "ipv4", 00:18:30.965 "trsvcid": "$NVMF_PORT", 00:18:30.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:30.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:30.965 "hdgst": ${hdgst:-false}, 00:18:30.965 "ddgst": ${ddgst:-false} 00:18:30.965 }, 00:18:30.965 "method": "bdev_nvme_attach_controller" 00:18:30.965 } 00:18:30.965 EOF 00:18:30.965 )") 00:18:30.965 11:42:01 -- nvmf/common.sh@543 -- # cat 00:18:30.965 11:42:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:30.965 [2024-05-15 11:42:01.523779] Starting SPDK v24.05-pre git sha1 913aa023f / 
DPDK 23.11.0 initialization... 00:18:30.965 [2024-05-15 11:42:01.523849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3058368 ] 00:18:30.965 11:42:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:30.965 { 00:18:30.965 "params": { 00:18:30.965 "name": "Nvme$subsystem", 00:18:30.965 "trtype": "$TEST_TRANSPORT", 00:18:30.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:30.965 "adrfam": "ipv4", 00:18:30.965 "trsvcid": "$NVMF_PORT", 00:18:30.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:30.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:30.965 "hdgst": ${hdgst:-false}, 00:18:30.965 "ddgst": ${ddgst:-false} 00:18:30.965 }, 00:18:30.965 "method": "bdev_nvme_attach_controller" 00:18:30.965 } 00:18:30.965 EOF 00:18:30.965 )") 00:18:30.965 11:42:01 -- nvmf/common.sh@543 -- # cat 00:18:30.965 11:42:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:30.965 11:42:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:30.965 { 00:18:30.965 "params": { 00:18:30.965 "name": "Nvme$subsystem", 00:18:30.965 "trtype": "$TEST_TRANSPORT", 00:18:30.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:30.965 "adrfam": "ipv4", 00:18:30.965 "trsvcid": "$NVMF_PORT", 00:18:30.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:30.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:30.965 "hdgst": ${hdgst:-false}, 00:18:30.965 "ddgst": ${ddgst:-false} 00:18:30.965 }, 00:18:30.965 "method": "bdev_nvme_attach_controller" 00:18:30.965 } 00:18:30.965 EOF 00:18:30.965 )") 00:18:30.965 11:42:01 -- nvmf/common.sh@543 -- # cat 00:18:30.965 11:42:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:30.965 11:42:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:30.965 { 00:18:30.965 "params": { 00:18:30.965 "name": "Nvme$subsystem", 00:18:30.965 "trtype": "$TEST_TRANSPORT", 00:18:30.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:30.965 "adrfam": "ipv4", 00:18:30.965 "trsvcid": "$NVMF_PORT", 00:18:30.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:30.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:30.965 "hdgst": ${hdgst:-false}, 00:18:30.965 "ddgst": ${ddgst:-false} 00:18:30.965 }, 00:18:30.965 "method": "bdev_nvme_attach_controller" 00:18:30.965 } 00:18:30.965 EOF 00:18:30.965 )") 00:18:30.965 11:42:01 -- nvmf/common.sh@543 -- # cat 00:18:30.965 11:42:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:30.965 11:42:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:30.965 { 00:18:30.965 "params": { 00:18:30.965 "name": "Nvme$subsystem", 00:18:30.965 "trtype": "$TEST_TRANSPORT", 00:18:30.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:30.965 "adrfam": "ipv4", 00:18:30.965 "trsvcid": "$NVMF_PORT", 00:18:30.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:30.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:30.965 "hdgst": ${hdgst:-false}, 00:18:30.965 "ddgst": ${ddgst:-false} 00:18:30.965 }, 00:18:30.965 "method": "bdev_nvme_attach_controller" 00:18:30.965 } 00:18:30.965 EOF 00:18:30.965 )") 00:18:30.965 11:42:01 -- nvmf/common.sh@543 -- # cat 00:18:30.965 11:42:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:30.965 11:42:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:30.965 { 00:18:30.965 "params": { 00:18:30.965 "name": "Nvme$subsystem", 00:18:30.965 "trtype": "$TEST_TRANSPORT", 00:18:30.965 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:18:30.965 "adrfam": "ipv4", 00:18:30.965 "trsvcid": "$NVMF_PORT", 00:18:30.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:30.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:30.965 "hdgst": ${hdgst:-false}, 00:18:30.965 "ddgst": ${ddgst:-false} 00:18:30.965 }, 00:18:30.965 "method": "bdev_nvme_attach_controller" 00:18:30.965 } 00:18:30.965 EOF 00:18:30.965 )") 00:18:30.965 11:42:01 -- nvmf/common.sh@543 -- # cat 00:18:30.965 11:42:01 -- nvmf/common.sh@545 -- # jq . 00:18:30.965 EAL: No free 2048 kB hugepages reported on node 1 00:18:30.965 11:42:01 -- nvmf/common.sh@546 -- # IFS=, 00:18:30.966 11:42:01 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:30.966 "params": { 00:18:30.966 "name": "Nvme1", 00:18:30.966 "trtype": "rdma", 00:18:30.966 "traddr": "192.168.100.8", 00:18:30.966 "adrfam": "ipv4", 00:18:30.966 "trsvcid": "4420", 00:18:30.966 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.966 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:30.966 "hdgst": false, 00:18:30.966 "ddgst": false 00:18:30.966 }, 00:18:30.966 "method": "bdev_nvme_attach_controller" 00:18:30.966 },{ 00:18:30.966 "params": { 00:18:30.966 "name": "Nvme2", 00:18:30.966 "trtype": "rdma", 00:18:30.966 "traddr": "192.168.100.8", 00:18:30.966 "adrfam": "ipv4", 00:18:30.966 "trsvcid": "4420", 00:18:30.966 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:30.966 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:30.966 "hdgst": false, 00:18:30.966 "ddgst": false 00:18:30.966 }, 00:18:30.966 "method": "bdev_nvme_attach_controller" 00:18:30.966 },{ 00:18:30.966 "params": { 00:18:30.966 "name": "Nvme3", 00:18:30.966 "trtype": "rdma", 00:18:30.966 "traddr": "192.168.100.8", 00:18:30.966 "adrfam": "ipv4", 00:18:30.966 "trsvcid": "4420", 00:18:30.966 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:18:30.966 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:18:30.966 "hdgst": false, 00:18:30.966 "ddgst": false 00:18:30.966 }, 00:18:30.966 "method": "bdev_nvme_attach_controller" 00:18:30.966 },{ 00:18:30.966 "params": { 00:18:30.966 "name": "Nvme4", 00:18:30.966 "trtype": "rdma", 00:18:30.966 "traddr": "192.168.100.8", 00:18:30.966 "adrfam": "ipv4", 00:18:30.966 "trsvcid": "4420", 00:18:30.966 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:18:30.966 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:18:30.966 "hdgst": false, 00:18:30.966 "ddgst": false 00:18:30.966 }, 00:18:30.966 "method": "bdev_nvme_attach_controller" 00:18:30.966 },{ 00:18:30.966 "params": { 00:18:30.966 "name": "Nvme5", 00:18:30.966 "trtype": "rdma", 00:18:30.966 "traddr": "192.168.100.8", 00:18:30.966 "adrfam": "ipv4", 00:18:30.966 "trsvcid": "4420", 00:18:30.966 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:18:30.966 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:18:30.966 "hdgst": false, 00:18:30.966 "ddgst": false 00:18:30.966 }, 00:18:30.966 "method": "bdev_nvme_attach_controller" 00:18:30.966 },{ 00:18:30.966 "params": { 00:18:30.966 "name": "Nvme6", 00:18:30.966 "trtype": "rdma", 00:18:30.966 "traddr": "192.168.100.8", 00:18:30.966 "adrfam": "ipv4", 00:18:30.966 "trsvcid": "4420", 00:18:30.966 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:18:30.966 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:18:30.966 "hdgst": false, 00:18:30.966 "ddgst": false 00:18:30.966 }, 00:18:30.966 "method": "bdev_nvme_attach_controller" 00:18:30.966 },{ 00:18:30.966 "params": { 00:18:30.966 "name": "Nvme7", 00:18:30.966 "trtype": "rdma", 00:18:30.966 "traddr": "192.168.100.8", 00:18:30.966 "adrfam": "ipv4", 00:18:30.966 "trsvcid": "4420", 00:18:30.966 "subnqn": 
"nqn.2016-06.io.spdk:cnode7", 00:18:30.966 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:18:30.966 "hdgst": false, 00:18:30.966 "ddgst": false 00:18:30.966 }, 00:18:30.966 "method": "bdev_nvme_attach_controller" 00:18:30.966 },{ 00:18:30.966 "params": { 00:18:30.966 "name": "Nvme8", 00:18:30.966 "trtype": "rdma", 00:18:30.966 "traddr": "192.168.100.8", 00:18:30.966 "adrfam": "ipv4", 00:18:30.966 "trsvcid": "4420", 00:18:30.966 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:18:30.966 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:18:30.966 "hdgst": false, 00:18:30.966 "ddgst": false 00:18:30.966 }, 00:18:30.966 "method": "bdev_nvme_attach_controller" 00:18:30.966 },{ 00:18:30.966 "params": { 00:18:30.966 "name": "Nvme9", 00:18:30.966 "trtype": "rdma", 00:18:30.966 "traddr": "192.168.100.8", 00:18:30.966 "adrfam": "ipv4", 00:18:30.966 "trsvcid": "4420", 00:18:30.966 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:18:30.966 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:18:30.966 "hdgst": false, 00:18:30.966 "ddgst": false 00:18:30.966 }, 00:18:30.966 "method": "bdev_nvme_attach_controller" 00:18:30.966 },{ 00:18:30.966 "params": { 00:18:30.966 "name": "Nvme10", 00:18:30.966 "trtype": "rdma", 00:18:30.966 "traddr": "192.168.100.8", 00:18:30.966 "adrfam": "ipv4", 00:18:30.966 "trsvcid": "4420", 00:18:30.966 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:18:30.966 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:18:30.966 "hdgst": false, 00:18:30.966 "ddgst": false 00:18:30.966 }, 00:18:30.966 "method": "bdev_nvme_attach_controller" 00:18:30.966 }' 00:18:30.966 [2024-05-15 11:42:01.601438] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.966 [2024-05-15 11:42:01.683212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.902 Running I/O for 1 seconds... 
00:18:33.281 
00:18:33.281                                                                                              Latency(us)
00:18:33.281 Device Information                                                        : runtime(s)      IOPS     MiB/s    Fail/s     TO/s    Average        min        max
00:18:33.281 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:33.281    Verification LBA range: start 0x0 length 0x400
00:18:33.281    Nvme1n1    :     1.16    373.34    23.33    0.00    0.00   168756.55    9459.98   235245.75
00:18:33.281 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:33.281    Verification LBA range: start 0x0 length 0x400
00:18:33.281    Nvme2n1    :     1.16    385.80    24.11    0.00    0.00   161306.20   10086.85   165948.55
00:18:33.281 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:33.281    Verification LBA range: start 0x0 length 0x400
00:18:33.281    Nvme3n1    :     1.16    385.40    24.09    0.00    0.00   158908.26   10428.77   159565.91
00:18:33.281 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:33.281    Verification LBA range: start 0x0 length 0x400
00:18:33.281    Nvme4n1    :     1.16    393.61    24.60    0.00    0.00   153465.73    6040.71   148624.25
00:18:33.281 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:33.281    Verification LBA range: start 0x0 length 0x400
00:18:33.281    Nvme5n1    :     1.17    384.54    24.03    0.00    0.00   155295.01   11169.61   142241.61
00:18:33.281 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:33.281    Verification LBA range: start 0x0 length 0x400
00:18:33.281    Nvme6n1    :     1.17    384.16    24.01    0.00    0.00   152866.03   11511.54   134947.17
00:18:33.281 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:33.281    Verification LBA range: start 0x0 length 0x400
00:18:33.281    Nvme7n1    :     1.17    383.77    23.99    0.00    0.00   150845.12   11568.53   128564.54
00:18:33.281 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:33.281    Verification LBA range: start 0x0 length 0x400
00:18:33.281    Nvme8n1    :     1.17    388.52    24.28    0.00    0.00   146817.77    4843.97   121270.09
00:18:33.281 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:33.281    Verification LBA range: start 0x0 length 0x400
00:18:33.281    Nvme9n1    :     1.17    382.15    23.88    0.00    0.00   147269.42    2835.14   109872.53
00:18:33.281 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:33.281    Verification LBA range: start 0x0 length 0x400
00:18:33.281    Nvme10n1   :     1.16    276.24    17.26    0.00    0.00   201130.87    8890.10   333720.71
00:18:33.281 ===================================================================================================================
00:18:33.281 Total      :           3737.53   233.60    0.00    0.00   158383.55    2835.14   333720.71
00:18:33.541 11:42:04 -- target/shutdown.sh@94 -- # stoptarget
00:18:33.541 11:42:04 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:18:33.541 11:42:04 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:18:33.541 11:42:04 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:18:33.541 11:42:04 -- target/shutdown.sh@45 -- # nvmftestfini
00:18:33.541 11:42:04 -- nvmf/common.sh@477 -- # nvmfcleanup
00:18:33.541 11:42:04 -- nvmf/common.sh@117 -- # sync
00:18:33.541 11:42:04 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:18:33.541 11:42:04 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:18:33.541 11:42:04 -- nvmf/common.sh@120 -- # set +e
00:18:33.541 11:42:04 -- nvmf/common.sh@121 -- # for i in {1..20}
00:18:33.541 11:42:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:18:33.541 rmmod nvme_rdma
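Two quick consistency checks on the latency table above, before the teardown continues. Throughput: bdevperf ran with -o 65536, so each IO is 64 KiB and MiB/s should equal IOPS/16; 373.34/16 = 23.33 for Nvme1n1 and 3737.53/16 = 233.60 for the Total row, both matching the report. Queue depth: by Little's law, IOPS x average latency should approximate the -q 64 depth, and 373.34 x 0.16876 s gives roughly 63 outstanding IOs for Nvme1n1; Nvme10n1's higher 201130.87 us average pairs with its lower 276.24 IOPS (276.24 x 0.20113 is roughly 56) in the same way.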
00:18:33.541 rmmod nvme_fabrics 00:18:33.541 11:42:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:33.541 11:42:04 -- nvmf/common.sh@124 -- # set -e 00:18:33.541 11:42:04 -- nvmf/common.sh@125 -- # return 0 00:18:33.541 11:42:04 -- nvmf/common.sh@478 -- # '[' -n 3057644 ']' 00:18:33.541 11:42:04 -- nvmf/common.sh@479 -- # killprocess 3057644 00:18:33.541 11:42:04 -- common/autotest_common.sh@946 -- # '[' -z 3057644 ']' 00:18:33.541 11:42:04 -- common/autotest_common.sh@950 -- # kill -0 3057644 00:18:33.541 11:42:04 -- common/autotest_common.sh@951 -- # uname 00:18:33.541 11:42:04 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:33.541 11:42:04 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3057644 00:18:33.541 11:42:04 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:33.541 11:42:04 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:33.541 11:42:04 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3057644' 00:18:33.541 killing process with pid 3057644 00:18:33.541 11:42:04 -- common/autotest_common.sh@965 -- # kill 3057644 00:18:33.541 [2024-05-15 11:42:04.179266] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:33.541 11:42:04 -- common/autotest_common.sh@970 -- # wait 3057644 00:18:33.541 [2024-05-15 11:42:04.265655] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:18:34.110 11:42:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:34.110 11:42:04 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:18:34.110 00:18:34.110 real 0m13.111s 00:18:34.110 user 0m31.552s 00:18:34.110 sys 0m5.883s 00:18:34.110 11:42:04 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:34.110 11:42:04 -- common/autotest_common.sh@10 -- # set +x 00:18:34.110 ************************************ 00:18:34.110 END TEST nvmf_shutdown_tc1 00:18:34.110 ************************************ 00:18:34.110 11:42:04 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:18:34.110 11:42:04 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:18:34.110 11:42:04 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:34.110 11:42:04 -- common/autotest_common.sh@10 -- # set +x 00:18:34.110 ************************************ 00:18:34.110 START TEST nvmf_shutdown_tc2 00:18:34.110 ************************************ 00:18:34.110 11:42:04 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:18:34.110 11:42:04 -- target/shutdown.sh@99 -- # starttarget 00:18:34.110 11:42:04 -- target/shutdown.sh@15 -- # nvmftestinit 00:18:34.110 11:42:04 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:18:34.110 11:42:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:34.110 11:42:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:34.110 11:42:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:34.110 11:42:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:34.110 11:42:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.110 11:42:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:34.110 11:42:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.110 11:42:04 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:34.110 11:42:04 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:34.110 11:42:04 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:18:34.110 11:42:04 -- common/autotest_common.sh@10 -- # set +x 00:18:34.110 11:42:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:34.110 11:42:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:34.110 11:42:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:34.110 11:42:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:34.110 11:42:04 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:34.110 11:42:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:34.110 11:42:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:34.110 11:42:04 -- nvmf/common.sh@295 -- # net_devs=() 00:18:34.110 11:42:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:34.110 11:42:04 -- nvmf/common.sh@296 -- # e810=() 00:18:34.110 11:42:04 -- nvmf/common.sh@296 -- # local -ga e810 00:18:34.110 11:42:04 -- nvmf/common.sh@297 -- # x722=() 00:18:34.110 11:42:04 -- nvmf/common.sh@297 -- # local -ga x722 00:18:34.110 11:42:04 -- nvmf/common.sh@298 -- # mlx=() 00:18:34.110 11:42:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:34.110 11:42:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:34.110 11:42:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:34.110 11:42:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:34.110 11:42:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:34.110 11:42:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:34.110 11:42:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:34.110 11:42:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:34.110 11:42:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:34.110 11:42:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:34.110 11:42:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:34.110 11:42:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:34.110 11:42:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:34.110 11:42:04 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:18:34.110 11:42:04 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:18:34.110 11:42:04 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:18:34.110 11:42:04 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:18:34.110 11:42:04 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:18:34.110 11:42:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:34.110 11:42:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:34.110 11:42:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:18:34.110 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:18:34.110 11:42:04 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:34.110 11:42:04 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:34.110 11:42:04 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:34.110 11:42:04 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:34.110 11:42:04 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:34.110 11:42:04 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:34.110 11:42:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:34.110 11:42:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:18:34.110 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:18:34.110 11:42:04 -- nvmf/common.sh@342 -- # [[ mlx5_core == 
unknown ]] 00:18:34.111 11:42:04 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:34.111 11:42:04 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:34.111 11:42:04 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:34.111 11:42:04 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:34.111 11:42:04 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:34.111 11:42:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:34.111 11:42:04 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:18:34.111 11:42:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:34.111 11:42:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:34.111 11:42:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:34.111 11:42:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:34.111 11:42:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:18:34.111 Found net devices under 0000:18:00.0: mlx_0_0 00:18:34.111 11:42:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:34.111 11:42:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:34.111 11:42:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:34.111 11:42:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:34.111 11:42:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:34.111 11:42:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:18:34.111 Found net devices under 0000:18:00.1: mlx_0_1 00:18:34.111 11:42:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:34.111 11:42:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:34.111 11:42:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:34.111 11:42:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:34.111 11:42:04 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:18:34.111 11:42:04 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:18:34.111 11:42:04 -- nvmf/common.sh@409 -- # rdma_device_init 00:18:34.111 11:42:04 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:18:34.111 11:42:04 -- nvmf/common.sh@58 -- # uname 00:18:34.111 11:42:04 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:18:34.111 11:42:04 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:18:34.111 11:42:04 -- nvmf/common.sh@63 -- # modprobe ib_core 00:18:34.111 11:42:04 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:18:34.111 11:42:04 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:18:34.371 11:42:04 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:18:34.371 11:42:04 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:18:34.371 11:42:04 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:18:34.371 11:42:04 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:18:34.371 11:42:04 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:34.371 11:42:04 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:18:34.371 11:42:04 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:34.371 11:42:04 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:34.371 11:42:04 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:34.371 11:42:04 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:34.371 11:42:04 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:34.371 11:42:04 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:34.371 11:42:04 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:34.371 11:42:04 -- nvmf/common.sh@103 -- # [[ mlx_0_0 
== \m\l\x\_\0\_\0 ]] 00:18:34.371 11:42:04 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:34.371 11:42:04 -- nvmf/common.sh@105 -- # continue 2 00:18:34.371 11:42:04 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:34.371 11:42:04 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:34.371 11:42:04 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:34.371 11:42:04 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:34.371 11:42:04 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:34.371 11:42:04 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:34.371 11:42:04 -- nvmf/common.sh@105 -- # continue 2 00:18:34.371 11:42:04 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:34.371 11:42:04 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:18:34.371 11:42:04 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:34.371 11:42:04 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:34.371 11:42:04 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:34.371 11:42:04 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:34.371 11:42:04 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:18:34.371 11:42:04 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:18:34.371 11:42:04 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:18:34.371 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:34.371 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:18:34.371 altname enp24s0f0np0 00:18:34.371 altname ens785f0np0 00:18:34.371 inet 192.168.100.8/24 scope global mlx_0_0 00:18:34.371 valid_lft forever preferred_lft forever 00:18:34.371 11:42:04 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:34.371 11:42:04 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:18:34.371 11:42:04 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:34.371 11:42:04 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:34.371 11:42:04 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:34.371 11:42:04 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:34.371 11:42:04 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:18:34.371 11:42:04 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:18:34.371 11:42:04 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:18:34.371 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:34.371 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:18:34.371 altname enp24s0f1np1 00:18:34.371 altname ens785f1np1 00:18:34.371 inet 192.168.100.9/24 scope global mlx_0_1 00:18:34.371 valid_lft forever preferred_lft forever 00:18:34.371 11:42:04 -- nvmf/common.sh@411 -- # return 0 00:18:34.371 11:42:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:34.371 11:42:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:34.371 11:42:04 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:18:34.371 11:42:04 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:18:34.371 11:42:04 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:18:34.371 11:42:04 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:34.371 11:42:04 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:34.371 11:42:04 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:34.371 11:42:04 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:34.371 11:42:04 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:34.371 11:42:04 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:34.371 11:42:04 -- nvmf/common.sh@102 -- # for rxe_net_dev 
in "${rxe_net_devs[@]}" 00:18:34.371 11:42:04 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:34.371 11:42:04 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:34.371 11:42:04 -- nvmf/common.sh@105 -- # continue 2 00:18:34.371 11:42:04 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:34.371 11:42:04 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:34.371 11:42:04 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:34.371 11:42:04 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:34.371 11:42:04 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:34.371 11:42:04 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:34.371 11:42:04 -- nvmf/common.sh@105 -- # continue 2 00:18:34.371 11:42:04 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:34.371 11:42:04 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:18:34.371 11:42:04 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:34.371 11:42:04 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:34.371 11:42:04 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:34.371 11:42:04 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:34.371 11:42:05 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:34.371 11:42:05 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:18:34.371 11:42:05 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:34.371 11:42:05 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:34.371 11:42:05 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:34.371 11:42:05 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:34.371 11:42:05 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:18:34.372 192.168.100.9' 00:18:34.372 11:42:05 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:34.372 192.168.100.9' 00:18:34.372 11:42:05 -- nvmf/common.sh@446 -- # head -n 1 00:18:34.372 11:42:05 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:34.372 11:42:05 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:18:34.372 192.168.100.9' 00:18:34.372 11:42:05 -- nvmf/common.sh@447 -- # tail -n +2 00:18:34.372 11:42:05 -- nvmf/common.sh@447 -- # head -n 1 00:18:34.372 11:42:05 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:34.372 11:42:05 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:18:34.372 11:42:05 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:34.372 11:42:05 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:18:34.372 11:42:05 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:18:34.372 11:42:05 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:18:34.372 11:42:05 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:18:34.372 11:42:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:34.372 11:42:05 -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:34.372 11:42:05 -- common/autotest_common.sh@10 -- # set +x 00:18:34.372 11:42:05 -- nvmf/common.sh@470 -- # nvmfpid=3059299 00:18:34.372 11:42:05 -- nvmf/common.sh@471 -- # waitforlisten 3059299 00:18:34.372 11:42:05 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:18:34.372 11:42:05 -- common/autotest_common.sh@827 -- # '[' -z 3059299 ']' 00:18:34.372 11:42:05 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.372 11:42:05 -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:34.372 11:42:05 -- common/autotest_common.sh@834 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.372 11:42:05 -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:34.372 11:42:05 -- common/autotest_common.sh@10 -- # set +x 00:18:34.372 [2024-05-15 11:42:05.117218] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:18:34.372 [2024-05-15 11:42:05.117276] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:34.631 EAL: No free 2048 kB hugepages reported on node 1 00:18:34.631 [2024-05-15 11:42:05.191517] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:34.631 [2024-05-15 11:42:05.279652] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:34.631 [2024-05-15 11:42:05.279694] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:34.631 [2024-05-15 11:42:05.279704] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:34.631 [2024-05-15 11:42:05.279713] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:34.631 [2024-05-15 11:42:05.279720] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:34.631 [2024-05-15 11:42:05.279983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:34.631 [2024-05-15 11:42:05.280047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:34.631 [2024-05-15 11:42:05.280361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.631 [2024-05-15 11:42:05.280362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:35.200 11:42:05 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:35.200 11:42:05 -- common/autotest_common.sh@860 -- # return 0 00:18:35.200 11:42:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:35.200 11:42:05 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:35.200 11:42:05 -- common/autotest_common.sh@10 -- # set +x 00:18:35.461 11:42:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:35.461 11:42:05 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:35.461 11:42:05 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.461 11:42:05 -- common/autotest_common.sh@10 -- # set +x 00:18:35.461 [2024-05-15 11:42:06.002845] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xfc81f0/0xfcc6e0) succeed. 00:18:35.461 [2024-05-15 11:42:06.013439] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xfc9830/0x100dd70) succeed. 
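rpc_cmd in these traces is a thin wrapper that forwards its arguments to scripts/rpc.py on the target's RPC socket, so the transport creation above should be equivalent to the standalone call below (the socket path is the rpc_addr shown in the waitforlisten lines, and the flags are copied verbatim from the trace):

/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

The two create_ib_device notices above confirm the rdma transport bound one IB device per Mellanox port (mlx5_0 and mlx5_1), the same two PCI functions (0000:18:00.0 and 0000:18:00.1) rediscovered during this test's nvmftestinit.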
00:18:35.461 11:42:06 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.461 11:42:06 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:18:35.461 11:42:06 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:18:35.461 11:42:06 -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:35.461 11:42:06 -- common/autotest_common.sh@10 -- # set +x 00:18:35.461 11:42:06 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:35.461 11:42:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:35.461 11:42:06 -- target/shutdown.sh@28 -- # cat 00:18:35.461 11:42:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:35.461 11:42:06 -- target/shutdown.sh@28 -- # cat 00:18:35.461 11:42:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:35.461 11:42:06 -- target/shutdown.sh@28 -- # cat 00:18:35.461 11:42:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:35.461 11:42:06 -- target/shutdown.sh@28 -- # cat 00:18:35.461 11:42:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:35.461 11:42:06 -- target/shutdown.sh@28 -- # cat 00:18:35.461 11:42:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:35.461 11:42:06 -- target/shutdown.sh@28 -- # cat 00:18:35.461 11:42:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:35.461 11:42:06 -- target/shutdown.sh@28 -- # cat 00:18:35.461 11:42:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:35.461 11:42:06 -- target/shutdown.sh@28 -- # cat 00:18:35.461 11:42:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:35.461 11:42:06 -- target/shutdown.sh@28 -- # cat 00:18:35.461 11:42:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:35.461 11:42:06 -- target/shutdown.sh@28 -- # cat 00:18:35.461 11:42:06 -- target/shutdown.sh@35 -- # rpc_cmd 00:18:35.461 11:42:06 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.461 11:42:06 -- common/autotest_common.sh@10 -- # set +x 00:18:35.461 Malloc1 00:18:35.720 [2024-05-15 11:42:06.246429] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:35.720 [2024-05-15 11:42:06.246824] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:35.720 Malloc2 00:18:35.720 Malloc3 00:18:35.720 Malloc4 00:18:35.720 Malloc5 00:18:35.720 Malloc6 00:18:35.980 Malloc7 00:18:35.980 Malloc8 00:18:35.980 Malloc9 00:18:35.980 Malloc10 00:18:35.980 11:42:06 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.980 11:42:06 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:18:35.980 11:42:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:35.980 11:42:06 -- common/autotest_common.sh@10 -- # set +x 00:18:35.980 11:42:06 -- target/shutdown.sh@103 -- # perfpid=3059635 00:18:35.980 11:42:06 -- target/shutdown.sh@104 -- # waitforlisten 3059635 /var/tmp/bdevperf.sock 00:18:35.980 11:42:06 -- common/autotest_common.sh@827 -- # '[' -z 3059635 ']' 00:18:35.980 11:42:06 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:35.980 11:42:06 -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:35.980 11:42:06 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r 
/var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:18:35.980 11:42:06 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:18:35.980 11:42:06 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:35.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:35.980 11:42:06 -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:35.980 11:42:06 -- nvmf/common.sh@521 -- # config=() 00:18:35.980 11:42:06 -- common/autotest_common.sh@10 -- # set +x 00:18:35.980 11:42:06 -- nvmf/common.sh@521 -- # local subsystem config 00:18:35.980 11:42:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:35.980 11:42:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:35.980 { 00:18:35.980 "params": { 00:18:35.980 "name": "Nvme$subsystem", 00:18:35.980 "trtype": "$TEST_TRANSPORT", 00:18:35.980 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:35.980 "adrfam": "ipv4", 00:18:35.980 "trsvcid": "$NVMF_PORT", 00:18:35.980 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:35.980 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:35.980 "hdgst": ${hdgst:-false}, 00:18:35.980 "ddgst": ${ddgst:-false} 00:18:35.980 }, 00:18:35.980 "method": "bdev_nvme_attach_controller" 00:18:35.980 } 00:18:35.980 EOF 00:18:35.980 )") 00:18:35.980 11:42:06 -- nvmf/common.sh@543 -- # cat 00:18:35.980 11:42:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:35.980 11:42:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:35.980 { 00:18:35.980 "params": { 00:18:35.980 "name": "Nvme$subsystem", 00:18:35.980 "trtype": "$TEST_TRANSPORT", 00:18:35.980 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:35.980 "adrfam": "ipv4", 00:18:35.980 "trsvcid": "$NVMF_PORT", 00:18:35.980 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:35.980 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:35.980 "hdgst": ${hdgst:-false}, 00:18:35.980 "ddgst": ${ddgst:-false} 00:18:35.980 }, 00:18:35.980 "method": "bdev_nvme_attach_controller" 00:18:35.980 } 00:18:35.980 EOF 00:18:35.980 )") 00:18:35.980 11:42:06 -- nvmf/common.sh@543 -- # cat 00:18:35.980 11:42:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:35.980 11:42:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:35.980 { 00:18:35.980 "params": { 00:18:35.980 "name": "Nvme$subsystem", 00:18:35.980 "trtype": "$TEST_TRANSPORT", 00:18:35.980 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:35.980 "adrfam": "ipv4", 00:18:35.980 "trsvcid": "$NVMF_PORT", 00:18:35.980 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:35.980 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:35.980 "hdgst": ${hdgst:-false}, 00:18:35.980 "ddgst": ${ddgst:-false} 00:18:35.980 }, 00:18:35.980 "method": "bdev_nvme_attach_controller" 00:18:35.980 } 00:18:35.980 EOF 00:18:35.980 )") 00:18:35.980 11:42:06 -- nvmf/common.sh@543 -- # cat 00:18:35.980 11:42:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:35.980 11:42:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:35.980 { 00:18:35.980 "params": { 00:18:35.980 "name": "Nvme$subsystem", 00:18:35.980 "trtype": "$TEST_TRANSPORT", 00:18:35.980 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:35.980 "adrfam": "ipv4", 00:18:35.980 "trsvcid": "$NVMF_PORT", 00:18:35.980 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:35.980 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:35.980 "hdgst": ${hdgst:-false}, 00:18:35.980 "ddgst": 
${ddgst:-false} 00:18:35.980 }, 00:18:35.980 "method": "bdev_nvme_attach_controller" 00:18:35.980 } 00:18:35.980 EOF 00:18:35.980 )") 00:18:35.980 11:42:06 -- nvmf/common.sh@543 -- # cat 00:18:35.980 11:42:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:35.980 11:42:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:35.980 { 00:18:35.980 "params": { 00:18:35.980 "name": "Nvme$subsystem", 00:18:35.980 "trtype": "$TEST_TRANSPORT", 00:18:35.980 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:35.980 "adrfam": "ipv4", 00:18:35.980 "trsvcid": "$NVMF_PORT", 00:18:35.980 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:35.980 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:35.980 "hdgst": ${hdgst:-false}, 00:18:35.980 "ddgst": ${ddgst:-false} 00:18:35.980 }, 00:18:35.980 "method": "bdev_nvme_attach_controller" 00:18:35.980 } 00:18:35.980 EOF 00:18:35.980 )") 00:18:35.980 11:42:06 -- nvmf/common.sh@543 -- # cat 00:18:36.239 11:42:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:36.239 11:42:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:36.239 { 00:18:36.239 "params": { 00:18:36.239 "name": "Nvme$subsystem", 00:18:36.239 "trtype": "$TEST_TRANSPORT", 00:18:36.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:36.239 "adrfam": "ipv4", 00:18:36.239 "trsvcid": "$NVMF_PORT", 00:18:36.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:36.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:36.239 "hdgst": ${hdgst:-false}, 00:18:36.239 "ddgst": ${ddgst:-false} 00:18:36.239 }, 00:18:36.239 "method": "bdev_nvme_attach_controller" 00:18:36.239 } 00:18:36.239 EOF 00:18:36.239 )") 00:18:36.239 [2024-05-15 11:42:06.749198] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:18:36.239 [2024-05-15 11:42:06.749257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3059635 ] 00:18:36.239 11:42:06 -- nvmf/common.sh@543 -- # cat 00:18:36.239 11:42:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:36.239 11:42:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:36.239 { 00:18:36.239 "params": { 00:18:36.239 "name": "Nvme$subsystem", 00:18:36.239 "trtype": "$TEST_TRANSPORT", 00:18:36.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:36.239 "adrfam": "ipv4", 00:18:36.239 "trsvcid": "$NVMF_PORT", 00:18:36.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:36.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:36.239 "hdgst": ${hdgst:-false}, 00:18:36.239 "ddgst": ${ddgst:-false} 00:18:36.239 }, 00:18:36.239 "method": "bdev_nvme_attach_controller" 00:18:36.239 } 00:18:36.239 EOF 00:18:36.239 )") 00:18:36.239 11:42:06 -- nvmf/common.sh@543 -- # cat 00:18:36.239 11:42:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:36.239 11:42:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:36.239 { 00:18:36.239 "params": { 00:18:36.239 "name": "Nvme$subsystem", 00:18:36.239 "trtype": "$TEST_TRANSPORT", 00:18:36.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:36.239 "adrfam": "ipv4", 00:18:36.239 "trsvcid": "$NVMF_PORT", 00:18:36.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:36.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:36.239 "hdgst": ${hdgst:-false}, 00:18:36.239 "ddgst": ${ddgst:-false} 00:18:36.239 }, 00:18:36.239 "method": "bdev_nvme_attach_controller" 00:18:36.239 } 00:18:36.239 EOF 
00:18:36.239 )") 00:18:36.239 11:42:06 -- nvmf/common.sh@543 -- # cat 00:18:36.239 11:42:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:36.239 11:42:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:36.239 { 00:18:36.239 "params": { 00:18:36.239 "name": "Nvme$subsystem", 00:18:36.239 "trtype": "$TEST_TRANSPORT", 00:18:36.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:36.239 "adrfam": "ipv4", 00:18:36.239 "trsvcid": "$NVMF_PORT", 00:18:36.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:36.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:36.239 "hdgst": ${hdgst:-false}, 00:18:36.239 "ddgst": ${ddgst:-false} 00:18:36.239 }, 00:18:36.239 "method": "bdev_nvme_attach_controller" 00:18:36.239 } 00:18:36.239 EOF 00:18:36.239 )") 00:18:36.239 11:42:06 -- nvmf/common.sh@543 -- # cat 00:18:36.239 11:42:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:36.239 11:42:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:36.239 { 00:18:36.239 "params": { 00:18:36.239 "name": "Nvme$subsystem", 00:18:36.239 "trtype": "$TEST_TRANSPORT", 00:18:36.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:36.239 "adrfam": "ipv4", 00:18:36.239 "trsvcid": "$NVMF_PORT", 00:18:36.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:36.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:36.239 "hdgst": ${hdgst:-false}, 00:18:36.239 "ddgst": ${ddgst:-false} 00:18:36.239 }, 00:18:36.239 "method": "bdev_nvme_attach_controller" 00:18:36.239 } 00:18:36.239 EOF 00:18:36.239 )") 00:18:36.239 11:42:06 -- nvmf/common.sh@543 -- # cat 00:18:36.239 EAL: No free 2048 kB hugepages reported on node 1 00:18:36.239 11:42:06 -- nvmf/common.sh@545 -- # jq . 00:18:36.239 11:42:06 -- nvmf/common.sh@546 -- # IFS=, 00:18:36.239 11:42:06 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:36.239 "params": { 00:18:36.240 "name": "Nvme1", 00:18:36.240 "trtype": "rdma", 00:18:36.240 "traddr": "192.168.100.8", 00:18:36.240 "adrfam": "ipv4", 00:18:36.240 "trsvcid": "4420", 00:18:36.240 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.240 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:36.240 "hdgst": false, 00:18:36.240 "ddgst": false 00:18:36.240 }, 00:18:36.240 "method": "bdev_nvme_attach_controller" 00:18:36.240 },{ 00:18:36.240 "params": { 00:18:36.240 "name": "Nvme2", 00:18:36.240 "trtype": "rdma", 00:18:36.240 "traddr": "192.168.100.8", 00:18:36.240 "adrfam": "ipv4", 00:18:36.240 "trsvcid": "4420", 00:18:36.240 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:36.240 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:36.240 "hdgst": false, 00:18:36.240 "ddgst": false 00:18:36.240 }, 00:18:36.240 "method": "bdev_nvme_attach_controller" 00:18:36.240 },{ 00:18:36.240 "params": { 00:18:36.240 "name": "Nvme3", 00:18:36.240 "trtype": "rdma", 00:18:36.240 "traddr": "192.168.100.8", 00:18:36.240 "adrfam": "ipv4", 00:18:36.240 "trsvcid": "4420", 00:18:36.240 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:18:36.240 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:18:36.240 "hdgst": false, 00:18:36.240 "ddgst": false 00:18:36.240 }, 00:18:36.240 "method": "bdev_nvme_attach_controller" 00:18:36.240 },{ 00:18:36.240 "params": { 00:18:36.240 "name": "Nvme4", 00:18:36.240 "trtype": "rdma", 00:18:36.240 "traddr": "192.168.100.8", 00:18:36.240 "adrfam": "ipv4", 00:18:36.240 "trsvcid": "4420", 00:18:36.240 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:18:36.240 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:18:36.240 "hdgst": false, 00:18:36.240 "ddgst": false 00:18:36.240 }, 00:18:36.240 "method": 
"bdev_nvme_attach_controller" 00:18:36.240 },{ 00:18:36.240 "params": { 00:18:36.240 "name": "Nvme5", 00:18:36.240 "trtype": "rdma", 00:18:36.240 "traddr": "192.168.100.8", 00:18:36.240 "adrfam": "ipv4", 00:18:36.240 "trsvcid": "4420", 00:18:36.240 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:18:36.240 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:18:36.240 "hdgst": false, 00:18:36.240 "ddgst": false 00:18:36.240 }, 00:18:36.240 "method": "bdev_nvme_attach_controller" 00:18:36.240 },{ 00:18:36.240 "params": { 00:18:36.240 "name": "Nvme6", 00:18:36.240 "trtype": "rdma", 00:18:36.240 "traddr": "192.168.100.8", 00:18:36.240 "adrfam": "ipv4", 00:18:36.240 "trsvcid": "4420", 00:18:36.240 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:18:36.240 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:18:36.240 "hdgst": false, 00:18:36.240 "ddgst": false 00:18:36.240 }, 00:18:36.240 "method": "bdev_nvme_attach_controller" 00:18:36.240 },{ 00:18:36.240 "params": { 00:18:36.240 "name": "Nvme7", 00:18:36.240 "trtype": "rdma", 00:18:36.240 "traddr": "192.168.100.8", 00:18:36.240 "adrfam": "ipv4", 00:18:36.240 "trsvcid": "4420", 00:18:36.240 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:18:36.240 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:18:36.240 "hdgst": false, 00:18:36.240 "ddgst": false 00:18:36.240 }, 00:18:36.240 "method": "bdev_nvme_attach_controller" 00:18:36.240 },{ 00:18:36.240 "params": { 00:18:36.240 "name": "Nvme8", 00:18:36.240 "trtype": "rdma", 00:18:36.240 "traddr": "192.168.100.8", 00:18:36.240 "adrfam": "ipv4", 00:18:36.240 "trsvcid": "4420", 00:18:36.240 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:18:36.240 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:18:36.240 "hdgst": false, 00:18:36.240 "ddgst": false 00:18:36.240 }, 00:18:36.240 "method": "bdev_nvme_attach_controller" 00:18:36.240 },{ 00:18:36.240 "params": { 00:18:36.240 "name": "Nvme9", 00:18:36.240 "trtype": "rdma", 00:18:36.240 "traddr": "192.168.100.8", 00:18:36.240 "adrfam": "ipv4", 00:18:36.240 "trsvcid": "4420", 00:18:36.240 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:18:36.240 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:18:36.240 "hdgst": false, 00:18:36.240 "ddgst": false 00:18:36.240 }, 00:18:36.240 "method": "bdev_nvme_attach_controller" 00:18:36.240 },{ 00:18:36.240 "params": { 00:18:36.240 "name": "Nvme10", 00:18:36.240 "trtype": "rdma", 00:18:36.240 "traddr": "192.168.100.8", 00:18:36.240 "adrfam": "ipv4", 00:18:36.240 "trsvcid": "4420", 00:18:36.240 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:18:36.240 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:18:36.240 "hdgst": false, 00:18:36.240 "ddgst": false 00:18:36.240 }, 00:18:36.240 "method": "bdev_nvme_attach_controller" 00:18:36.240 }' 00:18:36.240 [2024-05-15 11:42:06.824189] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.240 [2024-05-15 11:42:06.906305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.178 Running I/O for 10 seconds... 
00:18:37.178 11:42:07 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:37.178 11:42:07 -- common/autotest_common.sh@860 -- # return 0 00:18:37.178 11:42:07 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:18:37.178 11:42:07 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.178 11:42:07 -- common/autotest_common.sh@10 -- # set +x 00:18:37.438 11:42:07 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.438 11:42:07 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:18:37.438 11:42:07 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:18:37.438 11:42:07 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:18:37.438 11:42:07 -- target/shutdown.sh@57 -- # local ret=1 00:18:37.438 11:42:07 -- target/shutdown.sh@58 -- # local i 00:18:37.438 11:42:07 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:18:37.438 11:42:07 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:18:37.438 11:42:07 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:18:37.438 11:42:07 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:18:37.438 11:42:07 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.438 11:42:07 -- common/autotest_common.sh@10 -- # set +x 00:18:37.438 11:42:08 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.438 11:42:08 -- target/shutdown.sh@60 -- # read_io_count=3 00:18:37.438 11:42:08 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:18:37.438 11:42:08 -- target/shutdown.sh@67 -- # sleep 0.25 00:18:37.698 11:42:08 -- target/shutdown.sh@59 -- # (( i-- )) 00:18:37.698 11:42:08 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:18:37.698 11:42:08 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:18:37.698 11:42:08 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:18:37.698 11:42:08 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.698 11:42:08 -- common/autotest_common.sh@10 -- # set +x 00:18:37.958 11:42:08 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.958 11:42:08 -- target/shutdown.sh@60 -- # read_io_count=155 00:18:37.958 11:42:08 -- target/shutdown.sh@63 -- # '[' 155 -ge 100 ']' 00:18:37.958 11:42:08 -- target/shutdown.sh@64 -- # ret=0 00:18:37.958 11:42:08 -- target/shutdown.sh@65 -- # break 00:18:37.958 11:42:08 -- target/shutdown.sh@69 -- # return 0 00:18:37.958 11:42:08 -- target/shutdown.sh@110 -- # killprocess 3059635 00:18:37.958 11:42:08 -- common/autotest_common.sh@946 -- # '[' -z 3059635 ']' 00:18:37.958 11:42:08 -- common/autotest_common.sh@950 -- # kill -0 3059635 00:18:37.958 11:42:08 -- common/autotest_common.sh@951 -- # uname 00:18:37.958 11:42:08 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:37.958 11:42:08 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3059635 00:18:37.958 11:42:08 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:37.958 11:42:08 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:37.958 11:42:08 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3059635' 00:18:37.958 killing process with pid 3059635 00:18:37.958 11:42:08 -- common/autotest_common.sh@965 -- # kill 3059635 00:18:37.958 11:42:08 -- common/autotest_common.sh@970 -- # wait 3059635
00:18:37.958 Received shutdown signal, test time was about 0.909701 seconds
00:18:37.958
00:18:37.958 Latency(us)
00:18:37.958 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:37.958 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:37.958 Verification LBA range: start 0x0 length 0x400
00:18:37.958 Nvme1n1 : 0.89 313.32 19.58 0.00 0.00 200886.56 7693.36 217009.64
00:18:37.958 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:37.958 Verification LBA range: start 0x0 length 0x400
00:18:37.958 Nvme2n1 : 0.89 320.70 20.04 0.00 0.00 192924.54 7978.30 209715.20
00:18:37.958 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:37.958 Verification LBA range: start 0x0 length 0x400
00:18:37.958 Nvme3n1 : 0.90 357.09 22.32 0.00 0.00 170171.79 5698.78 148624.25
00:18:37.958 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:37.958 Verification LBA range: start 0x0 length 0x400
00:18:37.958 Nvme4n1 : 0.90 356.57 22.29 0.00 0.00 167279.48 8662.15 142241.61
00:18:37.958 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:37.958 Verification LBA range: start 0x0 length 0x400
00:18:37.958 Nvme5n1 : 0.90 355.92 22.25 0.00 0.00 165092.84 9402.99 134035.37
00:18:37.958 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:37.958 Verification LBA range: start 0x0 length 0x400
00:18:37.958 Nvme6n1 : 0.90 355.45 22.22 0.00 0.00 161500.92 9858.89 127652.73
00:18:37.958 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:37.958 Verification LBA range: start 0x0 length 0x400
00:18:37.958 Nvme7n1 : 0.90 354.89 22.18 0.00 0.00 159051.60 10314.80 123093.70
00:18:37.958 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:37.958 Verification LBA range: start 0x0 length 0x400
00:18:37.958 Nvme8n1 : 0.90 354.23 22.14 0.00 0.00 156777.43 11112.63 110784.33
00:18:37.958 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:37.958 Verification LBA range: start 0x0 length 0x400
00:18:37.958 Nvme9n1 : 0.90 353.59 22.10 0.00 0.00 154102.21 12081.42 111240.24
00:18:37.958 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:37.958 Verification LBA range: start 0x0 length 0x400
00:18:37.958 Nvme10n1 : 0.91 281.62 17.60 0.00 0.00 189147.27 3091.59 238892.97
00:18:37.958 ===================================================================================================================
00:18:37.958 Total : 3403.38 212.71 0.00 0.00 170718.80 3091.59 238892.97
00:18:38.527 11:42:09 -- target/shutdown.sh@113 -- # sleep 1 00:18:39.464 11:42:10 -- target/shutdown.sh@114 -- # kill -0 3059299 00:18:39.464 11:42:10 -- target/shutdown.sh@116 -- # stoptarget 00:18:39.464 11:42:10 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:18:39.464 11:42:10 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:18:39.464 11:42:10 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:39.464 11:42:10 -- target/shutdown.sh@45 -- # nvmftestfini 00:18:39.464 11:42:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:39.464 11:42:10 -- nvmf/common.sh@117 -- # sync 00:18:39.464 11:42:10 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:18:39.464 11:42:10 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:18:39.464 11:42:10 -- nvmf/common.sh@120 -- # set +e 00:18:39.464 11:42:10 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:39.464 11:42:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:18:39.464
rmmod nvme_rdma 00:18:39.464 rmmod nvme_fabrics 00:18:39.464 11:42:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:39.464 11:42:10 -- nvmf/common.sh@124 -- # set -e 00:18:39.464 11:42:10 -- nvmf/common.sh@125 -- # return 0 00:18:39.464 11:42:10 -- nvmf/common.sh@478 -- # '[' -n 3059299 ']' 00:18:39.464 11:42:10 -- nvmf/common.sh@479 -- # killprocess 3059299 00:18:39.464 11:42:10 -- common/autotest_common.sh@946 -- # '[' -z 3059299 ']' 00:18:39.464 11:42:10 -- common/autotest_common.sh@950 -- # kill -0 3059299 00:18:39.464 11:42:10 -- common/autotest_common.sh@951 -- # uname 00:18:39.464 11:42:10 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:39.464 11:42:10 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3059299 00:18:39.464 11:42:10 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:39.464 11:42:10 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:39.464 11:42:10 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3059299' 00:18:39.464 killing process with pid 3059299 00:18:39.464 11:42:10 -- common/autotest_common.sh@965 -- # kill 3059299 00:18:39.464 [2024-05-15 11:42:10.111159] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:39.464 11:42:10 -- common/autotest_common.sh@970 -- # wait 3059299 00:18:39.464 [2024-05-15 11:42:10.193703] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:18:40.035 11:42:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:40.035 11:42:10 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:18:40.035 00:18:40.035 real 0m5.786s 00:18:40.035 user 0m23.171s 00:18:40.035 sys 0m1.221s 00:18:40.035 11:42:10 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:40.035 11:42:10 -- common/autotest_common.sh@10 -- # set +x 00:18:40.035 ************************************ 00:18:40.035 END TEST nvmf_shutdown_tc2 00:18:40.035 ************************************ 00:18:40.035 11:42:10 -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:18:40.035 11:42:10 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:18:40.035 11:42:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:40.035 11:42:10 -- common/autotest_common.sh@10 -- # set +x 00:18:40.035 ************************************ 00:18:40.035 START TEST nvmf_shutdown_tc3 00:18:40.035 ************************************ 00:18:40.035 11:42:10 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:18:40.035 11:42:10 -- target/shutdown.sh@121 -- # starttarget 00:18:40.035 11:42:10 -- target/shutdown.sh@15 -- # nvmftestinit 00:18:40.035 11:42:10 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:18:40.035 11:42:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:40.035 11:42:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:40.035 11:42:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:40.035 11:42:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:40.035 11:42:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.035 11:42:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:40.035 11:42:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.035 11:42:10 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:40.035 11:42:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:40.035 11:42:10 
-- nvmf/common.sh@285 -- # xtrace_disable 00:18:40.035 11:42:10 -- common/autotest_common.sh@10 -- # set +x 00:18:40.035 11:42:10 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:40.035 11:42:10 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:40.035 11:42:10 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:40.035 11:42:10 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:40.035 11:42:10 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:40.035 11:42:10 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:40.035 11:42:10 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:40.035 11:42:10 -- nvmf/common.sh@295 -- # net_devs=() 00:18:40.035 11:42:10 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:40.035 11:42:10 -- nvmf/common.sh@296 -- # e810=() 00:18:40.035 11:42:10 -- nvmf/common.sh@296 -- # local -ga e810 00:18:40.035 11:42:10 -- nvmf/common.sh@297 -- # x722=() 00:18:40.035 11:42:10 -- nvmf/common.sh@297 -- # local -ga x722 00:18:40.035 11:42:10 -- nvmf/common.sh@298 -- # mlx=() 00:18:40.035 11:42:10 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:40.035 11:42:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:40.035 11:42:10 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:40.035 11:42:10 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:40.035 11:42:10 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:40.035 11:42:10 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:40.035 11:42:10 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:40.035 11:42:10 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:40.035 11:42:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:40.035 11:42:10 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:40.035 11:42:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:40.035 11:42:10 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:40.035 11:42:10 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:40.035 11:42:10 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:18:40.035 11:42:10 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:18:40.035 11:42:10 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:18:40.035 11:42:10 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:18:40.035 11:42:10 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:18:40.035 11:42:10 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:40.035 11:42:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:40.035 11:42:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:18:40.035 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:18:40.035 11:42:10 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:40.035 11:42:10 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:40.035 11:42:10 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:40.035 11:42:10 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:40.035 11:42:10 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:40.035 11:42:10 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:40.035 11:42:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:40.035 11:42:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:18:40.035 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:18:40.035 11:42:10 -- nvmf/common.sh@342 -- # [[ mlx5_core 
== unknown ]] 00:18:40.035 11:42:10 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:40.035 11:42:10 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:40.035 11:42:10 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:40.035 11:42:10 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:40.035 11:42:10 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:40.035 11:42:10 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:40.035 11:42:10 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:18:40.035 11:42:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:40.035 11:42:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:40.035 11:42:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:40.035 11:42:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:40.035 11:42:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:18:40.035 Found net devices under 0000:18:00.0: mlx_0_0 00:18:40.035 11:42:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:40.035 11:42:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:40.035 11:42:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:40.035 11:42:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:40.035 11:42:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:40.035 11:42:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:18:40.035 Found net devices under 0000:18:00.1: mlx_0_1 00:18:40.035 11:42:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:40.035 11:42:10 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:40.035 11:42:10 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:40.035 11:42:10 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:40.035 11:42:10 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:18:40.035 11:42:10 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:18:40.035 11:42:10 -- nvmf/common.sh@409 -- # rdma_device_init 00:18:40.035 11:42:10 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:18:40.035 11:42:10 -- nvmf/common.sh@58 -- # uname 00:18:40.035 11:42:10 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:18:40.035 11:42:10 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:18:40.035 11:42:10 -- nvmf/common.sh@63 -- # modprobe ib_core 00:18:40.035 11:42:10 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:18:40.035 11:42:10 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:18:40.035 11:42:10 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:18:40.035 11:42:10 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:18:40.035 11:42:10 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:18:40.035 11:42:10 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:18:40.035 11:42:10 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:40.035 11:42:10 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:18:40.035 11:42:10 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:40.035 11:42:10 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:40.035 11:42:10 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:40.035 11:42:10 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:40.295 11:42:10 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:40.295 11:42:10 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:40.295 11:42:10 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:40.295 11:42:10 -- nvmf/common.sh@103 -- # [[ 
mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:40.295 11:42:10 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:40.295 11:42:10 -- nvmf/common.sh@105 -- # continue 2 00:18:40.295 11:42:10 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:40.295 11:42:10 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:40.295 11:42:10 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:40.295 11:42:10 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:40.295 11:42:10 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:40.295 11:42:10 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:40.295 11:42:10 -- nvmf/common.sh@105 -- # continue 2 00:18:40.295 11:42:10 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:40.295 11:42:10 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:18:40.295 11:42:10 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:40.296 11:42:10 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:40.296 11:42:10 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:40.296 11:42:10 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:40.296 11:42:10 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:18:40.296 11:42:10 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:18:40.296 11:42:10 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0
00:18:40.296 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:18:40.296     link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff
00:18:40.296     altname enp24s0f0np0
00:18:40.296     altname ens785f0np0
00:18:40.296     inet 192.168.100.8/24 scope global mlx_0_0
00:18:40.296        valid_lft forever preferred_lft forever
00:18:40.296 11:42:10 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:40.296 11:42:10 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:18:40.296 11:42:10 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:40.296 11:42:10 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:40.296 11:42:10 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:40.296 11:42:10 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:40.296 11:42:10 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:18:40.296 11:42:10 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:18:40.296 11:42:10 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1
00:18:40.296 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:18:40.296     link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff
00:18:40.296     altname enp24s0f1np1
00:18:40.296     altname ens785f1np1
00:18:40.296     inet 192.168.100.9/24 scope global mlx_0_1
00:18:40.296        valid_lft forever preferred_lft forever
00:18:40.296 11:42:10 -- nvmf/common.sh@411 -- # return 0 00:18:40.296 11:42:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:40.296 11:42:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:40.296 11:42:10 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:18:40.296 11:42:10 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:18:40.296 11:42:10 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:18:40.296 11:42:10 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:40.296 11:42:10 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:40.296 11:42:10 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:40.296 11:42:10 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:40.296 11:42:10 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:40.296 11:42:10 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:40.296 11:42:10 -- nvmf/common.sh@102 -- # for
rxe_net_dev in "${rxe_net_devs[@]}" 00:18:40.296 11:42:10 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:40.296 11:42:10 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:40.296 11:42:10 -- nvmf/common.sh@105 -- # continue 2 00:18:40.296 11:42:10 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:40.296 11:42:10 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:40.296 11:42:10 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:40.296 11:42:10 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:40.296 11:42:10 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:40.296 11:42:10 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:40.296 11:42:10 -- nvmf/common.sh@105 -- # continue 2 00:18:40.296 11:42:10 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:40.296 11:42:10 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:18:40.296 11:42:10 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:40.296 11:42:10 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:40.296 11:42:10 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:40.296 11:42:10 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:40.296 11:42:10 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:40.296 11:42:10 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:18:40.296 11:42:10 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:40.296 11:42:10 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:40.296 11:42:10 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:40.296 11:42:10 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:40.296 11:42:10 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:18:40.296 192.168.100.9' 00:18:40.296 11:42:10 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:40.296 192.168.100.9' 00:18:40.296 11:42:10 -- nvmf/common.sh@446 -- # head -n 1 00:18:40.296 11:42:10 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:40.296 11:42:10 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:18:40.296 192.168.100.9' 00:18:40.296 11:42:10 -- nvmf/common.sh@447 -- # tail -n +2 00:18:40.296 11:42:10 -- nvmf/common.sh@447 -- # head -n 1 00:18:40.296 11:42:10 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:40.296 11:42:10 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:18:40.296 11:42:10 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:40.296 11:42:10 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:18:40.296 11:42:10 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:18:40.296 11:42:10 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:18:40.296 11:42:10 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:18:40.296 11:42:10 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:40.296 11:42:10 -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:40.296 11:42:10 -- common/autotest_common.sh@10 -- # set +x 00:18:40.296 11:42:10 -- nvmf/common.sh@470 -- # nvmfpid=3060297 00:18:40.296 11:42:10 -- nvmf/common.sh@471 -- # waitforlisten 3060297 00:18:40.296 11:42:10 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:18:40.296 11:42:10 -- common/autotest_common.sh@827 -- # '[' -z 3060297 ']' 00:18:40.296 11:42:10 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.296 11:42:10 -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:40.296 11:42:10 -- common/autotest_common.sh@834 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.296 11:42:10 -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:40.296 11:42:10 -- common/autotest_common.sh@10 -- # set +x 00:18:40.296 [2024-05-15 11:42:11.031218] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:18:40.296 [2024-05-15 11:42:11.031276] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:40.556 EAL: No free 2048 kB hugepages reported on node 1 00:18:40.556 [2024-05-15 11:42:11.105350] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:40.556 [2024-05-15 11:42:11.194418] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:40.556 [2024-05-15 11:42:11.194470] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:40.556 [2024-05-15 11:42:11.194479] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:40.556 [2024-05-15 11:42:11.194487] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:40.556 [2024-05-15 11:42:11.194494] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:40.556 [2024-05-15 11:42:11.194606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:40.556 [2024-05-15 11:42:11.194704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:40.556 [2024-05-15 11:42:11.194807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:40.556 [2024-05-15 11:42:11.194809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:41.208 11:42:11 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:41.208 11:42:11 -- common/autotest_common.sh@860 -- # return 0 00:18:41.208 11:42:11 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:41.208 11:42:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:41.208 11:42:11 -- common/autotest_common.sh@10 -- # set +x 00:18:41.208 11:42:11 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:41.208 11:42:11 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:41.208 11:42:11 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.208 11:42:11 -- common/autotest_common.sh@10 -- # set +x 00:18:41.208 [2024-05-15 11:42:11.916480] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xbdd1f0/0xbe16e0) succeed. 00:18:41.208 [2024-05-15 11:42:11.927054] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xbde830/0xc22d70) succeed. 
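At this point the tc3 target is up: nvmf_tgt was started in the background as pid 3060297 and waitforlisten blocked until the RPC socket at /var/tmp/spdk.sock answered (the trace exposes max_retries=100 at @832 and the final (( i == 0 )) / return 0 at @856-@860). The loop body below is a hedged sketch of what sits between those points; only the retry budget, the banner, and the exit checks are visible in the log, and the use of rpc.py with rpc_get_methods as the liveness probe is an assumption:

    # Hedged reconstruction of waitforlisten; $rootdir/scripts/rpc.py is SPDK's
    # standard RPC client, but its role as the probe here is assumed.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = max_retries; i != 0; i--)); do
            kill -0 "$pid" 2> /dev/null || return 1   # target died during startup
            [[ -S $rpc_addr ]] && "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &> /dev/null && break
            sleep 0.1
        done
        ((i == 0)) && return 1   # retries exhausted; cf. the (( i == 0 )) check at @856
        return 0
    }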
00:18:41.468 11:42:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.468 11:42:12 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:18:41.468 11:42:12 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:18:41.468 11:42:12 -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:41.468 11:42:12 -- common/autotest_common.sh@10 -- # set +x 00:18:41.468 11:42:12 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:41.468 11:42:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:41.468 11:42:12 -- target/shutdown.sh@28 -- # cat 00:18:41.468 11:42:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:41.468 11:42:12 -- target/shutdown.sh@28 -- # cat 00:18:41.468 11:42:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:41.468 11:42:12 -- target/shutdown.sh@28 -- # cat 00:18:41.468 11:42:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:41.468 11:42:12 -- target/shutdown.sh@28 -- # cat 00:18:41.468 11:42:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:41.468 11:42:12 -- target/shutdown.sh@28 -- # cat 00:18:41.468 11:42:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:41.468 11:42:12 -- target/shutdown.sh@28 -- # cat 00:18:41.468 11:42:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:41.468 11:42:12 -- target/shutdown.sh@28 -- # cat 00:18:41.468 11:42:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:41.468 11:42:12 -- target/shutdown.sh@28 -- # cat 00:18:41.468 11:42:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:41.468 11:42:12 -- target/shutdown.sh@28 -- # cat 00:18:41.468 11:42:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:41.468 11:42:12 -- target/shutdown.sh@28 -- # cat 00:18:41.468 11:42:12 -- target/shutdown.sh@35 -- # rpc_cmd 00:18:41.468 11:42:12 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.468 11:42:12 -- common/autotest_common.sh@10 -- # set +x 00:18:41.468 Malloc1 00:18:41.468 [2024-05-15 11:42:12.152280] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:41.468 [2024-05-15 11:42:12.152686] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:41.468 Malloc2 00:18:41.468 Malloc3 00:18:41.727 Malloc4 00:18:41.728 Malloc5 00:18:41.728 Malloc6 00:18:41.728 Malloc7 00:18:41.728 Malloc8 00:18:41.987 Malloc9 00:18:41.987 Malloc10 00:18:41.987 11:42:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.987 11:42:12 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:18:41.987 11:42:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:41.987 11:42:12 -- common/autotest_common.sh@10 -- # set +x 00:18:41.987 11:42:12 -- target/shutdown.sh@125 -- # perfpid=3060540 00:18:41.987 11:42:12 -- target/shutdown.sh@126 -- # waitforlisten 3060540 /var/tmp/bdevperf.sock 00:18:41.987 11:42:12 -- common/autotest_common.sh@827 -- # '[' -z 3060540 ']' 00:18:41.988 11:42:12 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:41.988 11:42:12 -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:41.988 11:42:12 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r 
/var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:18:41.988 11:42:12 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:18:41.988 11:42:12 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:41.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:41.988 11:42:12 -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:41.988 11:42:12 -- nvmf/common.sh@521 -- # config=() 00:18:41.988 11:42:12 -- common/autotest_common.sh@10 -- # set +x 00:18:41.988 11:42:12 -- nvmf/common.sh@521 -- # local subsystem config 00:18:41.988 11:42:12 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:41.988 11:42:12 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:41.988 { 00:18:41.988 "params": { 00:18:41.988 "name": "Nvme$subsystem", 00:18:41.988 "trtype": "$TEST_TRANSPORT", 00:18:41.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:41.988 "adrfam": "ipv4", 00:18:41.988 "trsvcid": "$NVMF_PORT", 00:18:41.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:41.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:41.988 "hdgst": ${hdgst:-false}, 00:18:41.988 "ddgst": ${ddgst:-false} 00:18:41.988 }, 00:18:41.988 "method": "bdev_nvme_attach_controller" 00:18:41.988 } 00:18:41.988 EOF 00:18:41.988 )") 00:18:41.988 11:42:12 -- nvmf/common.sh@543 -- # cat 00:18:41.988 11:42:12 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:41.988 11:42:12 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:41.988 { 00:18:41.988 "params": { 00:18:41.988 "name": "Nvme$subsystem", 00:18:41.988 "trtype": "$TEST_TRANSPORT", 00:18:41.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:41.988 "adrfam": "ipv4", 00:18:41.988 "trsvcid": "$NVMF_PORT", 00:18:41.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:41.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:41.988 "hdgst": ${hdgst:-false}, 00:18:41.988 "ddgst": ${ddgst:-false} 00:18:41.988 }, 00:18:41.988 "method": "bdev_nvme_attach_controller" 00:18:41.988 } 00:18:41.988 EOF 00:18:41.988 )") 00:18:41.988 11:42:12 -- nvmf/common.sh@543 -- # cat 00:18:41.988 11:42:12 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:41.988 11:42:12 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:41.988 { 00:18:41.988 "params": { 00:18:41.988 "name": "Nvme$subsystem", 00:18:41.988 "trtype": "$TEST_TRANSPORT", 00:18:41.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:41.988 "adrfam": "ipv4", 00:18:41.988 "trsvcid": "$NVMF_PORT", 00:18:41.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:41.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:41.988 "hdgst": ${hdgst:-false}, 00:18:41.988 "ddgst": ${ddgst:-false} 00:18:41.988 }, 00:18:41.988 "method": "bdev_nvme_attach_controller" 00:18:41.988 } 00:18:41.988 EOF 00:18:41.988 )") 00:18:41.988 11:42:12 -- nvmf/common.sh@543 -- # cat 00:18:41.988 11:42:12 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:41.988 11:42:12 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:41.988 { 00:18:41.988 "params": { 00:18:41.988 "name": "Nvme$subsystem", 00:18:41.988 "trtype": "$TEST_TRANSPORT", 00:18:41.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:41.988 "adrfam": "ipv4", 00:18:41.988 "trsvcid": "$NVMF_PORT", 00:18:41.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:41.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:41.988 "hdgst": ${hdgst:-false}, 00:18:41.988 "ddgst": 
${ddgst:-false} 00:18:41.988 }, 00:18:41.988 "method": "bdev_nvme_attach_controller" 00:18:41.988 } 00:18:41.988 EOF 00:18:41.988 )") 00:18:41.988 11:42:12 -- nvmf/common.sh@543 -- # cat 00:18:41.988 11:42:12 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:41.988 11:42:12 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:41.988 { 00:18:41.988 "params": { 00:18:41.988 "name": "Nvme$subsystem", 00:18:41.988 "trtype": "$TEST_TRANSPORT", 00:18:41.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:41.988 "adrfam": "ipv4", 00:18:41.988 "trsvcid": "$NVMF_PORT", 00:18:41.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:41.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:41.988 "hdgst": ${hdgst:-false}, 00:18:41.988 "ddgst": ${ddgst:-false} 00:18:41.988 }, 00:18:41.988 "method": "bdev_nvme_attach_controller" 00:18:41.988 } 00:18:41.988 EOF 00:18:41.988 )") 00:18:41.988 11:42:12 -- nvmf/common.sh@543 -- # cat 00:18:41.988 11:42:12 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:41.988 11:42:12 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:41.988 { 00:18:41.988 "params": { 00:18:41.988 "name": "Nvme$subsystem", 00:18:41.988 "trtype": "$TEST_TRANSPORT", 00:18:41.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:41.988 "adrfam": "ipv4", 00:18:41.988 "trsvcid": "$NVMF_PORT", 00:18:41.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:41.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:41.988 "hdgst": ${hdgst:-false}, 00:18:41.988 "ddgst": ${ddgst:-false} 00:18:41.988 }, 00:18:41.988 "method": "bdev_nvme_attach_controller" 00:18:41.988 } 00:18:41.988 EOF 00:18:41.988 )") 00:18:41.988 [2024-05-15 11:42:12.659848] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:18:41.988 [2024-05-15 11:42:12.659911] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3060540 ] 00:18:41.988 11:42:12 -- nvmf/common.sh@543 -- # cat 00:18:41.988 11:42:12 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:41.988 11:42:12 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:41.988 { 00:18:41.988 "params": { 00:18:41.988 "name": "Nvme$subsystem", 00:18:41.988 "trtype": "$TEST_TRANSPORT", 00:18:41.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:41.988 "adrfam": "ipv4", 00:18:41.988 "trsvcid": "$NVMF_PORT", 00:18:41.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:41.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:41.988 "hdgst": ${hdgst:-false}, 00:18:41.988 "ddgst": ${ddgst:-false} 00:18:41.988 }, 00:18:41.988 "method": "bdev_nvme_attach_controller" 00:18:41.988 } 00:18:41.988 EOF 00:18:41.988 )") 00:18:41.988 11:42:12 -- nvmf/common.sh@543 -- # cat 00:18:41.988 11:42:12 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:41.988 11:42:12 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:41.988 { 00:18:41.988 "params": { 00:18:41.988 "name": "Nvme$subsystem", 00:18:41.988 "trtype": "$TEST_TRANSPORT", 00:18:41.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:41.988 "adrfam": "ipv4", 00:18:41.988 "trsvcid": "$NVMF_PORT", 00:18:41.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:41.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:41.988 "hdgst": ${hdgst:-false}, 00:18:41.988 "ddgst": ${ddgst:-false} 00:18:41.988 }, 00:18:41.988 "method": "bdev_nvme_attach_controller" 00:18:41.988 } 00:18:41.988 EOF 
00:18:41.988 )") 00:18:41.988 11:42:12 -- nvmf/common.sh@543 -- # cat 00:18:41.988 11:42:12 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:41.988 11:42:12 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:41.988 { 00:18:41.988 "params": { 00:18:41.988 "name": "Nvme$subsystem", 00:18:41.988 "trtype": "$TEST_TRANSPORT", 00:18:41.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:41.988 "adrfam": "ipv4", 00:18:41.988 "trsvcid": "$NVMF_PORT", 00:18:41.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:41.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:41.988 "hdgst": ${hdgst:-false}, 00:18:41.988 "ddgst": ${ddgst:-false} 00:18:41.988 }, 00:18:41.988 "method": "bdev_nvme_attach_controller" 00:18:41.988 } 00:18:41.988 EOF 00:18:41.988 )") 00:18:41.989 11:42:12 -- nvmf/common.sh@543 -- # cat 00:18:41.989 11:42:12 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:41.989 11:42:12 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:41.989 { 00:18:41.989 "params": { 00:18:41.989 "name": "Nvme$subsystem", 00:18:41.989 "trtype": "$TEST_TRANSPORT", 00:18:41.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:41.989 "adrfam": "ipv4", 00:18:41.989 "trsvcid": "$NVMF_PORT", 00:18:41.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:41.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:41.989 "hdgst": ${hdgst:-false}, 00:18:41.989 "ddgst": ${ddgst:-false} 00:18:41.989 }, 00:18:41.989 "method": "bdev_nvme_attach_controller" 00:18:41.989 } 00:18:41.989 EOF 00:18:41.989 )") 00:18:41.989 11:42:12 -- nvmf/common.sh@543 -- # cat 00:18:41.989 EAL: No free 2048 kB hugepages reported on node 1 00:18:41.989 11:42:12 -- nvmf/common.sh@545 -- # jq . 00:18:41.989 11:42:12 -- nvmf/common.sh@546 -- # IFS=, 00:18:41.989 11:42:12 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:41.989 "params": { 00:18:41.989 "name": "Nvme1", 00:18:41.989 "trtype": "rdma", 00:18:41.989 "traddr": "192.168.100.8", 00:18:41.989 "adrfam": "ipv4", 00:18:41.989 "trsvcid": "4420", 00:18:41.989 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.989 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:41.989 "hdgst": false, 00:18:41.989 "ddgst": false 00:18:41.989 }, 00:18:41.989 "method": "bdev_nvme_attach_controller" 00:18:41.989 },{ 00:18:41.989 "params": { 00:18:41.989 "name": "Nvme2", 00:18:41.989 "trtype": "rdma", 00:18:41.989 "traddr": "192.168.100.8", 00:18:41.989 "adrfam": "ipv4", 00:18:41.989 "trsvcid": "4420", 00:18:41.989 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:41.989 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:41.989 "hdgst": false, 00:18:41.989 "ddgst": false 00:18:41.989 }, 00:18:41.989 "method": "bdev_nvme_attach_controller" 00:18:41.989 },{ 00:18:41.989 "params": { 00:18:41.989 "name": "Nvme3", 00:18:41.989 "trtype": "rdma", 00:18:41.989 "traddr": "192.168.100.8", 00:18:41.989 "adrfam": "ipv4", 00:18:41.989 "trsvcid": "4420", 00:18:41.989 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:18:41.989 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:18:41.989 "hdgst": false, 00:18:41.989 "ddgst": false 00:18:41.989 }, 00:18:41.989 "method": "bdev_nvme_attach_controller" 00:18:41.989 },{ 00:18:41.989 "params": { 00:18:41.989 "name": "Nvme4", 00:18:41.989 "trtype": "rdma", 00:18:41.989 "traddr": "192.168.100.8", 00:18:41.989 "adrfam": "ipv4", 00:18:41.989 "trsvcid": "4420", 00:18:41.989 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:18:41.989 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:18:41.989 "hdgst": false, 00:18:41.989 "ddgst": false 00:18:41.989 }, 00:18:41.989 "method": 
"bdev_nvme_attach_controller" 00:18:41.989 },{ 00:18:41.989 "params": { 00:18:41.989 "name": "Nvme5", 00:18:41.989 "trtype": "rdma", 00:18:41.989 "traddr": "192.168.100.8", 00:18:41.989 "adrfam": "ipv4", 00:18:41.989 "trsvcid": "4420", 00:18:41.989 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:18:41.989 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:18:41.989 "hdgst": false, 00:18:41.989 "ddgst": false 00:18:41.989 }, 00:18:41.989 "method": "bdev_nvme_attach_controller" 00:18:41.989 },{ 00:18:41.989 "params": { 00:18:41.989 "name": "Nvme6", 00:18:41.989 "trtype": "rdma", 00:18:41.989 "traddr": "192.168.100.8", 00:18:41.989 "adrfam": "ipv4", 00:18:41.989 "trsvcid": "4420", 00:18:41.989 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:18:41.989 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:18:41.989 "hdgst": false, 00:18:41.989 "ddgst": false 00:18:41.989 }, 00:18:41.989 "method": "bdev_nvme_attach_controller" 00:18:41.989 },{ 00:18:41.989 "params": { 00:18:41.989 "name": "Nvme7", 00:18:41.989 "trtype": "rdma", 00:18:41.989 "traddr": "192.168.100.8", 00:18:41.989 "adrfam": "ipv4", 00:18:41.989 "trsvcid": "4420", 00:18:41.989 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:18:41.989 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:18:41.989 "hdgst": false, 00:18:41.989 "ddgst": false 00:18:41.989 }, 00:18:41.989 "method": "bdev_nvme_attach_controller" 00:18:41.989 },{ 00:18:41.989 "params": { 00:18:41.989 "name": "Nvme8", 00:18:41.989 "trtype": "rdma", 00:18:41.989 "traddr": "192.168.100.8", 00:18:41.989 "adrfam": "ipv4", 00:18:41.989 "trsvcid": "4420", 00:18:41.989 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:18:41.989 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:18:41.989 "hdgst": false, 00:18:41.989 "ddgst": false 00:18:41.989 }, 00:18:41.989 "method": "bdev_nvme_attach_controller" 00:18:41.989 },{ 00:18:41.989 "params": { 00:18:41.989 "name": "Nvme9", 00:18:41.989 "trtype": "rdma", 00:18:41.989 "traddr": "192.168.100.8", 00:18:41.989 "adrfam": "ipv4", 00:18:41.989 "trsvcid": "4420", 00:18:41.989 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:18:41.989 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:18:41.989 "hdgst": false, 00:18:41.989 "ddgst": false 00:18:41.989 }, 00:18:41.989 "method": "bdev_nvme_attach_controller" 00:18:41.989 },{ 00:18:41.989 "params": { 00:18:41.989 "name": "Nvme10", 00:18:41.989 "trtype": "rdma", 00:18:41.989 "traddr": "192.168.100.8", 00:18:41.989 "adrfam": "ipv4", 00:18:41.989 "trsvcid": "4420", 00:18:41.989 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:18:41.989 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:18:41.989 "hdgst": false, 00:18:41.989 "ddgst": false 00:18:41.989 }, 00:18:41.989 "method": "bdev_nvme_attach_controller" 00:18:41.989 }' 00:18:41.989 [2024-05-15 11:42:12.737334] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.249 [2024-05-15 11:42:12.821760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.231 Running I/O for 10 seconds... 
00:18:43.232 11:42:13 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:43.232 11:42:13 -- common/autotest_common.sh@860 -- # return 0 00:18:43.232 11:42:13 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:18:43.232 11:42:13 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.232 11:42:13 -- common/autotest_common.sh@10 -- # set +x 00:18:43.232 11:42:13 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.232 11:42:13 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:43.232 11:42:13 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:18:43.232 11:42:13 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:18:43.232 11:42:13 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:18:43.232 11:42:13 -- target/shutdown.sh@57 -- # local ret=1 00:18:43.232 11:42:13 -- target/shutdown.sh@58 -- # local i 00:18:43.232 11:42:13 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:18:43.232 11:42:13 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:18:43.232 11:42:13 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:18:43.232 11:42:13 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:18:43.232 11:42:13 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.232 11:42:13 -- common/autotest_common.sh@10 -- # set +x 00:18:43.491 11:42:14 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.491 11:42:14 -- target/shutdown.sh@60 -- # read_io_count=19 00:18:43.491 11:42:14 -- target/shutdown.sh@63 -- # '[' 19 -ge 100 ']' 00:18:43.491 11:42:14 -- target/shutdown.sh@67 -- # sleep 0.25 00:18:43.750 11:42:14 -- target/shutdown.sh@59 -- # (( i-- )) 00:18:43.750 11:42:14 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:18:43.750 11:42:14 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:18:43.750 11:42:14 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:18:43.750 11:42:14 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.750 11:42:14 -- common/autotest_common.sh@10 -- # set +x 00:18:43.750 11:42:14 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.750 11:42:14 -- target/shutdown.sh@60 -- # read_io_count=171 00:18:43.750 11:42:14 -- target/shutdown.sh@63 -- # '[' 171 -ge 100 ']' 00:18:43.750 11:42:14 -- target/shutdown.sh@64 -- # ret=0 00:18:43.750 11:42:14 -- target/shutdown.sh@65 -- # break 00:18:43.750 11:42:14 -- target/shutdown.sh@69 -- # return 0 00:18:43.750 11:42:14 -- target/shutdown.sh@135 -- # killprocess 3060297 00:18:43.750 11:42:14 -- common/autotest_common.sh@946 -- # '[' -z 3060297 ']' 00:18:43.750 11:42:14 -- common/autotest_common.sh@950 -- # kill -0 3060297 00:18:43.750 11:42:14 -- common/autotest_common.sh@951 -- # uname 00:18:43.750 11:42:14 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:43.750 11:42:14 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3060297 00:18:43.750 11:42:14 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:43.750 11:42:14 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:43.750 11:42:14 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3060297' 00:18:43.750 killing process with pid 3060297 00:18:43.750 11:42:14 -- common/autotest_common.sh@965 -- # kill 3060297 00:18:43.750 [2024-05-15 11:42:14.494879] app.c: 937:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:43.750 11:42:14 -- common/autotest_common.sh@970 -- # wait 3060297 00:18:44.009 [2024-05-15 11:42:14.524201] rdma.c: 845:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 3 00:18:44.009 [2024-05-15 11:42:14.615487] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:18:44.578 11:42:15 -- target/shutdown.sh@136 -- # nvmfpid= 00:18:44.578 11:42:15 -- target/shutdown.sh@139 -- # sleep 1 00:18:44.840 [2024-05-15 11:42:15.537772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.840 [2024-05-15 11:42:15.537815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:26a7b290 sqhd:0000 p:0 m:0 dnr:0 00:18:44.840 [2024-05-15 11:42:15.537845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.840 [2024-05-15 11:42:15.537855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:26a7b290 sqhd:0000 p:0 m:0 dnr:0 00:18:44.840 [2024-05-15 11:42:15.537865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.840 [2024-05-15 11:42:15.537875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:26a7b290 sqhd:0000 p:0 m:0 dnr:0 00:18:44.840 [2024-05-15 11:42:15.537885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.840 [2024-05-15 11:42:15.537894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:26a7b290 sqhd:0000 p:0 m:0 dnr:0 00:18:44.840 [2024-05-15 11:42:15.540209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:44.840 [2024-05-15 11:42:15.540255] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
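The polling traced above is target/shutdown.sh's waitforio: it gives the bdev up to ten 0.25 s intervals (i starts at 10 and counts down) to accumulate at least 100 completed reads over the bdevperf RPC socket -- 19 on the first sample, 171 on the second here -- after which killprocess tears down the target app (pid 3060297). A minimal sketch of the same loop, assuming scripts/rpc.py from the SPDK tree is on the path (rpc_cmd in the trace is a wrapper around it):

waitforio() {
    local sock=$1 bdev=$2
    local i=10 count
    while ((i-- > 0)); do
        # Same query as the trace: read-op count for one bdev over the socket.
        count=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        # >=100 completed reads means I/O is actually flowing; stop polling.
        ((count >= 100)) && return 0
        sleep 0.25
    done
    return 1   # bdev never produced enough I/O within the budget
}

# e.g. waitforio /var/tmp/bdevperf.sock Nvme1n1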
00:18:44.840 [2024-05-15 11:42:15.540330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.840 [2024-05-15 11:42:15.540341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:19538 cdw0:26a7b290 sqhd:e000 p:0 m:0 dnr:0 00:18:44.840 [2024-05-15 11:42:15.540352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.840 [2024-05-15 11:42:15.540361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:19538 cdw0:26a7b290 sqhd:e000 p:0 m:0 dnr:0 00:18:44.840 [2024-05-15 11:42:15.540371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.840 [2024-05-15 11:42:15.540380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:19538 cdw0:26a7b290 sqhd:e000 p:0 m:0 dnr:0 00:18:44.840 [2024-05-15 11:42:15.540390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.840 [2024-05-15 11:42:15.540400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:19538 cdw0:26a7b290 sqhd:e000 p:0 m:0 dnr:0 00:18:44.840 [2024-05-15 11:42:15.542234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:44.840 [2024-05-15 11:42:15.542278] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:18:44.840 [2024-05-15 11:42:15.542331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.840 [2024-05-15 11:42:15.542365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:19538 cdw0:26a7b290 sqhd:e000 p:0 m:0 dnr:0 00:18:44.840 [2024-05-15 11:42:15.542398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.840 [2024-05-15 11:42:15.542439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:19538 cdw0:26a7b290 sqhd:e000 p:0 m:0 dnr:0 00:18:44.840 [2024-05-15 11:42:15.542472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.840 [2024-05-15 11:42:15.542504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:19538 cdw0:26a7b290 sqhd:e000 p:0 m:0 dnr:0 00:18:44.840 [2024-05-15 11:42:15.542537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.840 [2024-05-15 11:42:15.542568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:19538 cdw0:26a7b290 sqhd:e000 p:0 m:0 dnr:0 00:18:44.840 [2024-05-15 11:42:15.545382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:44.840 [2024-05-15 11:42:15.545423] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:18:44.840 [2024-05-15 11:42:15.545471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.840 [2024-05-15 11:42:15.545504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:19538 cdw0:26a7b290 sqhd:e000 p:0 m:0 dnr:0 00:18:44.841 [2024-05-15 11:42:15.545538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.841 [2024-05-15 11:42:15.545569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:19538 cdw0:26a7b290 sqhd:e000 p:0 m:0 dnr:0 00:18:44.841 [2024-05-15 11:42:15.545602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.841 [2024-05-15 11:42:15.545633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:19538 cdw0:26a7b290 sqhd:e000 p:0 m:0 dnr:0 00:18:44.841 [2024-05-15 11:42:15.545665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.841 [2024-05-15 11:42:15.545696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:19538 cdw0:26a7b290 sqhd:e000 p:0 m:0 dnr:0 00:18:44.841 [2024-05-15 11:42:15.548218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:44.841 [2024-05-15 11:42:15.548258] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:44.841 [2024-05-15 11:42:15.548306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.841 [2024-05-15 11:42:15.548339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:19538 cdw0:26a7b290 sqhd:e000 p:0 m:0 dnr:0 00:18:44.841 [2024-05-15 11:42:15.548371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.841 [2024-05-15 11:42:15.548403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:19538 cdw0:26a7b290 sqhd:e000 p:0 m:0 dnr:0 00:18:44.841 [2024-05-15 11:42:15.548435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.841 [2024-05-15 11:42:15.548466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:19538 cdw0:26a7b290 sqhd:e000 p:0 m:0 dnr:0 00:18:44.841 [2024-05-15 11:42:15.548499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.841 [2024-05-15 11:42:15.548530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:19538 cdw0:26a7b290 sqhd:e000 p:0 m:0 dnr:0 00:18:44.841 [2024-05-15 11:42:15.550719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:44.841 [2024-05-15 11:42:15.550760] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:18:44.841 [2024-05-15 11:42:15.550809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.841 [2024-05-15 11:42:15.550841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:19538 cdw0:26a7b290 sqhd:e000 p:0 m:0 dnr:0 00:18:44.841 [2024-05-15 11:42:15.550874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.841 [2024-05-15 11:42:15.550906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:19538 cdw0:26a7b290 sqhd:e000 p:0 m:0 dnr:0 00:18:44.841 [2024-05-15 11:42:15.550938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.841 [2024-05-15 11:42:15.550970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:19538 cdw0:26a7b290 sqhd:e000 p:0 m:0 dnr:0 00:18:44.841 [2024-05-15 11:42:15.551002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.841 [2024-05-15 11:42:15.551033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:19538 cdw0:26a7b290 sqhd:e000 p:0 m:0 dnr:0 00:18:44.841 [2024-05-15 11:42:15.553101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:44.841 [2024-05-15 11:42:15.553141] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:18:44.841 [2024-05-15 11:42:15.553189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.841 [2024-05-15 11:42:15.553221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:19538 cdw0:26a7b290 sqhd:e000 p:0 m:0 dnr:0 00:18:44.841 [2024-05-15 11:42:15.553254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.841 [2024-05-15 11:42:15.553286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:19538 cdw0:26a7b290 sqhd:e000 p:0 m:0 dnr:0 00:18:44.841 [2024-05-15 11:42:15.553318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.841 [2024-05-15 11:42:15.553350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:19538 cdw0:26a7b290 sqhd:e000 p:0 m:0 dnr:0 00:18:44.841 [2024-05-15 11:42:15.553382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.841 [2024-05-15 11:42:15.553412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:19538 cdw0:26a7b290 sqhd:e000 p:0 m:0 dnr:0 00:18:44.841 [2024-05-15 11:42:15.555712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:44.841 [2024-05-15 11:42:15.555751] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:18:44.841 [2024-05-15 11:42:15.555797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.841 [2024-05-15 11:42:15.555830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:19538 cdw0:26a7b290 sqhd:e000 p:0 m:0 dnr:0 00:18:44.841 [2024-05-15 11:42:15.555863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.841 [2024-05-15 11:42:15.555894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:19538 cdw0:26a7b290 sqhd:e000 p:0 m:0 dnr:0 00:18:44.841 [2024-05-15 11:42:15.555934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.841 [2024-05-15 11:42:15.555965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:19538 cdw0:26a7b290 sqhd:e000 p:0 m:0 dnr:0 00:18:44.841 [2024-05-15 11:42:15.555998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.841 [2024-05-15 11:42:15.556029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:19538 cdw0:26a7b290 sqhd:e000 p:0 m:0 dnr:0 00:18:44.841 [2024-05-15 11:42:15.558621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:44.841 [2024-05-15 11:42:15.558662] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:18:44.841 [2024-05-15 11:42:15.558710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.841 [2024-05-15 11:42:15.558743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:19538 cdw0:26a7b290 sqhd:e000 p:0 m:0 dnr:0 00:18:44.841 [2024-05-15 11:42:15.558776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.841 [2024-05-15 11:42:15.558807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:19538 cdw0:26a7b290 sqhd:e000 p:0 m:0 dnr:0 00:18:44.841 [2024-05-15 11:42:15.558840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.841 [2024-05-15 11:42:15.558871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:19538 cdw0:26a7b290 sqhd:e000 p:0 m:0 dnr:0 00:18:44.841 [2024-05-15 11:42:15.558903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.841 [2024-05-15 11:42:15.558934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:19538 cdw0:26a7b290 sqhd:e000 p:0 m:0 dnr:0 00:18:44.841 [2024-05-15 11:42:15.561436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:44.841 [2024-05-15 11:42:15.561476] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:18:44.841 [2024-05-15 11:42:15.561530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.841 [2024-05-15 11:42:15.561545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:19538 cdw0:26a7b290 sqhd:e000 p:0 m:0 dnr:0 00:18:44.841 [2024-05-15 11:42:15.561559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.841 [2024-05-15 11:42:15.561573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:19538 cdw0:26a7b290 sqhd:e000 p:0 m:0 dnr:0 00:18:44.841 [2024-05-15 11:42:15.561587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.841 [2024-05-15 11:42:15.561601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:19538 cdw0:26a7b290 sqhd:e000 p:0 m:0 dnr:0 00:18:44.841 [2024-05-15 11:42:15.561615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.841 [2024-05-15 11:42:15.561628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:19538 cdw0:26a7b290 sqhd:e000 p:0 m:0 dnr:0 00:18:44.842 [2024-05-15 11:42:15.563634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:44.842 [2024-05-15 11:42:15.563681] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:18:44.842 [2024-05-15 11:42:15.566051] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019257440 was disconnected and freed. reset controller. 00:18:44.842 [2024-05-15 11:42:15.566106] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:44.842 [2024-05-15 11:42:15.568571] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192571c0 was disconnected and freed. reset controller. 00:18:44.842 [2024-05-15 11:42:15.568614] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:44.842 [2024-05-15 11:42:15.571147] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256f40 was disconnected and freed. reset controller. 00:18:44.842 [2024-05-15 11:42:15.571189] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:44.842 [2024-05-15 11:42:15.573875] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256cc0 was disconnected and freed. reset controller. 00:18:44.842 [2024-05-15 11:42:15.573917] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:44.842 [2024-05-15 11:42:15.576292] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256a40 was disconnected and freed. reset controller. 00:18:44.842 [2024-05-15 11:42:15.576311] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:44.842 [2024-05-15 11:42:15.578482] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192567c0 was disconnected and freed. reset controller. 
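From here the pattern repeats once per controller: the admin qpair hits CQ transport error -6 (the RDMA device vanished with the killed target), its four outstanding ASYNC EVENT REQUESTs are aborted with SQ DELETION, nvme_ctrlr_fail marks cnode1 through cnode10 failed, and each I/O qpair is disconnected and freed while the pending reset suppresses any further failover attempt. When triaging a log like this, two greps over the captured output summarize the damage (bdevperf.log is a stand-in filename, not one produced by this job):

# Which subsystems entered the failed state, and how often:
grep -o 'nqn\.2016-06\.io\.spdk:cnode[0-9]*] in failed state' bdevperf.log | sort | uniq -c
# How many qpairs were torn down on the host side:
grep -c 'was disconnected and freed\. reset controller' bdevperf.log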
00:18:44.842 [2024-05-15 11:42:15.578524] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:44.842 [2024-05-15 11:42:15.580843] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256540 was disconnected and freed. reset controller. 00:18:44.842 [2024-05-15 11:42:15.580884] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:44.842 [2024-05-15 11:42:15.583268] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192562c0 was disconnected and freed. reset controller. 00:18:44.842 [2024-05-15 11:42:15.583311] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:44.842 [2024-05-15 11:42:15.583486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0dfd80 len:0x10000 key:0x183000 00:18:44.842 [2024-05-15 11:42:15.583524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.842 [2024-05-15 11:42:15.583581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0cfd00 len:0x10000 key:0x183000 00:18:44.842 [2024-05-15 11:42:15.583596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.842 [2024-05-15 11:42:15.583616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0bfc80 len:0x10000 key:0x183000 00:18:44.842 [2024-05-15 11:42:15.583630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.842 [2024-05-15 11:42:15.583649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0afc00 len:0x10000 key:0x183000 00:18:44.842 [2024-05-15 11:42:15.583663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.842 [2024-05-15 11:42:15.583682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b09fb80 len:0x10000 key:0x183000 00:18:44.842 [2024-05-15 11:42:15.583700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.842 [2024-05-15 11:42:15.583719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b08fb00 len:0x10000 key:0x183000 00:18:44.842 [2024-05-15 11:42:15.583733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.842 [2024-05-15 11:42:15.583752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b07fa80 len:0x10000 key:0x183000 00:18:44.842 [2024-05-15 11:42:15.583766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 
dnr:0 00:18:44.842 [2024-05-15 11:42:15.583785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b06fa00 len:0x10000 key:0x183000 00:18:44.842 [2024-05-15 11:42:15.583799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.842 [2024-05-15 11:42:15.583818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b05f980 len:0x10000 key:0x183000 00:18:44.842 [2024-05-15 11:42:15.583832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.842 [2024-05-15 11:42:15.583852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b04f900 len:0x10000 key:0x183000 00:18:44.842 [2024-05-15 11:42:15.583866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.842 [2024-05-15 11:42:15.583885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b03f880 len:0x10000 key:0x183000 00:18:44.842 [2024-05-15 11:42:15.583899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.842 [2024-05-15 11:42:15.583918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b02f800 len:0x10000 key:0x183000 00:18:44.842 [2024-05-15 11:42:15.583932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.842 [2024-05-15 11:42:15.583951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b01f780 len:0x10000 key:0x183000 00:18:44.842 [2024-05-15 11:42:15.583965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.842 [2024-05-15 11:42:15.583983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b00f700 len:0x10000 key:0x183000 00:18:44.842 [2024-05-15 11:42:15.583998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.842 [2024-05-15 11:42:15.584016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aedf780 len:0x10000 key:0x183c00 00:18:44.842 [2024-05-15 11:42:15.584030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.842 [2024-05-15 11:42:15.584049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aecf700 len:0x10000 key:0x183c00 00:18:44.842 [2024-05-15 11:42:15.584071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 
00:18:44.842 [2024-05-15 11:42:15.584090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aebf680 len:0x10000 key:0x183c00 00:18:44.842 [2024-05-15 11:42:15.584104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.842 [2024-05-15 11:42:15.584123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aeaf600 len:0x10000 key:0x183c00 00:18:44.842 [2024-05-15 11:42:15.584137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.842 [2024-05-15 11:42:15.584156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae9f580 len:0x10000 key:0x183c00 00:18:44.842 [2024-05-15 11:42:15.584170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.842 [2024-05-15 11:42:15.584205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae8f500 len:0x10000 key:0x183c00 00:18:44.842 [2024-05-15 11:42:15.584219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.842 [2024-05-15 11:42:15.584238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae7f480 len:0x10000 key:0x183c00 00:18:44.842 [2024-05-15 11:42:15.584252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.842 [2024-05-15 11:42:15.584271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae6f400 len:0x10000 key:0x183c00 00:18:44.842 [2024-05-15 11:42:15.584285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.842 [2024-05-15 11:42:15.584304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae5f380 len:0x10000 key:0x183c00 00:18:44.842 [2024-05-15 11:42:15.584320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.842 [2024-05-15 11:42:15.584339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae4f300 len:0x10000 key:0x183c00 00:18:44.842 [2024-05-15 11:42:15.584353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.842 [2024-05-15 11:42:15.584372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae3f280 len:0x10000 key:0x183c00 00:18:44.842 [2024-05-15 11:42:15.584386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 
00:18:44.842 [2024-05-15 11:42:15.584405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae2f200 len:0x10000 key:0x183c00 00:18:44.842 [2024-05-15 11:42:15.584419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.842 [2024-05-15 11:42:15.584438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae1f180 len:0x10000 key:0x183c00 00:18:44.842 [2024-05-15 11:42:15.584453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.842 [2024-05-15 11:42:15.584472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae0f100 len:0x10000 key:0x183c00 00:18:44.843 [2024-05-15 11:42:15.584486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.843 [2024-05-15 11:42:15.584505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3f0000 len:0x10000 key:0x183100 00:18:44.843 [2024-05-15 11:42:15.584519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.843 [2024-05-15 11:42:15.584538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3dff80 len:0x10000 key:0x183100 00:18:44.843 [2024-05-15 11:42:15.584552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.843 [2024-05-15 11:42:15.584570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3cff00 len:0x10000 key:0x183100 00:18:44.843 [2024-05-15 11:42:15.584584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.843 [2024-05-15 11:42:15.584603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3bfe80 len:0x10000 key:0x183100 00:18:44.843 [2024-05-15 11:42:15.584617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.843 [2024-05-15 11:42:15.584636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3afe00 len:0x10000 key:0x183100 00:18:44.843 [2024-05-15 11:42:15.584650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.843 [2024-05-15 11:42:15.584669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b39fd80 len:0x10000 key:0x183100 00:18:44.843 [2024-05-15 11:42:15.584683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 
00:18:44.843 [2024-05-15 11:42:15.584701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b38fd00 len:0x10000 key:0x183100 00:18:44.843 [2024-05-15 11:42:15.584715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.843 [2024-05-15 11:42:15.584734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b37fc80 len:0x10000 key:0x183100 00:18:44.843 [2024-05-15 11:42:15.584748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.843 [2024-05-15 11:42:15.584767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b36fc00 len:0x10000 key:0x183100 00:18:44.843 [2024-05-15 11:42:15.584781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.843 [2024-05-15 11:42:15.584799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b35fb80 len:0x10000 key:0x183100 00:18:44.843 [2024-05-15 11:42:15.584813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.843 [2024-05-15 11:42:15.584835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b34fb00 len:0x10000 key:0x183100 00:18:44.843 [2024-05-15 11:42:15.584849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.843 [2024-05-15 11:42:15.584868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b33fa80 len:0x10000 key:0x183100 00:18:44.843 [2024-05-15 11:42:15.584882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.843 [2024-05-15 11:42:15.584901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b32fa00 len:0x10000 key:0x183100 00:18:44.843 [2024-05-15 11:42:15.584915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.843 [2024-05-15 11:42:15.584934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b31f980 len:0x10000 key:0x183100 00:18:44.843 [2024-05-15 11:42:15.584948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.843 [2024-05-15 11:42:15.584967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b30f900 len:0x10000 key:0x183100 00:18:44.843 [2024-05-15 11:42:15.584980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 
00:18:44.843 [2024-05-15 11:42:15.584999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ff880 len:0x10000 key:0x183100 00:18:44.843 [2024-05-15 11:42:15.585013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.843 [2024-05-15 11:42:15.585032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ef800 len:0x10000 key:0x183100 00:18:44.843 [2024-05-15 11:42:15.585047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.843 [2024-05-15 11:42:15.585099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2df780 len:0x10000 key:0x183100 00:18:44.843 [2024-05-15 11:42:15.585114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.843 [2024-05-15 11:42:15.585133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2cf700 len:0x10000 key:0x183100 00:18:44.843 [2024-05-15 11:42:15.585147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.843 [2024-05-15 11:42:15.585167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2bf680 len:0x10000 key:0x183100 00:18:44.843 [2024-05-15 11:42:15.585181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.843 [2024-05-15 11:42:15.585200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2af600 len:0x10000 key:0x183100 00:18:44.843 [2024-05-15 11:42:15.585214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.843 [2024-05-15 11:42:15.585235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b29f580 len:0x10000 key:0x183100 00:18:44.843 [2024-05-15 11:42:15.585249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.843 [2024-05-15 11:42:15.585268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b28f500 len:0x10000 key:0x183100 00:18:44.843 [2024-05-15 11:42:15.585282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.843 [2024-05-15 11:42:15.585301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b27f480 len:0x10000 key:0x183100 00:18:44.843 [2024-05-15 11:42:15.585316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 
00:18:44.843 [2024-05-15 11:42:15.585335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b26f400 len:0x10000 key:0x183100 00:18:44.843 [2024-05-15 11:42:15.585349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.843 [2024-05-15 11:42:15.585368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b25f380 len:0x10000 key:0x183100 00:18:44.843 [2024-05-15 11:42:15.585382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.843 [2024-05-15 11:42:15.585401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b24f300 len:0x10000 key:0x183100 00:18:44.843 [2024-05-15 11:42:15.585415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.843 [2024-05-15 11:42:15.585434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b23f280 len:0x10000 key:0x183100 00:18:44.843 [2024-05-15 11:42:15.585448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.843 [2024-05-15 11:42:15.585467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b22f200 len:0x10000 key:0x183100 00:18:44.843 [2024-05-15 11:42:15.585481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.844 [2024-05-15 11:42:15.585500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b21f180 len:0x10000 key:0x183100 00:18:44.844 [2024-05-15 11:42:15.585514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.844 [2024-05-15 11:42:15.585533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b20f100 len:0x10000 key:0x183100 00:18:44.844 [2024-05-15 11:42:15.585547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.844 [2024-05-15 11:42:15.585566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5f0000 len:0x10000 key:0x183b00 00:18:44.844 [2024-05-15 11:42:15.585580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.844 [2024-05-15 11:42:15.585604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5dff80 len:0x10000 key:0x183b00 00:18:44.844 [2024-05-15 11:42:15.585618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 
00:18:44.844 [2024-05-15 11:42:15.585637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5cff00 len:0x10000 key:0x183b00 00:18:44.844 [2024-05-15 11:42:15.585651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.844 [2024-05-15 11:42:15.585671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5bfe80 len:0x10000 key:0x183b00 00:18:44.844 [2024-05-15 11:42:15.585685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.844 [2024-05-15 11:42:15.585704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0efe00 len:0x10000 key:0x183000 00:18:44.844 [2024-05-15 11:42:15.585718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32532 cdw0:26a7afe0 sqhd:a7e7 p:0 m:0 dnr:0 00:18:44.844 [2024-05-15 11:42:15.588641] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256040 was disconnected and freed. reset controller. 00:18:44.844 [2024-05-15 11:42:15.588662] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:44.844 [2024-05-15 11:42:15.588683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b49fb80 len:0x10000 key:0x183b00 00:18:44.844 [2024-05-15 11:42:15.588697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.844 [2024-05-15 11:42:15.588729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b48fb00 len:0x10000 key:0x183b00 00:18:44.844 [2024-05-15 11:42:15.588745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.844 [2024-05-15 11:42:15.588764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b47fa80 len:0x10000 key:0x183b00 00:18:44.844 [2024-05-15 11:42:15.588779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.844 [2024-05-15 11:42:15.588798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b46fa00 len:0x10000 key:0x183b00 00:18:44.844 [2024-05-15 11:42:15.588812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.844 [2024-05-15 11:42:15.588832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b45f980 len:0x10000 key:0x183b00 00:18:44.844 [2024-05-15 11:42:15.588846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.844 [2024-05-15 11:42:15.588865] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b44f900 len:0x10000 key:0x183b00 00:18:44.844 [2024-05-15 11:42:15.588879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.844 [2024-05-15 11:42:15.588898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b43f880 len:0x10000 key:0x183b00 00:18:44.844 [2024-05-15 11:42:15.588915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.844 [2024-05-15 11:42:15.588934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b42f800 len:0x10000 key:0x183b00 00:18:44.844 [2024-05-15 11:42:15.588950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.844 [2024-05-15 11:42:15.588969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b41f780 len:0x10000 key:0x183b00 00:18:44.844 [2024-05-15 11:42:15.588983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.844 [2024-05-15 11:42:15.589002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b40f700 len:0x10000 key:0x183b00 00:18:44.844 [2024-05-15 11:42:15.589017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.844 [2024-05-15 11:42:15.589035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7f0000 len:0x10000 key:0x183400 00:18:44.844 [2024-05-15 11:42:15.589049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.844 [2024-05-15 11:42:15.589098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7dff80 len:0x10000 key:0x183400 00:18:44.844 [2024-05-15 11:42:15.589113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.844 [2024-05-15 11:42:15.589131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7cff00 len:0x10000 key:0x183400 00:18:44.844 [2024-05-15 11:42:15.589146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.844 [2024-05-15 11:42:15.589165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7bfe80 len:0x10000 key:0x183400 00:18:44.844 [2024-05-15 11:42:15.589179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.844 [2024-05-15 11:42:15.589198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 
lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7afe00 len:0x10000 key:0x183400 00:18:44.844 [2024-05-15 11:42:15.589212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.844 [2024-05-15 11:42:15.589231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b79fd80 len:0x10000 key:0x183400 00:18:44.844 [2024-05-15 11:42:15.589245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.844 [2024-05-15 11:42:15.589265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b78fd00 len:0x10000 key:0x183400 00:18:44.844 [2024-05-15 11:42:15.589279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.844 [2024-05-15 11:42:15.589298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b77fc80 len:0x10000 key:0x183400 00:18:44.844 [2024-05-15 11:42:15.589315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.844 [2024-05-15 11:42:15.589334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b76fc00 len:0x10000 key:0x183400 00:18:44.844 [2024-05-15 11:42:15.589348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.844 [2024-05-15 11:42:15.589367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b75fb80 len:0x10000 key:0x183400 00:18:44.844 [2024-05-15 11:42:15.589381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.844 [2024-05-15 11:42:15.589400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b74fb00 len:0x10000 key:0x183400 00:18:44.844 [2024-05-15 11:42:15.589414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.844 [2024-05-15 11:42:15.589433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b73fa80 len:0x10000 key:0x183400 00:18:44.844 [2024-05-15 11:42:15.589447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.844 [2024-05-15 11:42:15.589466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b72fa00 len:0x10000 key:0x183400 00:18:44.844 [2024-05-15 11:42:15.589480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.844 [2024-05-15 11:42:15.589499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x20001b71f980 len:0x10000 key:0x183400 00:18:44.844 [2024-05-15 11:42:15.589514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.844 [2024-05-15 11:42:15.589533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b70f900 len:0x10000 key:0x183400 00:18:44.844 [2024-05-15 11:42:15.589547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.844 [2024-05-15 11:42:15.589566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ff880 len:0x10000 key:0x183400 00:18:44.844 [2024-05-15 11:42:15.589580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.844 [2024-05-15 11:42:15.589598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ef800 len:0x10000 key:0x183400 00:18:44.844 [2024-05-15 11:42:15.589612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.845 [2024-05-15 11:42:15.589631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6df780 len:0x10000 key:0x183400 00:18:44.845 [2024-05-15 11:42:15.589645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.845 [2024-05-15 11:42:15.589664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6cf700 len:0x10000 key:0x183400 00:18:44.845 [2024-05-15 11:42:15.589678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.845 [2024-05-15 11:42:15.589699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6bf680 len:0x10000 key:0x183400 00:18:44.845 [2024-05-15 11:42:15.589713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.845 [2024-05-15 11:42:15.589732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6af600 len:0x10000 key:0x183400 00:18:44.845 [2024-05-15 11:42:15.589746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.845 [2024-05-15 11:42:15.589765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b69f580 len:0x10000 key:0x183400 00:18:44.845 [2024-05-15 11:42:15.589779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.845 [2024-05-15 11:42:15.589798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b68f500 len:0x10000 
key:0x183400 00:18:44.845 [2024-05-15 11:42:15.589812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.845 [2024-05-15 11:42:15.589831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b67f480 len:0x10000 key:0x183400 00:18:44.845 [2024-05-15 11:42:15.589845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.845 [2024-05-15 11:42:15.589864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b66f400 len:0x10000 key:0x183400 00:18:44.845 [2024-05-15 11:42:15.589878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.845 [2024-05-15 11:42:15.589897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b65f380 len:0x10000 key:0x183400 00:18:44.845 [2024-05-15 11:42:15.589911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.845 [2024-05-15 11:42:15.589930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b64f300 len:0x10000 key:0x183400 00:18:44.845 [2024-05-15 11:42:15.589944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.845 [2024-05-15 11:42:15.589963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b63f280 len:0x10000 key:0x183400 00:18:44.845 [2024-05-15 11:42:15.589977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.845 [2024-05-15 11:42:15.589996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b62f200 len:0x10000 key:0x183400 00:18:44.845 [2024-05-15 11:42:15.590010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.845 [2024-05-15 11:42:15.590029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b61f180 len:0x10000 key:0x183400 00:18:44.845 [2024-05-15 11:42:15.590043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.845 [2024-05-15 11:42:15.590069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b60f100 len:0x10000 key:0x183400 00:18:44.845 [2024-05-15 11:42:15.590084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.845 [2024-05-15 11:42:15.590103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9f0000 len:0x10000 key:0x184000 00:18:44.845 
[2024-05-15 11:42:15.590117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.845 [2024-05-15 11:42:15.590136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9dff80 len:0x10000 key:0x184000 00:18:44.845 [2024-05-15 11:42:15.590150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.845 [2024-05-15 11:42:15.590179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9cff00 len:0x10000 key:0x184000 00:18:44.845 [2024-05-15 11:42:15.590194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.845 [2024-05-15 11:42:15.590213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9bfe80 len:0x10000 key:0x184000 00:18:44.845 [2024-05-15 11:42:15.590227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.845 [2024-05-15 11:42:15.590246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9afe00 len:0x10000 key:0x184000 00:18:44.845 [2024-05-15 11:42:15.590260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.845 [2024-05-15 11:42:15.590280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b99fd80 len:0x10000 key:0x184000 00:18:44.845 [2024-05-15 11:42:15.590294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.845 [2024-05-15 11:42:15.590313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b98fd00 len:0x10000 key:0x184000 00:18:44.845 [2024-05-15 11:42:15.590328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.845 [2024-05-15 11:42:15.590347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b97fc80 len:0x10000 key:0x184000 00:18:44.845 [2024-05-15 11:42:15.590361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.845 [2024-05-15 11:42:15.590380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b96fc00 len:0x10000 key:0x184000 00:18:44.845 [2024-05-15 11:42:15.590395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.845 [2024-05-15 11:42:15.590414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b95fb80 len:0x10000 key:0x184000 00:18:44.845 [2024-05-15 11:42:15.590428] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.845 [2024-05-15 11:42:15.590447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b94fb00 len:0x10000 key:0x184000 00:18:44.845 [2024-05-15 11:42:15.590463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.845 [2024-05-15 11:42:15.590482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b93fa80 len:0x10000 key:0x184000 00:18:44.845 [2024-05-15 11:42:15.590496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.845 [2024-05-15 11:42:15.590516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b92fa00 len:0x10000 key:0x184000 00:18:44.845 [2024-05-15 11:42:15.590530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.845 [2024-05-15 11:42:15.590549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b91f980 len:0x10000 key:0x184000 00:18:44.845 [2024-05-15 11:42:15.590563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.845 [2024-05-15 11:42:15.590582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b90f900 len:0x10000 key:0x184000 00:18:44.845 [2024-05-15 11:42:15.590596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.846 [2024-05-15 11:42:15.590615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ff880 len:0x10000 key:0x184000 00:18:44.846 [2024-05-15 11:42:15.590629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.846 [2024-05-15 11:42:15.590648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ef800 len:0x10000 key:0x184000 00:18:44.846 [2024-05-15 11:42:15.590662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.846 [2024-05-15 11:42:15.590681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8df780 len:0x10000 key:0x184000 00:18:44.846 [2024-05-15 11:42:15.590695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.846 [2024-05-15 11:42:15.590714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8cf700 len:0x10000 key:0x184000 00:18:44.846 [2024-05-15 11:42:15.590728] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.846 [2024-05-15 11:42:15.590747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8bf680 len:0x10000 key:0x184000 00:18:44.846 [2024-05-15 11:42:15.590761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.846 [2024-05-15 11:42:15.590780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8af600 len:0x10000 key:0x184000 00:18:44.846 [2024-05-15 11:42:15.590794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.846 [2024-05-15 11:42:15.590813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b89f580 len:0x10000 key:0x184000 00:18:44.846 [2024-05-15 11:42:15.590828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:44.846 [2024-05-15 11:42:15.590848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4afc00 len:0x10000 key:0x183b00 00:18:44.846 [2024-05-15 11:42:15.590862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256040 sqhd:9140 p:0 m:0 dnr:0 00:18:45.105 [2024-05-15 11:42:15.610077] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019206c40 was disconnected and freed. reset controller. 00:18:45.105 [2024-05-15 11:42:15.610133] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:45.105 [2024-05-15 11:42:15.610292] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:45.105 [2024-05-15 11:42:15.610337] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:45.105 [2024-05-15 11:42:15.610380] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:45.105 [2024-05-15 11:42:15.610422] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:45.105 [2024-05-15 11:42:15.610462] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:45.105 [2024-05-15 11:42:15.610503] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:45.105 [2024-05-15 11:42:15.610544] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:45.105 [2024-05-15 11:42:15.610585] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:45.105 [2024-05-15 11:42:15.610625] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:45.105 [2024-05-15 11:42:15.610666] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
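
The flood of ABORTED - SQ DELETION completions above is the expected signature of this phase of the shutdown test: bdevperf still has queue-depth-64 verify WRITEs in flight (the -q 64 -o 65536 -w verify run killed further down) when each controller's submission queues are deleted, so every outstanding command completes with an abort status before the reset proceeds. A quick way to summarize a dump like this, as a sketch only; console.log is a placeholder name for a saved copy of this output:

    # count aborted completions per qpair in a saved console log (sketch)
    grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' console.log | sort | uniq -c | sort -rn
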
00:18:45.105 [2024-05-15 11:42:15.611316] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:45.105 [2024-05-15 11:42:15.611331] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:18:45.105 [2024-05-15 11:42:15.611342] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:18:45.105 [2024-05-15 11:42:15.611353] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:18:45.105 [2024-05-15 11:42:15.611866] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:18:45.105 [2024-05-15 11:42:15.611880] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:18:45.105 [2024-05-15 11:42:15.611890] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:18:45.105 [2024-05-15 11:42:15.611901] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:18:45.105 [2024-05-15 11:42:15.611911] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:18:45.105 [2024-05-15 11:42:15.611922] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:18:45.105 task offset: 37888 on job bdev=Nvme1n1 fails
00:18:45.105
00:18:45.105 Latency(us)
00:18:45.105 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:45.105 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:45.105 Job: Nvme1n1 ended in about 1.93 seconds with error
00:18:45.105 Verification LBA range: start 0x0 length 0x400
00:18:45.105 Nvme1n1 : 1.93 140.92 8.81 33.16 0.00 363529.46 6468.12 1043105.17
00:18:45.105 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:45.105 Job: Nvme2n1 ended in about 1.93 seconds with error
00:18:45.105 Verification LBA range: start 0x0 length 0x400
00:18:45.105 Nvme2n1 : 1.93 149.11 9.32 33.14 0.00 343987.38 6610.59 1043105.17
00:18:45.105 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:45.105 Job: Nvme3n1 ended in about 1.93 seconds with error
00:18:45.105 Verification LBA range: start 0x0 length 0x400
00:18:45.105 Nvme3n1 : 1.93 150.58 9.41 33.12 0.00 338373.97 15842.62 1043105.17
00:18:45.105 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:45.105 Job: Nvme4n1 ended in about 1.93 seconds with error
00:18:45.105 Verification LBA range: start 0x0 length 0x400
00:18:45.105 Nvme4n1 : 1.93 149.98 9.37 33.10 0.00 336611.47 24732.72 1043105.17
00:18:45.105 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:45.105 Job: Nvme5n1 ended in about 1.93 seconds with error
00:18:45.105 Verification LBA range: start 0x0 length 0x400
00:18:45.105 Nvme5n1 : 1.93 140.59 8.79 33.08 0.00 351996.84 32369.09 1035810.73
00:18:45.105 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:45.105 Job: Nvme6n1 ended in about 1.94 seconds with error
00:18:45.105 Verification LBA range: start 0x0 length 0x400
00:18:45.106 Nvme6n1 : 1.94 148.26 9.27 33.06 0.00 334345.14 37156.06 1035810.73
00:18:45.106 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:45.106 Job: Nvme7n1 ended in about 1.94 seconds with error
00:18:45.106 Verification LBA range: start 0x0 length 0x400
00:18:45.106 Nvme7n1 : 1.94 147.14 9.20 33.04 0.00 333356.58 46046.16 1035810.73
00:18:45.106 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:45.106 Job: Nvme8n1 ended in about 1.94 seconds with error
00:18:45.106 Verification LBA range: start 0x0 length 0x400
00:18:45.106 Nvme8n1 : 1.94 147.57 9.22 33.02 0.00 329593.81 52884.70 1028516.29
00:18:45.106 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:45.106 Job: Nvme9n1 ended in about 1.89 seconds with error
00:18:45.106 Verification LBA range: start 0x0 length 0x400
00:18:45.106 Nvme9n1 : 1.89 135.39 8.46 33.85 0.00 350861.22 70664.90 1086871.82
00:18:45.106 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:45.106 Job: Nvme10n1 ended in about 1.90 seconds with error
00:18:45.106 Verification LBA range: start 0x0 length 0x400
00:18:45.106 Nvme10n1 : 1.90 135.02 8.44 33.76 0.00 348615.90 66105.88 1072282.94
00:18:45.106 ===================================================================================================================
00:18:45.106 Total : 1444.55 90.28 332.32 0.00 342877.80 6468.12 1086871.82
00:18:45.106 [2024-05-15 11:42:15.656530] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:18:45.106 [2024-05-15 11:42:15.657779] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:18:45.106 [2024-05-15 11:42:15.657824] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:18:45.106 [2024-05-15 11:42:15.657852] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
00:18:45.106 [2024-05-15 11:42:15.657973] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:18:45.106 [2024-05-15 11:42:15.658008] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:18:45.106 [2024-05-15 11:42:15.658032] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e5300
00:18:45.106 [2024-05-15 11:42:15.658167] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:18:45.106 [2024-05-15 11:42:15.658201] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:18:45.106 [2024-05-15 11:42:15.658234] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d9c80
00:18:45.106 [2024-05-15 11:42:15.658342] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:18:45.106 [2024-05-15 11:42:15.658376] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:18:45.106 [2024-05-15 11:42:15.658400] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d2900
00:18:45.106 [2024-05-15 11:42:15.658660] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:18:45.106 [2024-05-15 11:42:15.658697] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:18:45.106 [2024-05-15
11:42:15.658722] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192bf1c0 00:18:45.106 [2024-05-15 11:42:15.658846] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:18:45.106 [2024-05-15 11:42:15.658880] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:18:45.106 [2024-05-15 11:42:15.658887] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c5040 00:18:45.106 [2024-05-15 11:42:15.658964] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:18:45.106 [2024-05-15 11:42:15.658975] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:18:45.106 [2024-05-15 11:42:15.658982] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001929b1c0 00:18:45.106 [2024-05-15 11:42:15.659046] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:18:45.106 [2024-05-15 11:42:15.659059] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:18:45.106 [2024-05-15 11:42:15.659067] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928e080 00:18:45.106 [2024-05-15 11:42:15.659127] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:18:45.106 [2024-05-15 11:42:15.659138] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:18:45.106 [2024-05-15 11:42:15.659146] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192a8500 00:18:45.106 [2024-05-15 11:42:15.659233] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:18:45.106 [2024-05-15 11:42:15.659243] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:18:45.106 [2024-05-15 11:42:15.659251] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c6340 00:18:45.364 11:42:16 -- target/shutdown.sh@142 -- # kill -9 3060540 00:18:45.364 11:42:16 -- target/shutdown.sh@144 -- # stoptarget 00:18:45.364 11:42:16 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:18:45.364 11:42:16 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:18:45.364 11:42:16 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:45.364 11:42:16 -- target/shutdown.sh@45 -- # nvmftestfini 00:18:45.364 11:42:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:45.364 11:42:16 -- nvmf/common.sh@117 -- # sync 00:18:45.364 11:42:16 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:18:45.364 11:42:16 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:18:45.364 11:42:16 -- nvmf/common.sh@120 -- # set +e 00:18:45.364 11:42:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:45.364 11:42:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:18:45.364 rmmod nvme_rdma 00:18:45.364 rmmod nvme_fabrics 
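
The RDMA_CM_EVENT_REJECTED / "RDMA connect error -74" triples above cover ten rqpairs and are part of the test design rather than an unexpected failure: the target side has been torn down mid-reset, which is the point of the shutdown test, so every reconnect attempt is refused at the CM level, and shutdown.sh line 142 then SIGKILLs the bdevperf initiator (pid 3060540, reported Killed just below). A minimal sketch of confirming the listener really is gone at that point, assuming nvme-cli and the 192.168.100.8:4420 target address these tests use:

    # should now fail: the target side was shut down above (sketch)
    nvme discover -t rdma -a 192.168.100.8 -s 4420 || echo 'listener gone, as the test intends'
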
00:18:45.623 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 121: 3060540 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10 00:18:45.623 11:42:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:45.623 11:42:16 -- nvmf/common.sh@124 -- # set -e 00:18:45.623 11:42:16 -- nvmf/common.sh@125 -- # return 0 00:18:45.623 11:42:16 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:18:45.623 11:42:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:45.623 11:42:16 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:18:45.623 00:18:45.623 real 0m5.443s 00:18:45.623 user 0m18.411s 00:18:45.623 sys 0m1.428s 00:18:45.623 11:42:16 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:45.623 11:42:16 -- common/autotest_common.sh@10 -- # set +x 00:18:45.623 ************************************ 00:18:45.623 END TEST nvmf_shutdown_tc3 00:18:45.623 ************************************ 00:18:45.623 11:42:16 -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:18:45.623 00:18:45.623 real 0m24.721s 00:18:45.623 user 1m13.257s 00:18:45.623 sys 0m8.808s 00:18:45.623 11:42:16 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:45.623 11:42:16 -- common/autotest_common.sh@10 -- # set +x 00:18:45.623 ************************************ 00:18:45.623 END TEST nvmf_shutdown 00:18:45.623 ************************************ 00:18:45.623 11:42:16 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:18:45.623 11:42:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:45.623 11:42:16 -- common/autotest_common.sh@10 -- # set +x 00:18:45.623 11:42:16 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:18:45.623 11:42:16 -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:45.623 11:42:16 -- common/autotest_common.sh@10 -- # set +x 00:18:45.623 11:42:16 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:18:45.623 11:42:16 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:18:45.623 11:42:16 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:45.623 11:42:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:45.623 11:42:16 -- common/autotest_common.sh@10 -- # set +x 00:18:45.623 ************************************ 00:18:45.623 START TEST nvmf_multicontroller 00:18:45.623 ************************************ 00:18:45.623 11:42:16 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:18:45.884 * Looking for test storage... 
00:18:45.884 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:45.884 11:42:16 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:45.884 11:42:16 -- nvmf/common.sh@7 -- # uname -s 00:18:45.884 11:42:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:45.884 11:42:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:45.884 11:42:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:45.884 11:42:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:45.884 11:42:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:45.884 11:42:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:45.884 11:42:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:45.884 11:42:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:45.884 11:42:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:45.884 11:42:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:45.884 11:42:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:18:45.884 11:42:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:18:45.884 11:42:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:45.884 11:42:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:45.884 11:42:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:45.884 11:42:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:45.884 11:42:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:45.884 11:42:16 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:45.884 11:42:16 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:45.884 11:42:16 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:45.884 11:42:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.884 11:42:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.884 11:42:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.884 11:42:16 -- paths/export.sh@5 -- # export PATH 00:18:45.884 11:42:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.884 11:42:16 -- nvmf/common.sh@47 -- # : 0 00:18:45.884 11:42:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:45.884 11:42:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:45.884 11:42:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:45.884 11:42:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:45.884 11:42:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:45.884 11:42:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:45.884 11:42:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:45.884 11:42:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:45.884 11:42:16 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:45.884 11:42:16 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:45.884 11:42:16 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:18:45.884 11:42:16 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:18:45.884 11:42:16 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:45.884 11:42:16 -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:18:45.884 11:42:16 -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:18:45.884 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
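
The exit 0 traced just below is multicontroller.sh bailing out on purpose. Reconstructed from the xtrace at script lines 18-20, the guard is roughly the following; the TEST_TRANSPORT variable name is an assumption, since the trace only shows the already-expanded value rdma:

    if [ "$TEST_TRANSPORT" == "rdma" ]; then
        echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
        exit 0
    fi
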
00:18:45.884 11:42:16 -- host/multicontroller.sh@20 -- # exit 0 00:18:45.884 00:18:45.884 real 0m0.131s 00:18:45.884 user 0m0.052s 00:18:45.884 sys 0m0.090s 00:18:45.884 11:42:16 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:45.884 11:42:16 -- common/autotest_common.sh@10 -- # set +x 00:18:45.884 ************************************ 00:18:45.884 END TEST nvmf_multicontroller 00:18:45.884 ************************************ 00:18:45.884 11:42:16 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:18:45.884 11:42:16 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:45.884 11:42:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:45.884 11:42:16 -- common/autotest_common.sh@10 -- # set +x 00:18:45.884 ************************************ 00:18:45.884 START TEST nvmf_aer 00:18:45.884 ************************************ 00:18:45.884 11:42:16 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:18:45.884 * Looking for test storage... 00:18:45.884 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:45.884 11:42:16 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:45.884 11:42:16 -- nvmf/common.sh@7 -- # uname -s 00:18:45.884 11:42:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:45.884 11:42:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:45.884 11:42:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:45.884 11:42:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:45.884 11:42:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:45.884 11:42:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:45.884 11:42:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:45.884 11:42:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:45.884 11:42:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:45.884 11:42:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:45.884 11:42:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:18:45.884 11:42:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:18:45.884 11:42:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:45.884 11:42:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:45.884 11:42:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:45.884 11:42:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:45.884 11:42:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:45.884 11:42:16 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:45.884 11:42:16 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:45.884 11:42:16 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:45.884 11:42:16 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.884 11:42:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.884 11:42:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.884 11:42:16 -- paths/export.sh@5 -- # export PATH 00:18:45.884 11:42:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.884 11:42:16 -- nvmf/common.sh@47 -- # : 0 00:18:45.884 11:42:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:45.884 11:42:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:45.884 11:42:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:45.885 11:42:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:45.885 11:42:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:45.885 11:42:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:45.885 11:42:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:45.885 11:42:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:45.885 11:42:16 -- host/aer.sh@11 -- # nvmftestinit 00:18:45.885 11:42:16 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:18:45.885 11:42:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:45.885 11:42:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:45.885 11:42:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:45.885 11:42:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:45.885 11:42:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.885 11:42:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:45.885 11:42:16 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.144 11:42:16 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:46.144 11:42:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:46.144 11:42:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:46.144 11:42:16 -- common/autotest_common.sh@10 -- # set +x 00:18:51.416 11:42:21 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:51.416 11:42:21 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:51.416 11:42:21 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:51.416 11:42:21 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:51.416 11:42:21 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:51.416 11:42:21 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:51.416 11:42:21 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:51.416 11:42:21 -- nvmf/common.sh@295 -- # net_devs=() 00:18:51.416 11:42:21 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:51.416 11:42:21 -- nvmf/common.sh@296 -- # e810=() 00:18:51.416 11:42:21 -- nvmf/common.sh@296 -- # local -ga e810 00:18:51.416 11:42:21 -- nvmf/common.sh@297 -- # x722=() 00:18:51.416 11:42:21 -- nvmf/common.sh@297 -- # local -ga x722 00:18:51.416 11:42:21 -- nvmf/common.sh@298 -- # mlx=() 00:18:51.416 11:42:21 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:51.416 11:42:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:51.416 11:42:21 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:51.416 11:42:21 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:51.416 11:42:21 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:51.416 11:42:21 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:51.416 11:42:21 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:51.416 11:42:21 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:51.416 11:42:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:51.416 11:42:21 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:51.416 11:42:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:51.416 11:42:21 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:51.416 11:42:21 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:51.416 11:42:21 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:18:51.416 11:42:21 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:18:51.416 11:42:21 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:18:51.416 11:42:21 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:18:51.416 11:42:21 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:18:51.416 11:42:21 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:51.416 11:42:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:51.416 11:42:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:18:51.416 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:18:51.416 11:42:21 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:51.416 11:42:21 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:51.416 11:42:21 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:51.416 11:42:21 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:51.416 11:42:21 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:51.416 11:42:21 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:51.416 11:42:21 -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:18:51.416 11:42:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:18:51.416 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:18:51.416 11:42:21 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:51.416 11:42:21 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:51.416 11:42:21 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:51.416 11:42:21 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:51.416 11:42:21 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:51.416 11:42:21 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:51.416 11:42:21 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:51.416 11:42:21 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:18:51.416 11:42:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:51.416 11:42:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:51.416 11:42:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:51.416 11:42:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:51.416 11:42:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:18:51.416 Found net devices under 0000:18:00.0: mlx_0_0 00:18:51.416 11:42:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:51.416 11:42:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:51.416 11:42:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:51.416 11:42:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:51.416 11:42:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:51.416 11:42:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:18:51.416 Found net devices under 0000:18:00.1: mlx_0_1 00:18:51.416 11:42:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:51.416 11:42:21 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:51.416 11:42:21 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:51.416 11:42:21 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:51.416 11:42:21 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:18:51.416 11:42:21 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:18:51.416 11:42:21 -- nvmf/common.sh@409 -- # rdma_device_init 00:18:51.416 11:42:21 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:18:51.416 11:42:21 -- nvmf/common.sh@58 -- # uname 00:18:51.416 11:42:21 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:18:51.416 11:42:21 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:18:51.416 11:42:21 -- nvmf/common.sh@63 -- # modprobe ib_core 00:18:51.416 11:42:21 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:18:51.416 11:42:21 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:18:51.416 11:42:21 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:18:51.416 11:42:21 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:18:51.416 11:42:21 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:18:51.416 11:42:21 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:18:51.416 11:42:21 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:51.416 11:42:21 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:18:51.416 11:42:21 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:51.416 11:42:21 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:51.416 11:42:21 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:51.416 11:42:21 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:51.416 11:42:21 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 
00:18:51.416 11:42:21 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:51.416 11:42:21 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:51.416 11:42:21 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:51.416 11:42:21 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:51.416 11:42:21 -- nvmf/common.sh@105 -- # continue 2 00:18:51.416 11:42:21 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:51.416 11:42:21 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:51.416 11:42:21 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:51.416 11:42:21 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:51.416 11:42:21 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:51.416 11:42:21 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:51.416 11:42:21 -- nvmf/common.sh@105 -- # continue 2 00:18:51.416 11:42:21 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:51.416 11:42:21 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:18:51.416 11:42:21 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:51.416 11:42:21 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:51.416 11:42:21 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:51.416 11:42:21 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:51.416 11:42:22 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:18:51.416 11:42:22 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:18:51.416 11:42:22 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:18:51.416 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:51.416 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:18:51.416 altname enp24s0f0np0 00:18:51.416 altname ens785f0np0 00:18:51.416 inet 192.168.100.8/24 scope global mlx_0_0 00:18:51.416 valid_lft forever preferred_lft forever 00:18:51.416 11:42:22 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:51.416 11:42:22 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:18:51.416 11:42:22 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:51.416 11:42:22 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:51.416 11:42:22 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:51.416 11:42:22 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:51.416 11:42:22 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:18:51.416 11:42:22 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:18:51.416 11:42:22 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:18:51.416 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:51.416 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:18:51.416 altname enp24s0f1np1 00:18:51.416 altname ens785f1np1 00:18:51.416 inet 192.168.100.9/24 scope global mlx_0_1 00:18:51.416 valid_lft forever preferred_lft forever 00:18:51.416 11:42:22 -- nvmf/common.sh@411 -- # return 0 00:18:51.416 11:42:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:51.416 11:42:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:51.416 11:42:22 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:18:51.416 11:42:22 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:18:51.416 11:42:22 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:18:51.416 11:42:22 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:51.416 11:42:22 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:51.416 11:42:22 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:51.416 11:42:22 -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:51.416 11:42:22 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:51.416 11:42:22 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:51.416 11:42:22 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:51.416 11:42:22 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:51.416 11:42:22 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:51.416 11:42:22 -- nvmf/common.sh@105 -- # continue 2 00:18:51.416 11:42:22 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:51.416 11:42:22 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:51.416 11:42:22 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:51.416 11:42:22 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:51.416 11:42:22 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:51.416 11:42:22 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:51.416 11:42:22 -- nvmf/common.sh@105 -- # continue 2 00:18:51.416 11:42:22 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:51.416 11:42:22 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:18:51.416 11:42:22 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:51.416 11:42:22 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:51.416 11:42:22 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:51.416 11:42:22 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:51.416 11:42:22 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:51.416 11:42:22 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:18:51.416 11:42:22 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:51.416 11:42:22 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:51.416 11:42:22 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:51.416 11:42:22 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:51.416 11:42:22 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:18:51.416 192.168.100.9' 00:18:51.416 11:42:22 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:51.416 192.168.100.9' 00:18:51.416 11:42:22 -- nvmf/common.sh@446 -- # head -n 1 00:18:51.416 11:42:22 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:51.416 11:42:22 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:18:51.416 192.168.100.9' 00:18:51.416 11:42:22 -- nvmf/common.sh@447 -- # tail -n +2 00:18:51.416 11:42:22 -- nvmf/common.sh@447 -- # head -n 1 00:18:51.416 11:42:22 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:51.416 11:42:22 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:18:51.416 11:42:22 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:51.416 11:42:22 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:18:51.416 11:42:22 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:18:51.416 11:42:22 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:18:51.416 11:42:22 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:18:51.416 11:42:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:51.416 11:42:22 -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:51.416 11:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:51.416 11:42:22 -- nvmf/common.sh@470 -- # nvmfpid=3063909 00:18:51.416 11:42:22 -- nvmf/common.sh@471 -- # waitforlisten 3063909 00:18:51.416 11:42:22 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:51.416 11:42:22 -- common/autotest_common.sh@827 -- # 
'[' -z 3063909 ']' 00:18:51.416 11:42:22 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.416 11:42:22 -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:51.416 11:42:22 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.416 11:42:22 -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:51.416 11:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:51.675 [2024-05-15 11:42:22.179463] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:18:51.675 [2024-05-15 11:42:22.179522] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:51.675 EAL: No free 2048 kB hugepages reported on node 1 00:18:51.675 [2024-05-15 11:42:22.251334] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:51.675 [2024-05-15 11:42:22.343839] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:51.675 [2024-05-15 11:42:22.343876] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:51.675 [2024-05-15 11:42:22.343886] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:51.675 [2024-05-15 11:42:22.343909] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:51.675 [2024-05-15 11:42:22.343917] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:51.675 [2024-05-15 11:42:22.343959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:51.675 [2024-05-15 11:42:22.344043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:51.675 [2024-05-15 11:42:22.344124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:51.675 [2024-05-15 11:42:22.344126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.613 11:42:23 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:52.613 11:42:23 -- common/autotest_common.sh@860 -- # return 0 00:18:52.613 11:42:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:52.613 11:42:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:52.613 11:42:23 -- common/autotest_common.sh@10 -- # set +x 00:18:52.613 11:42:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:52.613 11:42:23 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:52.613 11:42:23 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.613 11:42:23 -- common/autotest_common.sh@10 -- # set +x 00:18:52.613 [2024-05-15 11:42:23.091258] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1447f00/0x144c3f0) succeed. 00:18:52.613 [2024-05-15 11:42:23.101995] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1449540/0x148da80) succeed. 
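[editor's note] The waitforlisten gate traced above blocks until the freshly launched nvmf_tgt answers on its UNIX-domain RPC socket. A simplified stand-in for it, under the assumption that polling rpc_get_methods is an adequate liveness probe and that the script runs from the SPDK repo root (the real helper in autotest_common.sh also has its own retry policy):

    # Sketch only: poll the RPC socket until the target answers or dies.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1          # target process died
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }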
00:18:52.613 11:42:23 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.613 11:42:23 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:18:52.613 11:42:23 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.613 11:42:23 -- common/autotest_common.sh@10 -- # set +x 00:18:52.613 Malloc0 00:18:52.613 11:42:23 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.613 11:42:23 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:18:52.613 11:42:23 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.613 11:42:23 -- common/autotest_common.sh@10 -- # set +x 00:18:52.613 11:42:23 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.613 11:42:23 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:52.613 11:42:23 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.613 11:42:23 -- common/autotest_common.sh@10 -- # set +x 00:18:52.613 11:42:23 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.613 11:42:23 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:52.613 11:42:23 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.613 11:42:23 -- common/autotest_common.sh@10 -- # set +x 00:18:52.613 [2024-05-15 11:42:23.280434] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:52.613 [2024-05-15 11:42:23.280824] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:52.613 11:42:23 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.613 11:42:23 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:18:52.613 11:42:23 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.613 11:42:23 -- common/autotest_common.sh@10 -- # set +x 00:18:52.613 [ 00:18:52.613 { 00:18:52.613 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:52.613 "subtype": "Discovery", 00:18:52.613 "listen_addresses": [], 00:18:52.613 "allow_any_host": true, 00:18:52.613 "hosts": [] 00:18:52.613 }, 00:18:52.613 { 00:18:52.613 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.613 "subtype": "NVMe", 00:18:52.613 "listen_addresses": [ 00:18:52.613 { 00:18:52.613 "trtype": "RDMA", 00:18:52.613 "adrfam": "IPv4", 00:18:52.613 "traddr": "192.168.100.8", 00:18:52.613 "trsvcid": "4420" 00:18:52.613 } 00:18:52.613 ], 00:18:52.613 "allow_any_host": true, 00:18:52.613 "hosts": [], 00:18:52.613 "serial_number": "SPDK00000000000001", 00:18:52.613 "model_number": "SPDK bdev Controller", 00:18:52.613 "max_namespaces": 2, 00:18:52.613 "min_cntlid": 1, 00:18:52.613 "max_cntlid": 65519, 00:18:52.613 "namespaces": [ 00:18:52.613 { 00:18:52.613 "nsid": 1, 00:18:52.613 "bdev_name": "Malloc0", 00:18:52.613 "name": "Malloc0", 00:18:52.613 "nguid": "E26B1F580D464002B762BAA204C6942C", 00:18:52.613 "uuid": "e26b1f58-0d46-4002-b762-baa204c6942c" 00:18:52.613 } 00:18:52.613 ] 00:18:52.613 } 00:18:52.613 ] 00:18:52.613 11:42:23 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.613 11:42:23 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:52.613 11:42:23 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:18:52.613 11:42:23 -- host/aer.sh@33 -- # aerpid=3064114 00:18:52.613 11:42:23 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:18:52.613 11:42:23 -- 
common/autotest_common.sh@1261 -- # local i=0 00:18:52.613 11:42:23 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:18:52.613 11:42:23 -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:52.613 11:42:23 -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:18:52.613 11:42:23 -- common/autotest_common.sh@1264 -- # i=1 00:18:52.613 11:42:23 -- common/autotest_common.sh@1265 -- # sleep 0.1 00:18:52.613 EAL: No free 2048 kB hugepages reported on node 1 00:18:52.872 11:42:23 -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:52.872 11:42:23 -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:18:52.872 11:42:23 -- common/autotest_common.sh@1264 -- # i=2 00:18:52.872 11:42:23 -- common/autotest_common.sh@1265 -- # sleep 0.1 00:18:52.872 11:42:23 -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:52.872 11:42:23 -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:52.872 11:42:23 -- common/autotest_common.sh@1272 -- # return 0 00:18:52.872 11:42:23 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:18:52.872 11:42:23 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.872 11:42:23 -- common/autotest_common.sh@10 -- # set +x 00:18:52.872 Malloc1 00:18:52.872 11:42:23 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.872 11:42:23 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:18:52.872 11:42:23 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.872 11:42:23 -- common/autotest_common.sh@10 -- # set +x 00:18:52.872 11:42:23 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.872 11:42:23 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:18:52.872 11:42:23 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.872 11:42:23 -- common/autotest_common.sh@10 -- # set +x 00:18:52.872 [ 00:18:52.872 { 00:18:52.872 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:52.872 "subtype": "Discovery", 00:18:52.872 "listen_addresses": [], 00:18:52.872 "allow_any_host": true, 00:18:52.872 "hosts": [] 00:18:52.872 }, 00:18:52.872 { 00:18:52.872 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.872 "subtype": "NVMe", 00:18:52.872 "listen_addresses": [ 00:18:52.872 { 00:18:52.872 "trtype": "RDMA", 00:18:52.872 "adrfam": "IPv4", 00:18:52.872 "traddr": "192.168.100.8", 00:18:52.872 "trsvcid": "4420" 00:18:52.872 } 00:18:52.872 ], 00:18:52.872 "allow_any_host": true, 00:18:52.872 "hosts": [], 00:18:52.872 "serial_number": "SPDK00000000000001", 00:18:52.872 "model_number": "SPDK bdev Controller", 00:18:52.872 "max_namespaces": 2, 00:18:52.872 "min_cntlid": 1, 00:18:52.872 "max_cntlid": 65519, 00:18:52.872 "namespaces": [ 00:18:52.872 { 00:18:52.872 "nsid": 1, 00:18:52.872 "bdev_name": "Malloc0", 00:18:52.872 "name": "Malloc0", 00:18:52.872 "nguid": "E26B1F580D464002B762BAA204C6942C", 00:18:52.872 "uuid": "e26b1f58-0d46-4002-b762-baa204c6942c" 00:18:52.872 }, 00:18:52.872 { 00:18:52.872 "nsid": 2, 00:18:52.872 "bdev_name": "Malloc1", 00:18:52.872 "name": "Malloc1", 00:18:52.872 "nguid": "D05732E03A66493D85091CFCAA3ABA00", 00:18:52.872 "uuid": "d05732e0-3a66-493d-8509-1cfcaa3aba00" 00:18:52.872 } 00:18:52.872 ] 00:18:52.872 } 00:18:52.872 ] 00:18:52.872 11:42:23 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
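[editor's note] The polling traced at autotest_common.sh@1261-1272 above is a bounded wait: the aer binary creates /tmp/aer_touch_file once its event callbacks are registered, and the script checks for it every 100 ms, up to 200 times (about 20 s), before driving the namespace-change event. The same loop in isolation:

    waitforfile_sketch() {
        local file=$1 i=0
        while [ ! -e "$file" ] && [ "$i" -lt 200 ]; do   # ~20 s ceiling
            i=$((i + 1))
            sleep 0.1
        done
        [ -e "$file" ]    # exit status tells the caller whether it appeared
    }
    waitforfile_sketch /tmp/aer_touch_file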
00:18:52.872 11:42:23 -- host/aer.sh@43 -- # wait 3064114 00:18:52.872 Asynchronous Event Request test 00:18:52.872 Attaching to 192.168.100.8 00:18:52.872 Attached to 192.168.100.8 00:18:52.872 Registering asynchronous event callbacks... 00:18:52.872 Starting namespace attribute notice tests for all controllers... 00:18:52.872 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:52.872 aer_cb - Changed Namespace 00:18:52.872 Cleaning up... 00:18:52.872 11:42:23 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:52.872 11:42:23 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.872 11:42:23 -- common/autotest_common.sh@10 -- # set +x 00:18:53.131 11:42:23 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.131 11:42:23 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:18:53.131 11:42:23 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.131 11:42:23 -- common/autotest_common.sh@10 -- # set +x 00:18:53.131 11:42:23 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.131 11:42:23 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:53.131 11:42:23 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.131 11:42:23 -- common/autotest_common.sh@10 -- # set +x 00:18:53.131 11:42:23 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.131 11:42:23 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:18:53.131 11:42:23 -- host/aer.sh@51 -- # nvmftestfini 00:18:53.131 11:42:23 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:53.131 11:42:23 -- nvmf/common.sh@117 -- # sync 00:18:53.131 11:42:23 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:18:53.131 11:42:23 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:18:53.131 11:42:23 -- nvmf/common.sh@120 -- # set +e 00:18:53.131 11:42:23 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:53.131 11:42:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:18:53.131 rmmod nvme_rdma 00:18:53.131 rmmod nvme_fabrics 00:18:53.131 11:42:23 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:53.131 11:42:23 -- nvmf/common.sh@124 -- # set -e 00:18:53.131 11:42:23 -- nvmf/common.sh@125 -- # return 0 00:18:53.131 11:42:23 -- nvmf/common.sh@478 -- # '[' -n 3063909 ']' 00:18:53.131 11:42:23 -- nvmf/common.sh@479 -- # killprocess 3063909 00:18:53.131 11:42:23 -- common/autotest_common.sh@946 -- # '[' -z 3063909 ']' 00:18:53.131 11:42:23 -- common/autotest_common.sh@950 -- # kill -0 3063909 00:18:53.131 11:42:23 -- common/autotest_common.sh@951 -- # uname 00:18:53.131 11:42:23 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:53.131 11:42:23 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3063909 00:18:53.131 11:42:23 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:53.131 11:42:23 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:53.131 11:42:23 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3063909' 00:18:53.131 killing process with pid 3063909 00:18:53.131 11:42:23 -- common/autotest_common.sh@965 -- # kill 3063909 00:18:53.131 [2024-05-15 11:42:23.796657] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:53.131 11:42:23 -- common/autotest_common.sh@970 -- # wait 3063909 00:18:53.131 [2024-05-15 11:42:23.882614] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 
but should be 2048 00:18:53.391 11:42:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:53.391 11:42:24 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:18:53.391 00:18:53.391 real 0m7.559s 00:18:53.391 user 0m8.278s 00:18:53.391 sys 0m4.824s 00:18:53.391 11:42:24 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:53.391 11:42:24 -- common/autotest_common.sh@10 -- # set +x 00:18:53.391 ************************************ 00:18:53.391 END TEST nvmf_aer 00:18:53.391 ************************************ 00:18:53.391 11:42:24 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:18:53.391 11:42:24 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:53.391 11:42:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:53.391 11:42:24 -- common/autotest_common.sh@10 -- # set +x 00:18:53.651 ************************************ 00:18:53.651 START TEST nvmf_async_init 00:18:53.651 ************************************ 00:18:53.651 11:42:24 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:18:53.651 * Looking for test storage... 00:18:53.651 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:53.651 11:42:24 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:53.651 11:42:24 -- nvmf/common.sh@7 -- # uname -s 00:18:53.651 11:42:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:53.651 11:42:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:53.651 11:42:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:53.651 11:42:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:53.651 11:42:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:53.651 11:42:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:53.651 11:42:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:53.651 11:42:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:53.651 11:42:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:53.651 11:42:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:53.651 11:42:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:18:53.651 11:42:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:18:53.651 11:42:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:53.651 11:42:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:53.651 11:42:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:53.651 11:42:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:53.651 11:42:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:53.651 11:42:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:53.651 11:42:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:53.651 11:42:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:53.651 11:42:24 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.651 11:42:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.651 11:42:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.651 11:42:24 -- paths/export.sh@5 -- # export PATH 00:18:53.651 11:42:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.651 11:42:24 -- nvmf/common.sh@47 -- # : 0 00:18:53.651 11:42:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:53.651 11:42:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:53.651 11:42:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:53.651 11:42:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:53.651 11:42:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:53.651 11:42:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:53.651 11:42:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:53.651 11:42:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:53.651 11:42:24 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:18:53.651 11:42:24 -- host/async_init.sh@14 -- # null_block_size=512 00:18:53.651 11:42:24 -- host/async_init.sh@15 -- # null_bdev=null0 00:18:53.651 11:42:24 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:18:53.651 11:42:24 -- host/async_init.sh@20 -- # uuidgen 00:18:53.651 11:42:24 -- host/async_init.sh@20 -- # tr -d - 00:18:53.651 11:42:24 -- host/async_init.sh@20 -- # nguid=1a11a682f42a4b77b8c8074d3137ecb8 00:18:53.651 11:42:24 -- host/async_init.sh@22 -- # nvmftestinit 00:18:53.651 11:42:24 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 
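[editor's note] The nvmftestinit below re-runs the NIC discovery already traced in the aer test: enumerate the mlx5 PCI addresses, resolve each to its netdev through sysfs, load the IB/RDMA kernel module stack, then make sure the 192.168.100.x addresses are in place. A minimal standalone sketch of that core (PCI address hard-coded for illustration; the module list is copied from the modprobe calls in the trace):

    pci=0000:18:00.0                                    # one of the two ConnectX ports above
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # sysfs lists the netdevs for this function
    pci_net_devs=("${pci_net_devs[@]##*/}")             # strip the path, leaving e.g. mlx_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        sudo modprobe "$mod"
    done
    # allocate_nic_ips would add 192.168.100.8/24 here if the interface had
    # no address yet; in this run "ip -o -4 addr show" already finds one.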
00:18:53.651 11:42:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:53.651 11:42:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:53.651 11:42:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:53.651 11:42:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:53.651 11:42:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.651 11:42:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:53.651 11:42:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.651 11:42:24 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:53.651 11:42:24 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:53.651 11:42:24 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:53.651 11:42:24 -- common/autotest_common.sh@10 -- # set +x 00:19:00.219 11:42:29 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:00.219 11:42:29 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:00.219 11:42:29 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:00.219 11:42:29 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:00.219 11:42:29 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:00.219 11:42:29 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:00.219 11:42:29 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:00.219 11:42:29 -- nvmf/common.sh@295 -- # net_devs=() 00:19:00.219 11:42:29 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:00.219 11:42:29 -- nvmf/common.sh@296 -- # e810=() 00:19:00.219 11:42:29 -- nvmf/common.sh@296 -- # local -ga e810 00:19:00.219 11:42:29 -- nvmf/common.sh@297 -- # x722=() 00:19:00.219 11:42:29 -- nvmf/common.sh@297 -- # local -ga x722 00:19:00.219 11:42:29 -- nvmf/common.sh@298 -- # mlx=() 00:19:00.219 11:42:29 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:00.219 11:42:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:00.219 11:42:29 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:00.219 11:42:29 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:00.219 11:42:29 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:00.219 11:42:29 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:00.219 11:42:29 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:00.219 11:42:29 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:00.219 11:42:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:00.219 11:42:29 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:00.219 11:42:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:00.219 11:42:29 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:00.219 11:42:29 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:00.219 11:42:29 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:00.219 11:42:29 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:00.219 11:42:29 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:00.219 11:42:29 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:00.219 11:42:29 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:00.219 11:42:29 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:00.219 11:42:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:00.219 11:42:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:19:00.219 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:19:00.219 11:42:29 -- 
nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:00.219 11:42:29 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:00.219 11:42:29 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:00.219 11:42:29 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:00.219 11:42:29 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:00.219 11:42:29 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:00.219 11:42:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:00.219 11:42:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:19:00.219 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:19:00.219 11:42:29 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:00.219 11:42:29 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:00.219 11:42:29 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:00.219 11:42:29 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:00.219 11:42:29 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:00.219 11:42:29 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:00.219 11:42:29 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:00.219 11:42:29 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:00.219 11:42:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:00.219 11:42:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:00.219 11:42:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:00.219 11:42:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:00.219 11:42:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:19:00.219 Found net devices under 0000:18:00.0: mlx_0_0 00:19:00.219 11:42:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:00.219 11:42:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:00.219 11:42:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:00.219 11:42:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:00.219 11:42:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:00.219 11:42:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:19:00.219 Found net devices under 0000:18:00.1: mlx_0_1 00:19:00.219 11:42:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:00.219 11:42:29 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:00.219 11:42:29 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:00.219 11:42:29 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:00.219 11:42:29 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:19:00.219 11:42:29 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:19:00.219 11:42:29 -- nvmf/common.sh@409 -- # rdma_device_init 00:19:00.219 11:42:29 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:19:00.219 11:42:29 -- nvmf/common.sh@58 -- # uname 00:19:00.219 11:42:29 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:00.219 11:42:29 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:00.219 11:42:29 -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:00.219 11:42:29 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:00.219 11:42:29 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:00.219 11:42:29 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:00.219 11:42:29 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:00.219 11:42:29 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:00.219 11:42:29 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:19:00.219 11:42:29 -- nvmf/common.sh@72 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:19:00.219 11:42:29 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:00.219 11:42:29 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:00.219 11:42:29 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:00.219 11:42:29 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:00.219 11:42:29 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:00.219 11:42:29 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:00.219 11:42:29 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:00.219 11:42:29 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:00.219 11:42:29 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:00.219 11:42:29 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:00.219 11:42:29 -- nvmf/common.sh@105 -- # continue 2 00:19:00.219 11:42:29 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:00.219 11:42:29 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:00.219 11:42:29 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:00.219 11:42:29 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:00.219 11:42:29 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:00.219 11:42:29 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:00.219 11:42:29 -- nvmf/common.sh@105 -- # continue 2 00:19:00.219 11:42:29 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:00.219 11:42:29 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:00.219 11:42:29 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:00.219 11:42:29 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:00.219 11:42:29 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:00.219 11:42:29 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:00.219 11:42:29 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:00.219 11:42:29 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:00.219 11:42:29 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:00.219 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:00.219 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:19:00.219 altname enp24s0f0np0 00:19:00.219 altname ens785f0np0 00:19:00.219 inet 192.168.100.8/24 scope global mlx_0_0 00:19:00.219 valid_lft forever preferred_lft forever 00:19:00.219 11:42:29 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:00.219 11:42:29 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:00.219 11:42:29 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:00.219 11:42:29 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:00.219 11:42:29 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:00.219 11:42:29 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:00.219 11:42:29 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:00.220 11:42:29 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:00.220 11:42:29 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:00.220 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:00.220 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:19:00.220 altname enp24s0f1np1 00:19:00.220 altname ens785f1np1 00:19:00.220 inet 192.168.100.9/24 scope global mlx_0_1 00:19:00.220 valid_lft forever preferred_lft forever 00:19:00.220 11:42:29 -- nvmf/common.sh@411 -- # return 0 00:19:00.220 11:42:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:00.220 11:42:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:00.220 11:42:29 -- 
nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:19:00.220 11:42:29 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:19:00.220 11:42:29 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:00.220 11:42:29 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:00.220 11:42:29 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:00.220 11:42:29 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:00.220 11:42:29 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:00.220 11:42:29 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:00.220 11:42:29 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:00.220 11:42:29 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:00.220 11:42:29 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:00.220 11:42:29 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:00.220 11:42:29 -- nvmf/common.sh@105 -- # continue 2 00:19:00.220 11:42:29 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:00.220 11:42:29 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:00.220 11:42:29 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:00.220 11:42:29 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:00.220 11:42:29 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:00.220 11:42:29 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:00.220 11:42:29 -- nvmf/common.sh@105 -- # continue 2 00:19:00.220 11:42:29 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:00.220 11:42:29 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:00.220 11:42:29 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:00.220 11:42:29 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:00.220 11:42:29 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:00.220 11:42:29 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:00.220 11:42:29 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:00.220 11:42:29 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:00.220 11:42:29 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:00.220 11:42:29 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:00.220 11:42:29 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:00.220 11:42:29 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:00.220 11:42:29 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:19:00.220 192.168.100.9' 00:19:00.220 11:42:29 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:00.220 192.168.100.9' 00:19:00.220 11:42:29 -- nvmf/common.sh@446 -- # head -n 1 00:19:00.220 11:42:29 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:00.220 11:42:29 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:19:00.220 192.168.100.9' 00:19:00.220 11:42:29 -- nvmf/common.sh@447 -- # tail -n +2 00:19:00.220 11:42:29 -- nvmf/common.sh@447 -- # head -n 1 00:19:00.220 11:42:29 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:00.220 11:42:29 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:19:00.220 11:42:29 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:00.220 11:42:29 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:19:00.220 11:42:29 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:19:00.220 11:42:29 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:19:00.220 11:42:30 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:19:00.220 11:42:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:00.220 
11:42:30 -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:00.220 11:42:30 -- common/autotest_common.sh@10 -- # set +x 00:19:00.220 11:42:30 -- nvmf/common.sh@470 -- # nvmfpid=3066936 00:19:00.220 11:42:30 -- nvmf/common.sh@471 -- # waitforlisten 3066936 00:19:00.220 11:42:30 -- common/autotest_common.sh@827 -- # '[' -z 3066936 ']' 00:19:00.220 11:42:30 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.220 11:42:30 -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:00.220 11:42:30 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.220 11:42:30 -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:00.220 11:42:30 -- common/autotest_common.sh@10 -- # set +x 00:19:00.220 11:42:30 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:00.220 [2024-05-15 11:42:30.060573] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:19:00.220 [2024-05-15 11:42:30.060635] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:00.220 EAL: No free 2048 kB hugepages reported on node 1 00:19:00.220 [2024-05-15 11:42:30.132216] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.220 [2024-05-15 11:42:30.215879] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:00.220 [2024-05-15 11:42:30.215920] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:00.220 [2024-05-15 11:42:30.215930] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:00.220 [2024-05-15 11:42:30.215939] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:00.220 [2024-05-15 11:42:30.215946] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:00.220 [2024-05-15 11:42:30.215977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.220 11:42:30 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:00.220 11:42:30 -- common/autotest_common.sh@860 -- # return 0 00:19:00.220 11:42:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:00.220 11:42:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:00.220 11:42:30 -- common/autotest_common.sh@10 -- # set +x 00:19:00.220 11:42:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:00.220 11:42:30 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:19:00.220 11:42:30 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.220 11:42:30 -- common/autotest_common.sh@10 -- # set +x 00:19:00.220 [2024-05-15 11:42:30.922053] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x227cdb0/0x22812a0) succeed. 00:19:00.220 [2024-05-15 11:42:30.931339] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x227e2b0/0x22c2930) succeed. 
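[editor's note] The target bring-up that follows is a short RPC sequence. Written out as explicit scripts/rpc.py invocations (rpc_cmd in the trace is a thin wrapper around rpc.py; every flag below is copied from the surrounding trace, and the nguid derivation mirrors the uuidgen | tr -d - seen at host/async_init.sh@20 above):

    rpc=scripts/rpc.py                               # assumes the SPDK repo root
    nguid=$(uuidgen | tr -d -)                       # 32 hex digits, dashes stripped
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024
    $rpc bdev_null_create null0 1024 512             # size and block size taken from the trace
    $rpc bdev_wait_for_examine
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$nguid"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    # Host side: the remote namespace then shows up locally as bdev nvme0n1.
    $rpc bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0

Note how bdev_get_bdevs afterwards reports the same nguid back in dashed form as the bdev's uuid and alias.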
00:19:00.220 11:42:30 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.220 11:42:30 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:19:00.220 11:42:30 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.220 11:42:30 -- common/autotest_common.sh@10 -- # set +x 00:19:00.480 null0 00:19:00.480 11:42:30 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.480 11:42:30 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:19:00.480 11:42:30 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.480 11:42:30 -- common/autotest_common.sh@10 -- # set +x 00:19:00.480 11:42:30 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.480 11:42:30 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:19:00.480 11:42:30 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.480 11:42:30 -- common/autotest_common.sh@10 -- # set +x 00:19:00.480 11:42:30 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.480 11:42:30 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 1a11a682f42a4b77b8c8074d3137ecb8 00:19:00.480 11:42:30 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.480 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:19:00.480 11:42:31 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.480 11:42:31 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:19:00.480 11:42:31 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.480 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:19:00.480 [2024-05-15 11:42:31.012588] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:00.480 [2024-05-15 11:42:31.012944] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:00.480 11:42:31 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.480 11:42:31 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:19:00.480 11:42:31 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.480 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:19:00.480 nvme0n1 00:19:00.480 11:42:31 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.480 11:42:31 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:00.480 11:42:31 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.480 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:19:00.480 [ 00:19:00.480 { 00:19:00.480 "name": "nvme0n1", 00:19:00.480 "aliases": [ 00:19:00.480 "1a11a682-f42a-4b77-b8c8-074d3137ecb8" 00:19:00.480 ], 00:19:00.480 "product_name": "NVMe disk", 00:19:00.480 "block_size": 512, 00:19:00.480 "num_blocks": 2097152, 00:19:00.480 "uuid": "1a11a682-f42a-4b77-b8c8-074d3137ecb8", 00:19:00.480 "assigned_rate_limits": { 00:19:00.480 "rw_ios_per_sec": 0, 00:19:00.480 "rw_mbytes_per_sec": 0, 00:19:00.480 "r_mbytes_per_sec": 0, 00:19:00.480 "w_mbytes_per_sec": 0 00:19:00.480 }, 00:19:00.480 "claimed": false, 00:19:00.480 "zoned": false, 00:19:00.480 "supported_io_types": { 00:19:00.480 "read": true, 00:19:00.480 "write": true, 00:19:00.480 "unmap": false, 00:19:00.480 "write_zeroes": true, 00:19:00.480 "flush": true, 00:19:00.480 "reset": true, 
00:19:00.480 "compare": true, 00:19:00.480 "compare_and_write": true, 00:19:00.480 "abort": true, 00:19:00.480 "nvme_admin": true, 00:19:00.480 "nvme_io": true 00:19:00.480 }, 00:19:00.480 "memory_domains": [ 00:19:00.480 { 00:19:00.480 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:19:00.480 "dma_device_type": 0 00:19:00.480 } 00:19:00.480 ], 00:19:00.480 "driver_specific": { 00:19:00.480 "nvme": [ 00:19:00.480 { 00:19:00.480 "trid": { 00:19:00.480 "trtype": "RDMA", 00:19:00.480 "adrfam": "IPv4", 00:19:00.480 "traddr": "192.168.100.8", 00:19:00.480 "trsvcid": "4420", 00:19:00.480 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:00.480 }, 00:19:00.480 "ctrlr_data": { 00:19:00.480 "cntlid": 1, 00:19:00.480 "vendor_id": "0x8086", 00:19:00.480 "model_number": "SPDK bdev Controller", 00:19:00.480 "serial_number": "00000000000000000000", 00:19:00.480 "firmware_revision": "24.05", 00:19:00.480 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:00.480 "oacs": { 00:19:00.480 "security": 0, 00:19:00.480 "format": 0, 00:19:00.480 "firmware": 0, 00:19:00.480 "ns_manage": 0 00:19:00.480 }, 00:19:00.480 "multi_ctrlr": true, 00:19:00.480 "ana_reporting": false 00:19:00.480 }, 00:19:00.480 "vs": { 00:19:00.480 "nvme_version": "1.3" 00:19:00.480 }, 00:19:00.480 "ns_data": { 00:19:00.480 "id": 1, 00:19:00.480 "can_share": true 00:19:00.480 } 00:19:00.480 } 00:19:00.480 ], 00:19:00.480 "mp_policy": "active_passive" 00:19:00.480 } 00:19:00.480 } 00:19:00.480 ] 00:19:00.480 11:42:31 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.480 11:42:31 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:19:00.480 11:42:31 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.480 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:19:00.480 [2024-05-15 11:42:31.127953] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:00.480 [2024-05-15 11:42:31.146993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:00.480 [2024-05-15 11:42:31.168714] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
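[editor's note] One way to confirm that the reset above really tore down and re-established the admin connection is to compare the controller ID on either side of it: the bdev_get_bdevs dumps surrounding this point go from cntlid 1 to cntlid 2. A sketch of that check (jq is an assumption here, not something the test itself uses; the JSON path matches the dumps in this log):

    scripts/rpc.py bdev_nvme_reset_controller nvme0
    scripts/rpc.py bdev_get_bdevs -b nvme0n1 |
        jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'   # prints 1 before the reset, 2 after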
00:19:00.480 11:42:31 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.480 11:42:31 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:00.480 11:42:31 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.480 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:19:00.480 [ 00:19:00.480 { 00:19:00.480 "name": "nvme0n1", 00:19:00.480 "aliases": [ 00:19:00.480 "1a11a682-f42a-4b77-b8c8-074d3137ecb8" 00:19:00.480 ], 00:19:00.480 "product_name": "NVMe disk", 00:19:00.480 "block_size": 512, 00:19:00.480 "num_blocks": 2097152, 00:19:00.480 "uuid": "1a11a682-f42a-4b77-b8c8-074d3137ecb8", 00:19:00.480 "assigned_rate_limits": { 00:19:00.480 "rw_ios_per_sec": 0, 00:19:00.480 "rw_mbytes_per_sec": 0, 00:19:00.480 "r_mbytes_per_sec": 0, 00:19:00.480 "w_mbytes_per_sec": 0 00:19:00.480 }, 00:19:00.480 "claimed": false, 00:19:00.480 "zoned": false, 00:19:00.480 "supported_io_types": { 00:19:00.480 "read": true, 00:19:00.480 "write": true, 00:19:00.480 "unmap": false, 00:19:00.480 "write_zeroes": true, 00:19:00.480 "flush": true, 00:19:00.480 "reset": true, 00:19:00.480 "compare": true, 00:19:00.480 "compare_and_write": true, 00:19:00.480 "abort": true, 00:19:00.480 "nvme_admin": true, 00:19:00.480 "nvme_io": true 00:19:00.480 }, 00:19:00.480 "memory_domains": [ 00:19:00.480 { 00:19:00.480 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:19:00.480 "dma_device_type": 0 00:19:00.480 } 00:19:00.480 ], 00:19:00.480 "driver_specific": { 00:19:00.480 "nvme": [ 00:19:00.480 { 00:19:00.480 "trid": { 00:19:00.480 "trtype": "RDMA", 00:19:00.480 "adrfam": "IPv4", 00:19:00.480 "traddr": "192.168.100.8", 00:19:00.480 "trsvcid": "4420", 00:19:00.480 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:00.480 }, 00:19:00.480 "ctrlr_data": { 00:19:00.480 "cntlid": 2, 00:19:00.480 "vendor_id": "0x8086", 00:19:00.481 "model_number": "SPDK bdev Controller", 00:19:00.481 "serial_number": "00000000000000000000", 00:19:00.481 "firmware_revision": "24.05", 00:19:00.481 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:00.481 "oacs": { 00:19:00.481 "security": 0, 00:19:00.481 "format": 0, 00:19:00.481 "firmware": 0, 00:19:00.481 "ns_manage": 0 00:19:00.481 }, 00:19:00.481 "multi_ctrlr": true, 00:19:00.481 "ana_reporting": false 00:19:00.481 }, 00:19:00.481 "vs": { 00:19:00.481 "nvme_version": "1.3" 00:19:00.481 }, 00:19:00.481 "ns_data": { 00:19:00.481 "id": 1, 00:19:00.481 "can_share": true 00:19:00.481 } 00:19:00.481 } 00:19:00.481 ], 00:19:00.481 "mp_policy": "active_passive" 00:19:00.481 } 00:19:00.481 } 00:19:00.481 ] 00:19:00.481 11:42:31 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.481 11:42:31 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:00.481 11:42:31 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.481 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:19:00.481 11:42:31 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.481 11:42:31 -- host/async_init.sh@53 -- # mktemp 00:19:00.481 11:42:31 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.UtunJ9WHGD 00:19:00.481 11:42:31 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:00.481 11:42:31 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.UtunJ9WHGD 00:19:00.481 11:42:31 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:19:00.481 11:42:31 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.481 11:42:31 -- common/autotest_common.sh@10 -- # set +x 
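[editor's note] The TLS pieces around this point, collected in one place: the PSK interchange string is written to a mode-0600 temp file, any-host access is disabled, and a second listener on port 4421 is added with --secure-channel so that only the named host presenting that PSK can connect (the trace itself warns that TLS support is considered experimental in this SPDK revision). All flags below are taken verbatim from the trace:

    key_path=$(mktemp)
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
    chmod 0600 "$key_path"                            # rpc refuses world-readable PSK files
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"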
00:19:00.481 11:42:31 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.481 11:42:31 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:19:00.481 11:42:31 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.481 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:19:00.481 [2024-05-15 11:42:31.232643] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:19:00.481 11:42:31 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.481 11:42:31 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UtunJ9WHGD 00:19:00.481 11:42:31 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.481 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:19:00.740 11:42:31 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.740 11:42:31 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UtunJ9WHGD 00:19:00.740 11:42:31 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.740 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:19:00.740 [2024-05-15 11:42:31.248666] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:00.740 nvme0n1 00:19:00.740 11:42:31 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.740 11:42:31 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:00.740 11:42:31 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.740 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:19:00.740 [ 00:19:00.740 { 00:19:00.740 "name": "nvme0n1", 00:19:00.740 "aliases": [ 00:19:00.740 "1a11a682-f42a-4b77-b8c8-074d3137ecb8" 00:19:00.740 ], 00:19:00.740 "product_name": "NVMe disk", 00:19:00.740 "block_size": 512, 00:19:00.740 "num_blocks": 2097152, 00:19:00.740 "uuid": "1a11a682-f42a-4b77-b8c8-074d3137ecb8", 00:19:00.740 "assigned_rate_limits": { 00:19:00.740 "rw_ios_per_sec": 0, 00:19:00.740 "rw_mbytes_per_sec": 0, 00:19:00.740 "r_mbytes_per_sec": 0, 00:19:00.741 "w_mbytes_per_sec": 0 00:19:00.741 }, 00:19:00.741 "claimed": false, 00:19:00.741 "zoned": false, 00:19:00.741 "supported_io_types": { 00:19:00.741 "read": true, 00:19:00.741 "write": true, 00:19:00.741 "unmap": false, 00:19:00.741 "write_zeroes": true, 00:19:00.741 "flush": true, 00:19:00.741 "reset": true, 00:19:00.741 "compare": true, 00:19:00.741 "compare_and_write": true, 00:19:00.741 "abort": true, 00:19:00.741 "nvme_admin": true, 00:19:00.741 "nvme_io": true 00:19:00.741 }, 00:19:00.741 "memory_domains": [ 00:19:00.741 { 00:19:00.741 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:19:00.741 "dma_device_type": 0 00:19:00.741 } 00:19:00.741 ], 00:19:00.741 "driver_specific": { 00:19:00.741 "nvme": [ 00:19:00.741 { 00:19:00.741 "trid": { 00:19:00.741 "trtype": "RDMA", 00:19:00.741 "adrfam": "IPv4", 00:19:00.741 "traddr": "192.168.100.8", 00:19:00.741 "trsvcid": "4421", 00:19:00.741 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:00.741 }, 00:19:00.741 "ctrlr_data": { 00:19:00.741 "cntlid": 3, 00:19:00.741 "vendor_id": "0x8086", 00:19:00.741 "model_number": "SPDK bdev Controller", 00:19:00.741 "serial_number": "00000000000000000000", 00:19:00.741 "firmware_revision": "24.05", 00:19:00.741 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:00.741 "oacs": 
{ 00:19:00.741 "security": 0, 00:19:00.741 "format": 0, 00:19:00.741 "firmware": 0, 00:19:00.741 "ns_manage": 0 00:19:00.741 }, 00:19:00.741 "multi_ctrlr": true, 00:19:00.741 "ana_reporting": false 00:19:00.741 }, 00:19:00.741 "vs": { 00:19:00.741 "nvme_version": "1.3" 00:19:00.741 }, 00:19:00.741 "ns_data": { 00:19:00.741 "id": 1, 00:19:00.741 "can_share": true 00:19:00.741 } 00:19:00.741 } 00:19:00.741 ], 00:19:00.741 "mp_policy": "active_passive" 00:19:00.741 } 00:19:00.741 } 00:19:00.741 ] 00:19:00.741 11:42:31 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.741 11:42:31 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:00.741 11:42:31 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.741 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:19:00.741 11:42:31 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.741 11:42:31 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.UtunJ9WHGD 00:19:00.741 11:42:31 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:19:00.741 11:42:31 -- host/async_init.sh@78 -- # nvmftestfini 00:19:00.741 11:42:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:00.741 11:42:31 -- nvmf/common.sh@117 -- # sync 00:19:00.741 11:42:31 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:00.741 11:42:31 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:00.741 11:42:31 -- nvmf/common.sh@120 -- # set +e 00:19:00.741 11:42:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:00.741 11:42:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:00.741 rmmod nvme_rdma 00:19:00.741 rmmod nvme_fabrics 00:19:00.741 11:42:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:00.741 11:42:31 -- nvmf/common.sh@124 -- # set -e 00:19:00.741 11:42:31 -- nvmf/common.sh@125 -- # return 0 00:19:00.741 11:42:31 -- nvmf/common.sh@478 -- # '[' -n 3066936 ']' 00:19:00.741 11:42:31 -- nvmf/common.sh@479 -- # killprocess 3066936 00:19:00.741 11:42:31 -- common/autotest_common.sh@946 -- # '[' -z 3066936 ']' 00:19:00.741 11:42:31 -- common/autotest_common.sh@950 -- # kill -0 3066936 00:19:00.741 11:42:31 -- common/autotest_common.sh@951 -- # uname 00:19:00.741 11:42:31 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:00.741 11:42:31 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3066936 00:19:00.741 11:42:31 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:00.741 11:42:31 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:00.741 11:42:31 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3066936' 00:19:00.741 killing process with pid 3066936 00:19:00.741 11:42:31 -- common/autotest_common.sh@965 -- # kill 3066936 00:19:00.741 [2024-05-15 11:42:31.451538] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:00.741 11:42:31 -- common/autotest_common.sh@970 -- # wait 3066936 00:19:00.741 [2024-05-15 11:42:31.494473] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:19:01.000 11:42:31 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:01.000 11:42:31 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:19:01.000 00:19:01.000 real 0m7.551s 00:19:01.000 user 0m3.445s 00:19:01.000 sys 0m4.680s 00:19:01.000 11:42:31 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:01.000 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:19:01.000 
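[editor's note] The teardown traced above follows a fixed pattern: detach the host-side controller, remove the PSK file, kill the target by pid, then unload the host modules. A reduced sketch of the kill step (the real killprocess in autotest_common.sh inspects ps --no-headers -o comm= to special-case sudo-wrapped targets, and the module unloads are retried up to 20 times; $nvmfpid is the pid saved when nvmfappstart launched the target):

    scripts/rpc.py bdev_nvme_detach_controller nvme0   # host side first
    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0          # already gone
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                         # reap it if it is our own child
    }
    killprocess_sketch "$nvmfpid"
    sudo modprobe -v -r nvme-rdma
    sudo modprobe -v -r nvme-fabrics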
************************************ 00:19:01.000 END TEST nvmf_async_init 00:19:01.000 ************************************ 00:19:01.259 11:42:31 -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:19:01.259 11:42:31 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:01.259 11:42:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:01.259 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:19:01.259 ************************************ 00:19:01.259 START TEST dma 00:19:01.259 ************************************ 00:19:01.259 11:42:31 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:19:01.259 * Looking for test storage... 00:19:01.259 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:01.259 11:42:31 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:01.259 11:42:31 -- nvmf/common.sh@7 -- # uname -s 00:19:01.259 11:42:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:01.259 11:42:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:01.259 11:42:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:01.259 11:42:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:01.259 11:42:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:01.259 11:42:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:01.259 11:42:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:01.259 11:42:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:01.259 11:42:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:01.259 11:42:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:01.259 11:42:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:19:01.259 11:42:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:19:01.259 11:42:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:01.259 11:42:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:01.259 11:42:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:01.259 11:42:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:01.259 11:42:31 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:01.259 11:42:31 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:01.259 11:42:31 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:01.259 11:42:31 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:01.259 11:42:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.259 11:42:31 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.259 11:42:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.259 11:42:31 -- paths/export.sh@5 -- # export PATH 00:19:01.259 11:42:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.259 11:42:31 -- nvmf/common.sh@47 -- # : 0 00:19:01.259 11:42:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:01.259 11:42:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:01.259 11:42:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:01.259 11:42:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:01.259 11:42:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:01.259 11:42:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:01.259 11:42:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:01.259 11:42:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:01.259 11:42:31 -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:19:01.259 11:42:31 -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:19:01.259 11:42:31 -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:19:01.259 11:42:31 -- host/dma.sh@18 -- # subsystem=0 00:19:01.259 11:42:31 -- host/dma.sh@93 -- # nvmftestinit 00:19:01.259 11:42:31 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:19:01.259 11:42:31 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:01.259 11:42:31 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:01.259 11:42:31 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:01.259 11:42:31 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:01.259 11:42:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:01.259 11:42:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:01.260 11:42:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:01.260 11:42:31 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:01.260 11:42:31 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:01.260 11:42:31 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:01.260 11:42:31 -- 
common/autotest_common.sh@10 -- # set +x 00:19:07.848 11:42:38 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:07.848 11:42:38 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:07.848 11:42:38 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:07.848 11:42:38 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:07.848 11:42:38 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:07.848 11:42:38 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:07.848 11:42:38 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:07.848 11:42:38 -- nvmf/common.sh@295 -- # net_devs=() 00:19:07.848 11:42:38 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:07.848 11:42:38 -- nvmf/common.sh@296 -- # e810=() 00:19:07.848 11:42:38 -- nvmf/common.sh@296 -- # local -ga e810 00:19:07.848 11:42:38 -- nvmf/common.sh@297 -- # x722=() 00:19:07.848 11:42:38 -- nvmf/common.sh@297 -- # local -ga x722 00:19:07.848 11:42:38 -- nvmf/common.sh@298 -- # mlx=() 00:19:07.848 11:42:38 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:07.848 11:42:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:07.848 11:42:38 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:07.848 11:42:38 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:07.848 11:42:38 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:07.848 11:42:38 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:07.848 11:42:38 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:07.848 11:42:38 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:07.848 11:42:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:07.848 11:42:38 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:07.848 11:42:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:07.848 11:42:38 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:07.848 11:42:38 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:07.848 11:42:38 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:07.848 11:42:38 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:07.848 11:42:38 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:07.848 11:42:38 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:07.848 11:42:38 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:07.848 11:42:38 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:07.848 11:42:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:07.848 11:42:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:19:07.848 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:19:07.848 11:42:38 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:07.848 11:42:38 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:07.848 11:42:38 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:07.848 11:42:38 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:07.848 11:42:38 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:07.848 11:42:38 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:07.848 11:42:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:07.848 11:42:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:19:07.848 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:19:07.848 11:42:38 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:07.848 11:42:38 -- nvmf/common.sh@346 -- # [[ 
mlx5_core == unbound ]] 00:19:07.848 11:42:38 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:07.848 11:42:38 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:07.848 11:42:38 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:07.848 11:42:38 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:07.848 11:42:38 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:07.848 11:42:38 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:07.848 11:42:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:07.848 11:42:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.848 11:42:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:07.848 11:42:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.848 11:42:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:19:07.848 Found net devices under 0000:18:00.0: mlx_0_0 00:19:07.848 11:42:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.848 11:42:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:07.848 11:42:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.848 11:42:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:07.848 11:42:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.848 11:42:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:19:07.848 Found net devices under 0000:18:00.1: mlx_0_1 00:19:07.848 11:42:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.848 11:42:38 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:07.848 11:42:38 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:07.848 11:42:38 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:07.848 11:42:38 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:19:07.848 11:42:38 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:19:07.848 11:42:38 -- nvmf/common.sh@409 -- # rdma_device_init 00:19:07.848 11:42:38 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:19:07.848 11:42:38 -- nvmf/common.sh@58 -- # uname 00:19:07.848 11:42:38 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:07.848 11:42:38 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:07.848 11:42:38 -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:07.848 11:42:38 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:07.848 11:42:38 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:07.848 11:42:38 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:07.848 11:42:38 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:07.848 11:42:38 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:07.848 11:42:38 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:19:07.848 11:42:38 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:07.848 11:42:38 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:07.848 11:42:38 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:07.848 11:42:38 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:07.848 11:42:38 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:07.848 11:42:38 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:07.848 11:42:38 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:07.849 11:42:38 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:07.849 11:42:38 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:07.849 11:42:38 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:07.849 11:42:38 -- nvmf/common.sh@104 
-- # echo mlx_0_0 00:19:07.849 11:42:38 -- nvmf/common.sh@105 -- # continue 2 00:19:07.849 11:42:38 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:07.849 11:42:38 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:07.849 11:42:38 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:07.849 11:42:38 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:07.849 11:42:38 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:07.849 11:42:38 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:07.849 11:42:38 -- nvmf/common.sh@105 -- # continue 2 00:19:07.849 11:42:38 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:07.849 11:42:38 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:07.849 11:42:38 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:07.849 11:42:38 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:07.849 11:42:38 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:07.849 11:42:38 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:07.849 11:42:38 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:07.849 11:42:38 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:07.849 11:42:38 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:07.849 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:07.849 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:19:07.849 altname enp24s0f0np0 00:19:07.849 altname ens785f0np0 00:19:07.849 inet 192.168.100.8/24 scope global mlx_0_0 00:19:07.849 valid_lft forever preferred_lft forever 00:19:07.849 11:42:38 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:07.849 11:42:38 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:07.849 11:42:38 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:07.849 11:42:38 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:07.849 11:42:38 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:07.849 11:42:38 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:07.849 11:42:38 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:07.849 11:42:38 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:07.849 11:42:38 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:07.849 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:07.849 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:19:07.849 altname enp24s0f1np1 00:19:07.849 altname ens785f1np1 00:19:07.849 inet 192.168.100.9/24 scope global mlx_0_1 00:19:07.849 valid_lft forever preferred_lft forever 00:19:07.849 11:42:38 -- nvmf/common.sh@411 -- # return 0 00:19:07.849 11:42:38 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:07.849 11:42:38 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:07.849 11:42:38 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:19:07.849 11:42:38 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:19:07.849 11:42:38 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:07.849 11:42:38 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:07.849 11:42:38 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:07.849 11:42:38 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:07.849 11:42:38 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:07.849 11:42:38 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:07.849 11:42:38 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:07.849 11:42:38 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:07.849 11:42:38 -- 
nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:07.849 11:42:38 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:07.849 11:42:38 -- nvmf/common.sh@105 -- # continue 2 00:19:07.849 11:42:38 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:07.849 11:42:38 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:07.849 11:42:38 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:07.849 11:42:38 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:07.849 11:42:38 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:07.849 11:42:38 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:07.849 11:42:38 -- nvmf/common.sh@105 -- # continue 2 00:19:07.849 11:42:38 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:07.849 11:42:38 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:07.849 11:42:38 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:07.849 11:42:38 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:07.849 11:42:38 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:07.849 11:42:38 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:07.849 11:42:38 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:07.849 11:42:38 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:07.849 11:42:38 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:07.849 11:42:38 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:07.849 11:42:38 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:07.849 11:42:38 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:07.849 11:42:38 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:19:07.849 192.168.100.9' 00:19:07.849 11:42:38 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:07.849 192.168.100.9' 00:19:07.849 11:42:38 -- nvmf/common.sh@446 -- # head -n 1 00:19:07.849 11:42:38 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:07.849 11:42:38 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:19:07.849 192.168.100.9' 00:19:07.849 11:42:38 -- nvmf/common.sh@447 -- # tail -n +2 00:19:07.849 11:42:38 -- nvmf/common.sh@447 -- # head -n 1 00:19:07.849 11:42:38 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:07.849 11:42:38 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:19:07.849 11:42:38 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:07.849 11:42:38 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:19:07.849 11:42:38 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:19:07.849 11:42:38 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:19:07.849 11:42:38 -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:19:07.849 11:42:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:07.849 11:42:38 -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:07.849 11:42:38 -- common/autotest_common.sh@10 -- # set +x 00:19:07.849 11:42:38 -- nvmf/common.sh@470 -- # nvmfpid=3070075 00:19:07.849 11:42:38 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:07.849 11:42:38 -- nvmf/common.sh@471 -- # waitforlisten 3070075 00:19:07.849 11:42:38 -- common/autotest_common.sh@827 -- # '[' -z 3070075 ']' 00:19:07.849 11:42:38 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.849 11:42:38 -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:07.849 11:42:38 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:19:07.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.849 11:42:38 -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:07.849 11:42:38 -- common/autotest_common.sh@10 -- # set +x 00:19:07.849 [2024-05-15 11:42:38.510931] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:19:07.849 [2024-05-15 11:42:38.510991] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:07.849 EAL: No free 2048 kB hugepages reported on node 1 00:19:07.849 [2024-05-15 11:42:38.583520] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:08.109 [2024-05-15 11:42:38.674578] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:08.109 [2024-05-15 11:42:38.674620] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:08.109 [2024-05-15 11:42:38.674629] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:08.109 [2024-05-15 11:42:38.674653] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:08.109 [2024-05-15 11:42:38.674660] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:08.109 [2024-05-15 11:42:38.674707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:08.109 [2024-05-15 11:42:38.674710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.678 11:42:39 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:08.678 11:42:39 -- common/autotest_common.sh@860 -- # return 0 00:19:08.678 11:42:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:08.678 11:42:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:08.678 11:42:39 -- common/autotest_common.sh@10 -- # set +x 00:19:08.678 11:42:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:08.678 11:42:39 -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:19:08.678 11:42:39 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.678 11:42:39 -- common/autotest_common.sh@10 -- # set +x 00:19:08.678 [2024-05-15 11:42:39.382118] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1fc4930/0x1fc8e20) succeed. 00:19:08.678 [2024-05-15 11:42:39.391270] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1fc5e30/0x200a4b0) succeed. 
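The nvmfappstart/waitforlisten trace above reduces to three steps: launch the target, wait for its RPC socket, create the RDMA transport. A minimal standalone sketch (the socket poll is a crude stand-in for waitforlisten, which additionally checks that the pid is still alive):

  # launch nvmf_tgt on two cores with all tracepoint groups enabled
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  # block until the RPC listener at /var/tmp/spdk.sock appears
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
  # same transport RPC that host/dma.sh@96 issues above
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024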
00:19:08.938 11:42:39 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.938 11:42:39 -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:19:08.938 11:42:39 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.938 11:42:39 -- common/autotest_common.sh@10 -- # set +x 00:19:08.938 Malloc0 00:19:08.938 11:42:39 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.938 11:42:39 -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:08.938 11:42:39 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.938 11:42:39 -- common/autotest_common.sh@10 -- # set +x 00:19:08.938 11:42:39 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.938 11:42:39 -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:19:08.938 11:42:39 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.938 11:42:39 -- common/autotest_common.sh@10 -- # set +x 00:19:08.938 11:42:39 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.938 11:42:39 -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:19:08.938 11:42:39 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.938 11:42:39 -- common/autotest_common.sh@10 -- # set +x 00:19:08.938 [2024-05-15 11:42:39.545530] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:08.938 [2024-05-15 11:42:39.545908] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:08.938 11:42:39 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.938 11:42:39 -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:19:08.938 11:42:39 -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:19:08.938 11:42:39 -- nvmf/common.sh@521 -- # config=() 00:19:08.938 11:42:39 -- nvmf/common.sh@521 -- # local subsystem config 00:19:08.938 11:42:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:08.938 11:42:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:08.938 { 00:19:08.938 "params": { 00:19:08.938 "name": "Nvme$subsystem", 00:19:08.938 "trtype": "$TEST_TRANSPORT", 00:19:08.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:08.939 "adrfam": "ipv4", 00:19:08.939 "trsvcid": "$NVMF_PORT", 00:19:08.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:08.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:08.939 "hdgst": ${hdgst:-false}, 00:19:08.939 "ddgst": ${ddgst:-false} 00:19:08.939 }, 00:19:08.939 "method": "bdev_nvme_attach_controller" 00:19:08.939 } 00:19:08.939 EOF 00:19:08.939 )") 00:19:08.939 11:42:39 -- nvmf/common.sh@543 -- # cat 00:19:08.939 11:42:39 -- nvmf/common.sh@545 -- # jq . 
00:19:08.939 11:42:39 -- nvmf/common.sh@546 -- # IFS=,
00:19:08.939 11:42:39 -- nvmf/common.sh@547 -- # printf '%s\n' '{
00:19:08.939   "params": {
00:19:08.939     "name": "Nvme0",
00:19:08.939     "trtype": "rdma",
00:19:08.939     "traddr": "192.168.100.8",
00:19:08.939     "adrfam": "ipv4",
00:19:08.939     "trsvcid": "4420",
00:19:08.939     "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:19:08.939     "hostnqn": "nqn.2016-06.io.spdk:host0",
00:19:08.939     "hdgst": false,
00:19:08.939     "ddgst": false
00:19:08.939   },
00:19:08.939   "method": "bdev_nvme_attach_controller"
00:19:08.939 }'
00:19:08.939 [2024-05-15 11:42:39.598327] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization...
00:19:08.939 [2024-05-15 11:42:39.598393] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3070146 ]
00:19:08.939 EAL: No free 2048 kB hugepages reported on node 1
00:19:08.939 [2024-05-15 11:42:39.668686] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2
00:19:09.198 [2024-05-15 11:42:39.754974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:19:09.198 [2024-05-15 11:42:39.754976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:19:14.473 bdev Nvme0n1 reports 1 memory domains
00:19:14.473 bdev Nvme0n1 supports RDMA memory domain
00:19:14.473 Initialization complete, running randrw IO for 5 sec on 2 cores
00:19:14.473 ==========================================================================
00:19:14.473                                                                  Latency [us]
00:19:14.473        IOPS      MiB/s    Average        min        max
00:19:14.473 Core 2:  21621.48      84.46     739.21     250.22    9073.58
00:19:14.473 Core 3:  21817.81      85.23     732.53     239.83    9122.35
00:19:14.473 ==========================================================================
00:19:14.473 Total  :  43439.28     169.68     735.86     239.83    9122.35
00:19:14.473
00:19:14.473 Total operations: 217272, translate 217272 pull_push 0 memzero 0
00:19:14.473 11:42:45 -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push
00:19:14.473 11:42:45 -- host/dma.sh@107 -- # gen_malloc_json
00:19:14.473 11:42:45 -- host/dma.sh@21 -- # jq .
00:19:14.732 [2024-05-15 11:42:45.261676] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization...
00:19:14.732 [2024-05-15 11:42:45.261742] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3070904 ]
00:19:14.732 EAL: No free 2048 kB hugepages reported on node 1
00:19:14.732 [2024-05-15 11:42:45.331274] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2
00:19:14.732 [2024-05-15 11:42:45.413595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:19:14.732 [2024-05-15 11:42:45.413598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:19:20.076 bdev Malloc0 reports 2 memory domains
00:19:20.076 bdev Malloc0 doesn't support RDMA memory domain
00:19:20.076 Initialization complete, running randrw IO for 5 sec on 2 cores
00:19:20.076 ==========================================================================
00:19:20.076                                                                  Latency [us]
00:19:20.076        IOPS      MiB/s    Average        min        max
00:19:20.076 Core 2:  14461.34      56.49    1105.66     369.40    2085.99
00:19:20.077 Core 3:  14590.70      56.99    1095.83     416.15    2433.31
00:19:20.077 ==========================================================================
00:19:20.077 Total  :  29052.04     113.48    1100.72     369.40    2433.31
00:19:20.077
00:19:20.077 Total operations: 145309, translate 0 pull_push 581236 memzero 0
00:19:20.077 11:42:50 -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero
00:19:20.077 11:42:50 -- host/dma.sh@110 -- # gen_lvol_nvme_json 0
00:19:20.077 11:42:50 -- host/dma.sh@48 -- # local subsystem=0
00:19:20.077 11:42:50 -- host/dma.sh@50 -- # jq .
00:19:20.335 Ignoring -M option
00:19:20.335 [2024-05-15 11:42:50.840154] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization...
00:19:20.335 [2024-05-15 11:42:50.840237] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3071638 ]
00:19:20.335 EAL: No free 2048 kB hugepages reported on node 1
00:19:20.335 [2024-05-15 11:42:50.913313] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2
00:19:20.335 [2024-05-15 11:42:50.993865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:19:20.335 [2024-05-15 11:42:50.993868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:19:26.908 bdev d2431d2b-f07d-4c92-9d03-98f5f6ad0b80 reports 1 memory domains
00:19:26.908 bdev d2431d2b-f07d-4c92-9d03-98f5f6ad0b80 supports RDMA memory domain
00:19:26.908 Initialization complete, running randread IO for 5 sec on 2 cores
00:19:26.908 ==========================================================================
00:19:26.908                                                                  Latency [us]
00:19:26.908        IOPS      MiB/s    Average        min        max
00:19:26.908 Core 2:  67392.07     263.25     236.46      90.63    1591.50
00:19:26.908 Core 3:  67535.06     263.81     235.96      80.37    1572.83
00:19:26.908 ==========================================================================
00:19:26.908 Total  : 134927.13     527.06     236.21      80.37    1591.50
00:19:26.908
00:19:26.908 Total operations: 674715, translate 0 pull_push 0 memzero 674715
00:19:26.908 11:42:56 -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420'
00:19:26.908 EAL: No free 2048 kB hugepages reported on node 1
00:19:26.908 [2024-05-15 11:42:56.583821] subsystem.c:1520:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:19:28.288 Initializing NVMe Controllers
00:19:28.288 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0
00:19:28.288 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0
00:19:28.288 Initialization complete. Launching workers.
00:19:28.288 ========================================================
00:19:28.288                                                                      Latency(us)
00:19:28.288 Device Information                                       :       IOPS      MiB/s    Average        min        max
00:19:28.288 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0:    2016.00       7.88    7980.12    7950.28    7997.40
00:19:28.288 ========================================================
00:19:28.288 Total                                                    :    2016.00       7.88    7980.12    7950.28    7997.40
00:19:28.288
00:19:28.288 11:42:58 -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate
00:19:28.288 11:42:58 -- host/dma.sh@116 -- # gen_lvol_nvme_json 0
00:19:28.288 11:42:58 -- host/dma.sh@48 -- # local subsystem=0
00:19:28.288 11:42:58 -- host/dma.sh@50 -- # jq .
00:19:28.288 [2024-05-15 11:42:58.939949] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization...
00:19:28.288 [2024-05-15 11:42:58.940000] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3072682 ]
00:19:28.288 EAL: No free 2048 kB hugepages reported on node 1
00:19:28.288 [2024-05-15 11:42:59.008024] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2
00:19:28.547 [2024-05-15 11:42:59.094128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:19:28.547 [2024-05-15 11:42:59.094132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:19:33.824 bdev bd2c6157-1380-43a6-996e-7748c59a25e0 reports 1 memory domains
00:19:33.824 bdev bd2c6157-1380-43a6-996e-7748c59a25e0 supports RDMA memory domain
00:19:33.824 Initialization complete, running randrw IO for 5 sec on 2 cores
00:19:33.824 ==========================================================================
00:19:33.824                                                                  Latency [us]
00:19:33.824        IOPS      MiB/s    Average        min        max
00:19:33.824 Core 2:  19015.57      74.28     840.48      49.48    9489.58
00:19:33.824 Core 3:  19395.03      75.76     824.09      13.34    9665.86
00:19:33.824 ==========================================================================
00:19:33.824 Total  :  38410.60     150.04     832.20      13.34    9665.86
00:19:33.824
00:19:33.824 Total operations: 192126, translate 192019 pull_push 0 memzero 107
00:19:34.083 11:43:04 -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT
00:19:34.083 11:43:04 -- host/dma.sh@120 -- # nvmftestfini
00:19:34.083 11:43:04 -- nvmf/common.sh@477 -- # nvmfcleanup
00:19:34.083 11:43:04 -- nvmf/common.sh@117 -- # sync
00:19:34.083 11:43:04 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:19:34.083 11:43:04 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:19:34.083 11:43:04 -- nvmf/common.sh@120 -- # set +e
00:19:34.083 11:43:04 -- nvmf/common.sh@121 -- # for i in {1..20}
00:19:34.083 11:43:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:19:34.083 rmmod nvme_rdma
00:19:34.083 rmmod nvme_fabrics
00:19:34.083 11:43:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:19:34.083 11:43:04 -- nvmf/common.sh@124 -- # set -e
00:19:34.083 11:43:04 -- nvmf/common.sh@125 -- # return 0
00:19:34.083 11:43:04 -- nvmf/common.sh@478 -- # '[' -n 3070075 ']'
00:19:34.083 11:43:04 -- nvmf/common.sh@479 -- # killprocess 3070075
00:19:34.083 11:43:04 -- common/autotest_common.sh@946 -- # '[' -z 3070075 ']'
00:19:34.083 11:43:04 -- common/autotest_common.sh@950 -- # kill -0 3070075
00:19:34.083 11:43:04 -- common/autotest_common.sh@951 -- # uname
00:19:34.083 11:43:04 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:19:34.083 11:43:04 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3070075
00:19:34.083 11:43:04 -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:19:34.083 11:43:04 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:19:34.083 11:43:04 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3070075'
00:19:34.083 killing process with pid 3070075
00:19:34.083 11:43:04 -- common/autotest_common.sh@965 -- # kill 3070075
00:19:34.083 [2024-05-15 11:43:04.679900] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:19:34.083 11:43:04 -- common/autotest_common.sh@970 -- # wait 3070075
00:19:34.083 [2024-05-15 11:43:04.734432] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr
pool count is 4095 but should be 2048 00:19:34.342 11:43:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:34.342 11:43:05 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:19:34.342 00:19:34.342 real 0m33.210s 00:19:34.342 user 1m37.535s 00:19:34.342 sys 0m6.256s 00:19:34.342 11:43:05 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:34.342 11:43:05 -- common/autotest_common.sh@10 -- # set +x 00:19:34.342 ************************************ 00:19:34.342 END TEST dma 00:19:34.342 ************************************ 00:19:34.342 11:43:05 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:19:34.342 11:43:05 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:34.342 11:43:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:34.342 11:43:05 -- common/autotest_common.sh@10 -- # set +x 00:19:34.602 ************************************ 00:19:34.602 START TEST nvmf_identify 00:19:34.602 ************************************ 00:19:34.602 11:43:05 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:19:34.602 * Looking for test storage... 00:19:34.602 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:34.602 11:43:05 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:34.602 11:43:05 -- nvmf/common.sh@7 -- # uname -s 00:19:34.602 11:43:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:34.602 11:43:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:34.602 11:43:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:34.602 11:43:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:34.602 11:43:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:34.602 11:43:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:34.602 11:43:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:34.602 11:43:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:34.602 11:43:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:34.602 11:43:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:34.602 11:43:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:19:34.602 11:43:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:19:34.602 11:43:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:34.602 11:43:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:34.602 11:43:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:34.602 11:43:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:34.602 11:43:05 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:34.602 11:43:05 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:34.602 11:43:05 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:34.602 11:43:05 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:34.602 11:43:05 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.602 11:43:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.602 11:43:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.602 11:43:05 -- paths/export.sh@5 -- # export PATH 00:19:34.602 11:43:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.602 11:43:05 -- nvmf/common.sh@47 -- # : 0 00:19:34.602 11:43:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:34.602 11:43:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:34.602 11:43:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:34.602 11:43:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:34.603 11:43:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:34.603 11:43:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:34.603 11:43:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:34.603 11:43:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:34.603 11:43:05 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:34.603 11:43:05 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:34.603 11:43:05 -- host/identify.sh@14 -- # nvmftestinit 00:19:34.603 11:43:05 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:19:34.603 11:43:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:34.603 11:43:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:34.603 11:43:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:34.603 11:43:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:34.603 11:43:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:19:34.603 11:43:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:34.603 11:43:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.603 11:43:05 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:34.603 11:43:05 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:34.603 11:43:05 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:34.603 11:43:05 -- common/autotest_common.sh@10 -- # set +x 00:19:41.180 11:43:10 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:41.180 11:43:10 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:41.180 11:43:10 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:41.180 11:43:10 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:41.180 11:43:10 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:41.180 11:43:10 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:41.180 11:43:10 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:41.180 11:43:10 -- nvmf/common.sh@295 -- # net_devs=() 00:19:41.180 11:43:10 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:41.180 11:43:10 -- nvmf/common.sh@296 -- # e810=() 00:19:41.180 11:43:10 -- nvmf/common.sh@296 -- # local -ga e810 00:19:41.180 11:43:10 -- nvmf/common.sh@297 -- # x722=() 00:19:41.180 11:43:10 -- nvmf/common.sh@297 -- # local -ga x722 00:19:41.180 11:43:10 -- nvmf/common.sh@298 -- # mlx=() 00:19:41.181 11:43:10 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:41.181 11:43:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:41.181 11:43:10 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:41.181 11:43:10 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:41.181 11:43:10 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:41.181 11:43:10 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:41.181 11:43:10 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:41.181 11:43:10 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:41.181 11:43:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:41.181 11:43:10 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:41.181 11:43:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:41.181 11:43:10 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:41.181 11:43:10 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:41.181 11:43:10 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:41.181 11:43:10 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:41.181 11:43:10 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:41.181 11:43:10 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:41.181 11:43:10 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:41.181 11:43:10 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:41.181 11:43:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:41.181 11:43:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:19:41.181 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:19:41.181 11:43:10 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:41.181 11:43:10 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:41.181 11:43:10 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:41.181 11:43:10 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:41.181 11:43:10 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 
00:19:41.181 11:43:10 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:41.181 11:43:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:41.181 11:43:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:19:41.181 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:19:41.181 11:43:10 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:41.181 11:43:10 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:41.181 11:43:10 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:41.181 11:43:10 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:41.181 11:43:10 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:41.181 11:43:10 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:41.181 11:43:10 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:41.181 11:43:10 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:41.181 11:43:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:41.181 11:43:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:41.181 11:43:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:41.181 11:43:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:41.181 11:43:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:19:41.181 Found net devices under 0000:18:00.0: mlx_0_0 00:19:41.181 11:43:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:41.181 11:43:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:41.181 11:43:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:41.181 11:43:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:41.181 11:43:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:41.181 11:43:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:19:41.181 Found net devices under 0000:18:00.1: mlx_0_1 00:19:41.181 11:43:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:41.181 11:43:10 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:41.181 11:43:10 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:41.181 11:43:10 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:41.181 11:43:10 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:19:41.181 11:43:10 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:19:41.181 11:43:10 -- nvmf/common.sh@409 -- # rdma_device_init 00:19:41.181 11:43:10 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:19:41.181 11:43:10 -- nvmf/common.sh@58 -- # uname 00:19:41.181 11:43:10 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:41.181 11:43:10 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:41.181 11:43:10 -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:41.181 11:43:10 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:41.181 11:43:10 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:41.181 11:43:10 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:41.181 11:43:10 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:41.181 11:43:10 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:41.181 11:43:10 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:19:41.181 11:43:10 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:41.181 11:43:10 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:41.181 11:43:10 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:41.181 11:43:10 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:41.181 11:43:10 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:41.181 11:43:11 -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:19:41.181 11:43:11 -- nvmf/common.sh@96 -- # (( 2 == 0 ))
00:19:41.181 11:43:11 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:19:41.181 11:43:11 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:19:41.181 11:43:11 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:19:41.181 11:43:11 -- nvmf/common.sh@104 -- # echo mlx_0_0
00:19:41.181 11:43:11 -- nvmf/common.sh@105 -- # continue 2
00:19:41.181 11:43:11 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:19:41.181 11:43:11 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:19:41.181 11:43:11 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:19:41.181 11:43:11 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:19:41.181 11:43:11 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:19:41.181 11:43:11 -- nvmf/common.sh@104 -- # echo mlx_0_1
00:19:41.181 11:43:11 -- nvmf/common.sh@105 -- # continue 2
00:19:41.181 11:43:11 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list)
00:19:41.181 11:43:11 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0
00:19:41.181 11:43:11 -- nvmf/common.sh@112 -- # interface=mlx_0_0
00:19:41.181 11:43:11 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0
00:19:41.181 11:43:11 -- nvmf/common.sh@113 -- # awk '{print $4}'
00:19:41.181 11:43:11 -- nvmf/common.sh@113 -- # cut -d/ -f1
00:19:41.181 11:43:11 -- nvmf/common.sh@74 -- # ip=192.168.100.8
00:19:41.181 11:43:11 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]]
00:19:41.181 11:43:11 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0
00:19:41.181 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:19:41.181 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff
00:19:41.181 altname enp24s0f0np0
00:19:41.181 altname ens785f0np0
00:19:41.181 inet 192.168.100.8/24 scope global mlx_0_0
00:19:41.181 valid_lft forever preferred_lft forever
00:19:41.181 11:43:11 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list)
00:19:41.181 11:43:11 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1
00:19:41.181 11:43:11 -- nvmf/common.sh@112 -- # interface=mlx_0_1
00:19:41.181 11:43:11 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1
00:19:41.181 11:43:11 -- nvmf/common.sh@113 -- # cut -d/ -f1
00:19:41.181 11:43:11 -- nvmf/common.sh@113 -- # awk '{print $4}'
00:19:41.181 11:43:11 -- nvmf/common.sh@74 -- # ip=192.168.100.9
00:19:41.181 11:43:11 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]]
00:19:41.181 11:43:11 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1
00:19:41.181 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:19:41.181 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff
00:19:41.181 altname enp24s0f1np1
00:19:41.181 altname ens785f1np1
00:19:41.181 inet 192.168.100.9/24 scope global mlx_0_1
00:19:41.181 valid_lft forever preferred_lft forever
00:19:41.181 11:43:11 -- nvmf/common.sh@411 -- # return 0
00:19:41.181 11:43:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:19:41.181 11:43:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:19:41.181 11:43:11 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]]
00:19:41.181 11:43:11 -- nvmf/common.sh@445 -- # get_available_rdma_ips
00:19:41.181 11:43:11 -- nvmf/common.sh@86 -- # get_rdma_if_list
00:19:41.181 11:43:11 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs
00:19:41.181 11:43:11 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs
00:19:41.181 11:43:11 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net
00:19:41.181 11:43:11 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:19:41.181 11:43:11 -- nvmf/common.sh@96 -- # (( 2 == 0 ))
00:19:41.181 11:43:11 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:19:41.181 11:43:11 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:19:41.181 11:43:11 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:19:41.181 11:43:11 -- nvmf/common.sh@104 -- # echo mlx_0_0
00:19:41.181 11:43:11 -- nvmf/common.sh@105 -- # continue 2
00:19:41.181 11:43:11 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:19:41.182 11:43:11 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:19:41.182 11:43:11 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:19:41.182 11:43:11 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:19:41.182 11:43:11 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:19:41.182 11:43:11 -- nvmf/common.sh@104 -- # echo mlx_0_1
00:19:41.182 11:43:11 -- nvmf/common.sh@105 -- # continue 2
00:19:41.182 11:43:11 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list)
00:19:41.182 11:43:11 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0
00:19:41.182 11:43:11 -- nvmf/common.sh@112 -- # interface=mlx_0_0
00:19:41.182 11:43:11 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0
00:19:41.182 11:43:11 -- nvmf/common.sh@113 -- # awk '{print $4}'
00:19:41.182 11:43:11 -- nvmf/common.sh@113 -- # cut -d/ -f1
00:19:41.182 11:43:11 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list)
00:19:41.182 11:43:11 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1
00:19:41.182 11:43:11 -- nvmf/common.sh@112 -- # interface=mlx_0_1
00:19:41.182 11:43:11 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1
00:19:41.182 11:43:11 -- nvmf/common.sh@113 -- # cut -d/ -f1
00:19:41.182 11:43:11 -- nvmf/common.sh@113 -- # awk '{print $4}'
00:19:41.182 11:43:11 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8
00:19:41.182 192.168.100.9'
00:19:41.182 11:43:11 -- nvmf/common.sh@446 -- # echo '192.168.100.8
00:19:41.182 192.168.100.9'
00:19:41.182 11:43:11 -- nvmf/common.sh@446 -- # head -n 1
00:19:41.182 11:43:11 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:19:41.182 11:43:11 -- nvmf/common.sh@447 -- # echo '192.168.100.8
00:19:41.182 192.168.100.9'
00:19:41.182 11:43:11 -- nvmf/common.sh@447 -- # head -n 1
00:19:41.182 11:43:11 -- nvmf/common.sh@447 -- # tail -n +2
00:19:41.182 11:43:11 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:19:41.182 11:43:11 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']'
00:19:41.182 11:43:11 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:19:41.182 11:43:11 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']'
00:19:41.182 11:43:11 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']'
00:19:41.182 11:43:11 -- nvmf/common.sh@463 -- # modprobe nvme-rdma
00:19:41.182 11:43:11 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt
00:19:41.182 11:43:11 -- common/autotest_common.sh@720 -- # xtrace_disable
00:19:41.182 11:43:11 -- common/autotest_common.sh@10 -- # set +x
00:19:41.182 11:43:11 -- host/identify.sh@19 -- # nvmfpid=3076298
00:19:41.182 11:43:11 -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:19:41.182 11:43:11 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:19:41.182 11:43:11 -- host/identify.sh@23 -- # waitforlisten 3076298
00:19:41.182 11:43:11 -- common/autotest_common.sh@827 -- # '[' -z 3076298 ']'
00:19:41.182 11:43:11 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:41.182 11:43:11 -- common/autotest_common.sh@832 -- # local max_retries=100
00:19:41.182 11:43:11 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:41.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:41.182 11:43:11 -- common/autotest_common.sh@836 -- # xtrace_disable
00:19:41.182 11:43:11 -- common/autotest_common.sh@10 -- # set +x
00:19:41.182 [2024-05-15 11:43:11.236612] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization...
00:19:41.182 [2024-05-15 11:43:11.236671] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:19:41.182 EAL: No free 2048 kB hugepages reported on node 1
00:19:41.182 [2024-05-15 11:43:11.309993] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:19:41.182 [2024-05-15 11:43:11.400071] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:19:41.182 [2024-05-15 11:43:11.400115] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:19:41.182 [2024-05-15 11:43:11.400125] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:19:41.182 [2024-05-15 11:43:11.400134] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:19:41.182 [2024-05-15 11:43:11.400141] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:19:41.182 [2024-05-15 11:43:11.400192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:19:41.182 [2024-05-15 11:43:11.400213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:19:41.182 [2024-05-15 11:43:11.400289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:19:41.182 [2024-05-15 11:43:11.400291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:41.443 11:43:12 -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:19:41.443 11:43:12 -- common/autotest_common.sh@860 -- # return 0
00:19:41.443 11:43:12 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:19:41.443 11:43:12 -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:41.443 11:43:12 -- common/autotest_common.sh@10 -- # set +x
00:19:41.443 [2024-05-15 11:43:12.094841] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x21a4f00/0x21a93f0) succeed.
00:19:41.443 [2024-05-15 11:43:12.105512] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x21a6540/0x21eaa80) succeed.
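The bring-up traced above boils down to a few shell steps. A minimal standalone sketch follows, assuming the job's SPDK checkout at /var/jenkins/workspace/nvmf-phy-autotest/spdk; the rpc.py poll loop is an assumed stand-in for the harness's waitforlisten helper, not the helper itself.

#!/usr/bin/env bash
# Sketch of the environment bring-up recorded in the trace above.
set -e
cd /var/jenkins/workspace/nvmf-phy-autotest/spdk

# Derive the per-port RDMA IPs the way nvmf/common.sh does:
get_ip_address() {
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}
RDMA_IP_LIST="$(get_ip_address mlx_0_0)
$(get_ip_address mlx_0_1)"
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
modprobe nvme-rdma

# Start the target and wait for its RPC socket; this poll loop is a
# simplified stand-in for the waitforlisten helper used by the job.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done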
00:19:41.704 11:43:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:41.704 11:43:12 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt
00:19:41.704 11:43:12 -- common/autotest_common.sh@726 -- # xtrace_disable
00:19:41.704 11:43:12 -- common/autotest_common.sh@10 -- # set +x
00:19:41.704 11:43:12 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:19:41.704 11:43:12 -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:41.704 11:43:12 -- common/autotest_common.sh@10 -- # set +x
00:19:41.704 Malloc0
00:19:41.704 11:43:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:41.704 11:43:12 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:19:41.704 11:43:12 -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:41.704 11:43:12 -- common/autotest_common.sh@10 -- # set +x
00:19:41.704 11:43:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:41.704 11:43:12 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
00:19:41.704 11:43:12 -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:41.704 11:43:12 -- common/autotest_common.sh@10 -- # set +x
00:19:41.704 11:43:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:41.704 11:43:12 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:19:41.704 11:43:12 -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:41.704 11:43:12 -- common/autotest_common.sh@10 -- # set +x
00:19:41.704 [2024-05-15 11:43:12.332046] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:19:41.704 [2024-05-15 11:43:12.332412] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:19:41.704 11:43:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:41.704 11:43:12 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:19:41.704 11:43:12 -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:41.704 11:43:12 -- common/autotest_common.sh@10 -- # set +x
00:19:41.704 11:43:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:41.704 11:43:12 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:19:41.704 11:43:12 -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:41.704 11:43:12 -- common/autotest_common.sh@10 -- # set +x
00:19:41.704 [
00:19:41.704 {
00:19:41.704 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:19:41.704 "subtype": "Discovery",
00:19:41.704 "listen_addresses": [
00:19:41.704 {
00:19:41.705 "trtype": "RDMA",
00:19:41.705 "adrfam": "IPv4",
00:19:41.705 "traddr": "192.168.100.8",
00:19:41.705 "trsvcid": "4420"
00:19:41.705 }
00:19:41.705 ],
00:19:41.705 "allow_any_host": true,
00:19:41.705 "hosts": []
00:19:41.705 },
00:19:41.705 {
00:19:41.705 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:19:41.705 "subtype": "NVMe",
00:19:41.705 "listen_addresses": [
00:19:41.705 {
00:19:41.705 "trtype": "RDMA",
00:19:41.705 "adrfam": "IPv4",
00:19:41.705 "traddr": "192.168.100.8",
00:19:41.705 "trsvcid": "4420"
00:19:41.705 }
00:19:41.705 ],
00:19:41.705 "allow_any_host": true,
00:19:41.705 "hosts": [],
00:19:41.705 "serial_number": "SPDK00000000000001",
00:19:41.705 "model_number": "SPDK bdev Controller",
00:19:41.705 "max_namespaces": 32,
00:19:41.705 "min_cntlid": 1,
00:19:41.705 "max_cntlid": 65519,
00:19:41.705 "namespaces": [
00:19:41.705 {
00:19:41.705 "nsid": 1,
00:19:41.705 "bdev_name": "Malloc0",
00:19:41.705 "name": "Malloc0",
00:19:41.705 "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:19:41.705 "eui64": "ABCDEF0123456789",
00:19:41.705 "uuid": "722afa01-3555-47c9-9b5d-a7554e2ac772"
00:19:41.705 }
00:19:41.705 ]
00:19:41.705 }
00:19:41.705 ]
00:19:41.705 11:43:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:41.705 11:43:12 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
00:19:41.705 [2024-05-15 11:43:12.391351] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization...
00:19:41.705 [2024-05-15 11:43:12.391402] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3076503 ]
00:19:41.705 EAL: No free 2048 kB hugepages reported on node 1
00:19:41.705 [2024-05-15 11:43:12.441227] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout)
00:19:41.705 [2024-05-15 11:43:12.441304] nvme_rdma.c:2261:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr
00:19:41.705 [2024-05-15 11:43:12.441318] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2
00:19:41.705 [2024-05-15 11:43:12.441323] nvme_rdma.c:1295:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420
00:19:41.705 [2024-05-15 11:43:12.441357] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout)
00:19:41.705 [2024-05-15 11:43:12.460431] nvme_rdma.c: 510:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32.
00:19:41.972 [2024-05-15 11:43:12.471333] nvme_rdma.c:1180:nvme_rdma_connect_established: *DEBUG*: rc =0 00:19:41.972 [2024-05-15 11:43:12.471345] nvme_rdma.c:1185:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:19:41.972 [2024-05-15 11:43:12.471353] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182e00 00:19:41.972 [2024-05-15 11:43:12.471361] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182e00 00:19:41.972 [2024-05-15 11:43:12.471368] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182e00 00:19:41.972 [2024-05-15 11:43:12.471375] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182e00 00:19:41.972 [2024-05-15 11:43:12.471381] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182e00 00:19:41.972 [2024-05-15 11:43:12.471388] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182e00 00:19:41.972 [2024-05-15 11:43:12.471394] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182e00 00:19:41.972 [2024-05-15 11:43:12.471400] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182e00 00:19:41.972 [2024-05-15 11:43:12.471407] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x182e00 00:19:41.972 [2024-05-15 11:43:12.471413] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182e00 00:19:41.972 [2024-05-15 11:43:12.471419] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x182e00 00:19:41.972 [2024-05-15 11:43:12.471426] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x182e00 00:19:41.972 [2024-05-15 11:43:12.471432] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x182e00 00:19:41.972 [2024-05-15 11:43:12.471439] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x182e00 00:19:41.972 [2024-05-15 11:43:12.471445] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x182e00 00:19:41.972 [2024-05-15 11:43:12.471451] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x182e00 00:19:41.972 [2024-05-15 11:43:12.471458] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x182e00 00:19:41.972 [2024-05-15 11:43:12.471464] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x182e00 00:19:41.972 [2024-05-15 11:43:12.471471] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x182e00 00:19:41.972 [2024-05-15 11:43:12.471477] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x182e00 00:19:41.973 [2024-05-15 11:43:12.471483] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x182e00 00:19:41.973 [2024-05-15 11:43:12.471493] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x182e00 00:19:41.973 [2024-05-15 11:43:12.471499] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x182e00 00:19:41.973 [2024-05-15 
11:43:12.471506] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x182e00 00:19:41.973 [2024-05-15 11:43:12.471512] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x182e00 00:19:41.973 [2024-05-15 11:43:12.471518] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x182e00 00:19:41.973 [2024-05-15 11:43:12.471525] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x182e00 00:19:41.973 [2024-05-15 11:43:12.471531] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x182e00 00:19:41.973 [2024-05-15 11:43:12.471538] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x182e00 00:19:41.973 [2024-05-15 11:43:12.471544] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x182e00 00:19:41.973 [2024-05-15 11:43:12.471550] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x182e00 00:19:41.973 [2024-05-15 11:43:12.471556] nvme_rdma.c:1199:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:19:41.973 [2024-05-15 11:43:12.471563] nvme_rdma.c:1202:nvme_rdma_connect_established: *DEBUG*: rc =0 00:19:41.973 [2024-05-15 11:43:12.471567] nvme_rdma.c:1207:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:19:41.973 [2024-05-15 11:43:12.471588] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:19:41.973 [2024-05-15 11:43:12.471603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x182e00 00:19:41.973 [2024-05-15 11:43:12.476061] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.973 [2024-05-15 11:43:12.476070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:19:41.973 [2024-05-15 11:43:12.476079] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182e00 00:19:41.973 [2024-05-15 11:43:12.476086] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:41.973 [2024-05-15 11:43:12.476093] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:19:41.973 [2024-05-15 11:43:12.476100] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:19:41.973 [2024-05-15 11:43:12.476119] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:19:41.973 [2024-05-15 11:43:12.476128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.973 [2024-05-15 11:43:12.476152] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.973 [2024-05-15 11:43:12.476157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:19:41.973 [2024-05-15 11:43:12.476167] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:19:41.973 [2024-05-15 11:43:12.476173] 
nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182e00 00:19:41.973 [2024-05-15 11:43:12.476180] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:19:41.973 [2024-05-15 11:43:12.476188] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:19:41.973 [2024-05-15 11:43:12.476196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.973 [2024-05-15 11:43:12.476215] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.973 [2024-05-15 11:43:12.476221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:19:41.973 [2024-05-15 11:43:12.476228] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:19:41.973 [2024-05-15 11:43:12.476235] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182e00 00:19:41.973 [2024-05-15 11:43:12.476242] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:19:41.973 [2024-05-15 11:43:12.476250] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:19:41.973 [2024-05-15 11:43:12.476257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.973 [2024-05-15 11:43:12.476275] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.973 [2024-05-15 11:43:12.476280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:41.973 [2024-05-15 11:43:12.476287] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:41.973 [2024-05-15 11:43:12.476293] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182e00 00:19:41.973 [2024-05-15 11:43:12.476302] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:19:41.973 [2024-05-15 11:43:12.476309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.973 [2024-05-15 11:43:12.476325] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.973 [2024-05-15 11:43:12.476330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:41.973 [2024-05-15 11:43:12.476337] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:19:41.973 [2024-05-15 11:43:12.476343] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:19:41.973 [2024-05-15 11:43:12.476349] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182e00 00:19:41.973 [2024-05-15 11:43:12.476356] 
nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:41.973 [2024-05-15 11:43:12.476463] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:19:41.973 [2024-05-15 11:43:12.476469] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:41.973 [2024-05-15 11:43:12.476478] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:19:41.973 [2024-05-15 11:43:12.476486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.973 [2024-05-15 11:43:12.476505] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.973 [2024-05-15 11:43:12.476510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:41.973 [2024-05-15 11:43:12.476517] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:41.973 [2024-05-15 11:43:12.476523] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182e00 00:19:41.973 [2024-05-15 11:43:12.476533] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:19:41.973 [2024-05-15 11:43:12.476541] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.973 [2024-05-15 11:43:12.476558] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.973 [2024-05-15 11:43:12.476564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:19:41.973 [2024-05-15 11:43:12.476570] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:41.973 [2024-05-15 11:43:12.476576] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:19:41.973 [2024-05-15 11:43:12.476582] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182e00 00:19:41.973 [2024-05-15 11:43:12.476590] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:19:41.973 [2024-05-15 11:43:12.476599] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:19:41.973 [2024-05-15 11:43:12.476609] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:19:41.973 [2024-05-15 11:43:12.476617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182e00 00:19:41.973 [2024-05-15 11:43:12.476653] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.973 [2024-05-15 11:43:12.476659] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:41.973 [2024-05-15 11:43:12.476668] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:19:41.973 [2024-05-15 11:43:12.476677] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:19:41.973 [2024-05-15 11:43:12.476682] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:19:41.973 [2024-05-15 11:43:12.476689] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:19:41.973 [2024-05-15 11:43:12.476695] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:19:41.973 [2024-05-15 11:43:12.476701] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:19:41.973 [2024-05-15 11:43:12.476707] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182e00 00:19:41.973 [2024-05-15 11:43:12.476714] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:19:41.973 [2024-05-15 11:43:12.476722] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:19:41.973 [2024-05-15 11:43:12.476730] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.973 [2024-05-15 11:43:12.476756] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.973 [2024-05-15 11:43:12.476762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:41.973 [2024-05-15 11:43:12.476771] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x182e00 00:19:41.973 [2024-05-15 11:43:12.476778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:41.973 [2024-05-15 11:43:12.476787] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x182e00 00:19:41.973 [2024-05-15 11:43:12.476794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:41.973 [2024-05-15 11:43:12.476801] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.973 [2024-05-15 11:43:12.476808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:41.973 [2024-05-15 11:43:12.476815] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x182e00 00:19:41.973 [2024-05-15 11:43:12.476822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:41.973 [2024-05-15 11:43:12.476828] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout 
(timeout 30000 ms) 00:19:41.973 [2024-05-15 11:43:12.476834] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x182e00 00:19:41.973 [2024-05-15 11:43:12.476845] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:41.973 [2024-05-15 11:43:12.476853] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:19:41.973 [2024-05-15 11:43:12.476861] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.973 [2024-05-15 11:43:12.476883] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.973 [2024-05-15 11:43:12.476889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:19:41.973 [2024-05-15 11:43:12.476896] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:19:41.974 [2024-05-15 11:43:12.476902] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:19:41.974 [2024-05-15 11:43:12.476908] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182e00 00:19:41.974 [2024-05-15 11:43:12.476917] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:19:41.974 [2024-05-15 11:43:12.476925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182e00 00:19:41.974 [2024-05-15 11:43:12.476952] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.974 [2024-05-15 11:43:12.476958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:41.974 [2024-05-15 11:43:12.476965] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x182e00 00:19:41.974 [2024-05-15 11:43:12.476975] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:19:41.974 [2024-05-15 11:43:12.476999] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:19:41.974 [2024-05-15 11:43:12.477007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x182e00 00:19:41.974 [2024-05-15 11:43:12.477015] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x182e00 00:19:41.974 [2024-05-15 11:43:12.477022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:41.974 [2024-05-15 11:43:12.477037] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.974 [2024-05-15 11:43:12.477044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:41.974 [2024-05-15 11:43:12.477060] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: 
local addr 0x2000003d0b00 length 0x40 lkey 0x182e00
00:19:41.974 [2024-05-15 11:43:12.477068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x182e00
00:19:41.974 [2024-05-15 11:43:12.477075] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x182e00
00:19:41.974 [2024-05-15 11:43:12.477081] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:41.974 [2024-05-15 11:43:12.477087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:19:41.974 [2024-05-15 11:43:12.477093] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x182e00
00:19:41.974 [2024-05-15 11:43:12.477100] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:41.974 [2024-05-15 11:43:12.477105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:19:41.974 [2024-05-15 11:43:12.477115] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x182e00
00:19:41.974 [2024-05-15 11:43:12.477123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x182e00
00:19:41.974 [2024-05-15 11:43:12.477129] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x182e00
00:19:41.974 [2024-05-15 11:43:12.477148] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:41.974 [2024-05-15 11:43:12.477153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:19:41.974 [2024-05-15 11:43:12.477164] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x182e00
00:19:41.974 =====================================================
00:19:41.974 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery
00:19:41.974 =====================================================
00:19:41.974 Controller Capabilities/Features
00:19:41.974 ================================
00:19:41.974 Vendor ID: 0000
00:19:41.974 Subsystem Vendor ID: 0000
00:19:41.974 Serial Number: ....................
00:19:41.974 Model Number: ........................................
00:19:41.974 Firmware Version: 24.05
00:19:41.974 Recommended Arb Burst: 0
00:19:41.974 IEEE OUI Identifier: 00 00 00
00:19:41.974 Multi-path I/O
00:19:41.974 May have multiple subsystem ports: No
00:19:41.974 May have multiple controllers: No
00:19:41.974 Associated with SR-IOV VF: No
00:19:41.974 Max Data Transfer Size: 131072
00:19:41.974 Max Number of Namespaces: 0
00:19:41.974 Max Number of I/O Queues: 1024
00:19:41.974 NVMe Specification Version (VS): 1.3
00:19:41.974 NVMe Specification Version (Identify): 1.3
00:19:41.974 Maximum Queue Entries: 128
00:19:41.974 Contiguous Queues Required: Yes
00:19:41.974 Arbitration Mechanisms Supported
00:19:41.974 Weighted Round Robin: Not Supported
00:19:41.974 Vendor Specific: Not Supported
00:19:41.974 Reset Timeout: 15000 ms
00:19:41.974 Doorbell Stride: 4 bytes
00:19:41.974 NVM Subsystem Reset: Not Supported
00:19:41.974 Command Sets Supported
00:19:41.974 NVM Command Set: Supported
00:19:41.974 Boot Partition: Not Supported
00:19:41.974 Memory Page Size Minimum: 4096 bytes
00:19:41.974 Memory Page Size Maximum: 4096 bytes
00:19:41.974 Persistent Memory Region: Not Supported
00:19:41.974 Optional Asynchronous Events Supported
00:19:41.974 Namespace Attribute Notices: Not Supported
00:19:41.974 Firmware Activation Notices: Not Supported
00:19:41.974 ANA Change Notices: Not Supported
00:19:41.974 PLE Aggregate Log Change Notices: Not Supported
00:19:41.974 LBA Status Info Alert Notices: Not Supported
00:19:41.974 EGE Aggregate Log Change Notices: Not Supported
00:19:41.974 Normal NVM Subsystem Shutdown event: Not Supported
00:19:41.974 Zone Descriptor Change Notices: Not Supported
00:19:41.974 Discovery Log Change Notices: Supported
00:19:41.974 Controller Attributes
00:19:41.974 128-bit Host Identifier: Not Supported
00:19:41.974 Non-Operational Permissive Mode: Not Supported
00:19:41.974 NVM Sets: Not Supported
00:19:41.974 Read Recovery Levels: Not Supported
00:19:41.974 Endurance Groups: Not Supported
00:19:41.974 Predictable Latency Mode: Not Supported
00:19:41.974 Traffic Based Keep ALive: Not Supported
00:19:41.974 Namespace Granularity: Not Supported
00:19:41.974 SQ Associations: Not Supported
00:19:41.974 UUID List: Not Supported
00:19:41.974 Multi-Domain Subsystem: Not Supported
00:19:41.974 Fixed Capacity Management: Not Supported
00:19:41.974 Variable Capacity Management: Not Supported
00:19:41.974 Delete Endurance Group: Not Supported
00:19:41.974 Delete NVM Set: Not Supported
00:19:41.974 Extended LBA Formats Supported: Not Supported
00:19:41.974 Flexible Data Placement Supported: Not Supported
00:19:41.974
00:19:41.974 Controller Memory Buffer Support
00:19:41.974 ================================
00:19:41.974 Supported: No
00:19:41.974
00:19:41.974 Persistent Memory Region Support
00:19:41.974 ================================
00:19:41.974 Supported: No
00:19:41.974
00:19:41.974 Admin Command Set Attributes
00:19:41.974 ============================
00:19:41.974 Security Send/Receive: Not Supported
00:19:41.974 Format NVM: Not Supported
00:19:41.974 Firmware Activate/Download: Not Supported
00:19:41.974 Namespace Management: Not Supported
00:19:41.974 Device Self-Test: Not Supported
00:19:41.974 Directives: Not Supported
00:19:41.974 NVMe-MI: Not Supported
00:19:41.974 Virtualization Management: Not Supported
00:19:41.974 Doorbell Buffer Config: Not Supported
00:19:41.974 Get LBA Status Capability: Not Supported
00:19:41.974 Command & Feature Lockdown Capability: Not Supported
00:19:41.974 Abort Command Limit: 1
00:19:41.974 Async Event Request Limit: 4
00:19:41.974 Number of Firmware Slots: N/A
00:19:41.974 Firmware Slot 1 Read-Only: N/A
00:19:41.974 Firmware Activation Without Reset: N/A
00:19:41.974 Multiple Update Detection Support: N/A
00:19:41.974 Firmware Update Granularity: No Information Provided
00:19:41.974 Per-Namespace SMART Log: No
00:19:41.974 Asymmetric Namespace Access Log Page: Not Supported
00:19:41.974 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:19:41.974 Command Effects Log Page: Not Supported
00:19:41.974 Get Log Page Extended Data: Supported
00:19:41.974 Telemetry Log Pages: Not Supported
00:19:41.974 Persistent Event Log Pages: Not Supported
00:19:41.974 Supported Log Pages Log Page: May Support
00:19:41.974 Commands Supported & Effects Log Page: Not Supported
00:19:41.974 Feature Identifiers & Effects Log Page:May Support
00:19:41.974 NVMe-MI Commands & Effects Log Page: May Support
00:19:41.974 Data Area 4 for Telemetry Log: Not Supported
00:19:41.974 Error Log Page Entries Supported: 128
00:19:41.975 Keep Alive: Not Supported
00:19:41.975
00:19:41.975 NVM Command Set Attributes
00:19:41.975 ==========================
00:19:41.975 Submission Queue Entry Size
00:19:41.975 Max: 1
00:19:41.975 Min: 1
00:19:41.975 Completion Queue Entry Size
00:19:41.975 Max: 1
00:19:41.975 Min: 1
00:19:41.975 Number of Namespaces: 0
00:19:41.975 Compare Command: Not Supported
00:19:41.975 Write Uncorrectable Command: Not Supported
00:19:41.975 Dataset Management Command: Not Supported
00:19:41.975 Write Zeroes Command: Not Supported
00:19:41.975 Set Features Save Field: Not Supported
00:19:41.975 Reservations: Not Supported
00:19:41.975 Timestamp: Not Supported
00:19:41.975 Copy: Not Supported
00:19:41.975 Volatile Write Cache: Not Present
00:19:41.975 Atomic Write Unit (Normal): 1
00:19:41.975 Atomic Write Unit (PFail): 1
00:19:41.975 Atomic Compare & Write Unit: 1
00:19:41.975 Fused Compare & Write: Supported
00:19:41.975 Scatter-Gather List
00:19:41.975 SGL Command Set: Supported
00:19:41.975 SGL Keyed: Supported
00:19:41.975 SGL Bit Bucket Descriptor: Not Supported
00:19:41.975 SGL Metadata Pointer: Not Supported
00:19:41.975 Oversized SGL: Not Supported
00:19:41.975 SGL Metadata Address: Not Supported
00:19:41.975 SGL Offset: Supported
00:19:41.975 Transport SGL Data Block: Not Supported
00:19:41.975 Replay Protected Memory Block: Not Supported
00:19:41.975
00:19:41.975 Firmware Slot Information
00:19:41.975 =========================
00:19:41.975 Active slot: 0
00:19:41.975
00:19:41.975
00:19:41.975 Error Log
00:19:41.975 =========
00:19:41.975
00:19:41.975 Active Namespaces
00:19:41.975 =================
00:19:41.975 Discovery Log Page
00:19:41.975 ==================
00:19:41.975 Generation Counter: 2
00:19:41.975 Number of Records: 2
00:19:41.975 Record Format: 0
00:19:41.975
00:19:41.975 Discovery Log Entry 0
00:19:41.975 ----------------------
00:19:41.975 Transport Type: 1 (RDMA)
00:19:41.975 Address Family: 1 (IPv4)
00:19:41.975 Subsystem Type: 3 (Current Discovery Subsystem)
00:19:41.975 Entry Flags:
00:19:41.975 Duplicate Returned Information: 1
00:19:41.975 Explicit Persistent Connection Support for Discovery: 1
00:19:41.975 Transport Requirements:
00:19:41.975 Secure Channel: Not Required
00:19:41.975 Port ID: 0 (0x0000)
00:19:41.975 Controller ID: 65535 (0xffff)
00:19:41.975 Admin Max SQ Size: 128
00:19:41.975 Transport Service Identifier: 4420
00:19:41.975 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:19:41.975 Transport Address: 192.168.100.8
00:19:41.975 Transport Specific Address Subtype - RDMA
00:19:41.975 RDMA QP Service Type: 1 (Reliable Connected)
00:19:41.975 RDMA Provider Type: 1 (No provider specified)
00:19:41.975 RDMA CM Service: 1 (RDMA_CM)
00:19:41.975 Discovery Log Entry 1
00:19:41.975 ----------------------
00:19:41.975 Transport Type: 1 (RDMA)
00:19:41.975 Address Family: 1 (IPv4)
00:19:41.975 Subsystem Type: 2 (NVM Subsystem)
00:19:41.975 Entry Flags:
00:19:41.975 Duplicate Returned Information: 0
00:19:41.975 Explicit Persistent Connection Support for Discovery: 0
00:19:41.975 Transport Requirements:
00:19:41.975 Secure Channel: Not Required
00:19:41.975 Port ID: 0 (0x0000)
00:19:41.975 Controller ID: 65535 (0xffff)
00:19:41.975 Admin Max SQ Size: [2024-05-15 11:43:12.477241] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:19:41.975 [2024-05-15 11:43:12.477251] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 36004 doesn't match qid
00:19:41.975 [2024-05-15 11:43:12.477265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32741 cdw0:5 sqhd:da10 p:0 m:0 dnr:0
00:19:41.975 [2024-05-15 11:43:12.477272] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 36004 doesn't match qid
00:19:41.975 [2024-05-15 11:43:12.477281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32741 cdw0:5 sqhd:da10 p:0 m:0 dnr:0
00:19:41.975 [2024-05-15 11:43:12.477287] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 36004 doesn't match qid
00:19:41.975 [2024-05-15 11:43:12.477295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32741 cdw0:5 sqhd:da10 p:0 m:0 dnr:0
00:19:41.975 [2024-05-15 11:43:12.477302] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 36004 doesn't match qid
00:19:41.975 [2024-05-15 11:43:12.477309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32741 cdw0:5 sqhd:da10 p:0 m:0 dnr:0
00:19:41.975 [2024-05-15 11:43:12.477321] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x182e00
00:19:41.975 [2024-05-15 11:43:12.477329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:19:41.975 [2024-05-15 11:43:12.477344] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:41.975 [2024-05-15 11:43:12.477350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0
00:19:41.975 [2024-05-15 11:43:12.477359] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00
00:19:41.975 [2024-05-15 11:43:12.477368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:19:41.975 [2024-05-15 11:43:12.477374] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x182e00
00:19:41.975 [2024-05-15 11:43:12.477391] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:41.975 [2024-05-15 11:43:12.477397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:19:41.975 [2024-05-15 11:43:12.477403] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*:
[nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:19:41.975 [2024-05-15 11:43:12.477409] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:19:41.975 [2024-05-15 11:43:12.477416] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x182e00 00:19:41.975 [2024-05-15 11:43:12.477424] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.975 [2024-05-15 11:43:12.477432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.975 [2024-05-15 11:43:12.477452] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.975 [2024-05-15 11:43:12.477458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:19:41.975 [2024-05-15 11:43:12.477464] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x182e00 00:19:41.975 [2024-05-15 11:43:12.477474] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.975 [2024-05-15 11:43:12.477482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.975 [2024-05-15 11:43:12.477503] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.975 [2024-05-15 11:43:12.477509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:19:41.975 [2024-05-15 11:43:12.477515] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x182e00 00:19:41.975 [2024-05-15 11:43:12.477524] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.975 [2024-05-15 11:43:12.477532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.975 [2024-05-15 11:43:12.477554] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.975 [2024-05-15 11:43:12.477560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:19:41.975 [2024-05-15 11:43:12.477567] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x182e00 00:19:41.975 [2024-05-15 11:43:12.477575] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.975 [2024-05-15 11:43:12.477583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.975 [2024-05-15 11:43:12.477602] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.975 [2024-05-15 11:43:12.477607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:19:41.975 [2024-05-15 11:43:12.477614] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x182e00 00:19:41.975 [2024-05-15 11:43:12.477623] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 
00:19:41.975 [2024-05-15 11:43:12.477631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.976 [2024-05-15 11:43:12.477654] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.976 [2024-05-15 11:43:12.477660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:19:41.976 [2024-05-15 11:43:12.477667] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x182e00 00:19:41.976 [2024-05-15 11:43:12.477675] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.976 [2024-05-15 11:43:12.477684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.976 [2024-05-15 11:43:12.477703] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.976 [2024-05-15 11:43:12.477709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:19:41.976 [2024-05-15 11:43:12.477716] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x182e00 00:19:41.976 [2024-05-15 11:43:12.477724] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.976 [2024-05-15 11:43:12.477732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.976 [2024-05-15 11:43:12.477748] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.976 [2024-05-15 11:43:12.477754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:19:41.976 [2024-05-15 11:43:12.477760] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x182e00 00:19:41.976 [2024-05-15 11:43:12.477769] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.976 [2024-05-15 11:43:12.477777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.976 [2024-05-15 11:43:12.477801] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.976 [2024-05-15 11:43:12.477807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:19:41.976 [2024-05-15 11:43:12.477813] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x182e00 00:19:41.976 [2024-05-15 11:43:12.477822] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.976 [2024-05-15 11:43:12.477830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.976 [2024-05-15 11:43:12.477847] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.976 [2024-05-15 11:43:12.477853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:19:41.976 [2024-05-15 11:43:12.477860] 
nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x182e00
00:19:41.976 [2024-05-15 11:43:12.477868] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00
00:19:41.976 [2024-05-15 11:43:12.477876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:19:41.976 [2024-05-15 11:43:12.477892] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:41.976 [2024-05-15 11:43:12.477897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0
[... the same FABRIC PROPERTY GET / CQ recv completion poll cycle repeats on qid:0 cid:3, sqhd 001c wrapping through 0008, response buffers cycling 0x2000003cf640-0x2000003cfaf0, until the target reports shutdown complete ...]
00:19:41.979 [2024-05-15 11:43:12.484114] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:41.979 [2024-05-15 11:43:12.484120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0009 p:0 m:0 dnr:0
00:19:41.979 [2024-05-15 11:43:12.484127] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x182e00
00:19:41.979 [2024-05-15 11:43:12.484134] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds
00:19:41.979 128
00:19:41.979 Transport Service Identifier: 4420
00:19:41.979 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:19:41.979 Transport Address: 192.168.100.8
00:19:41.979 Transport Specific Address Subtype - RDMA
00:19:41.979 RDMA QP Service Type: 1 (Reliable Connected)
00:19:41.979 RDMA Provider Type: 1 (No provider specified)
00:19:41.979 RDMA CM Service: 1 (RDMA_CM)
00:19:41.979 11:43:12 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:19:41.979 [2024-05-15 11:43:12.557839] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization...
00:19:41.979 [2024-05-15 11:43:12.557881] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3076506 ]
00:19:41.979 EAL: No free 2048 kB hugepages reported on node 1
00:19:41.979 [2024-05-15 11:43:12.605222] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout)
00:19:41.979 [2024-05-15 11:43:12.605298] nvme_rdma.c:2261:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr
00:19:41.979 [2024-05-15 11:43:12.605312] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2
00:19:41.979 [2024-05-15 11:43:12.605317] nvme_rdma.c:1295:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420
00:19:41.979 [2024-05-15 11:43:12.605345] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout)
00:19:41.979 [2024-05-15 11:43:12.616518] nvme_rdma.c: 510:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32.
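For orientation, the spdk_nvme_identify invocation above drives SPDK's host NVMe driver through exactly the admin-queue sequence traced in the entries that follow. A minimal sketch of the same connect-and-identify flow against this target using SPDK's public API (a hypothetical standalone program, not part of the test; the app name is invented, and it assumes an SPDK build plus the hugepage setup configured earlier in this job):

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        /* Environment bring-up; corresponds to the DPDK EAL lines above */
        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch"; /* hypothetical app name */
        if (spdk_env_init(&env_opts) < 0) {
            return 1;
        }

        /* The same -r transport ID string the test passes on the command line */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:rdma adrfam:IPv4 traddr:192.168.100.8 "
                "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* Kicks off the FABRIC CONNECT / PROPERTY GET handshake logged below;
         * returns only once the controller init state machine has finished. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("SN: %.20s MN: %.40s FR: %.8s\n",
               (const char *)cdata->sn, (const char *)cdata->mn,
               (const char *)cdata->fr);

        spdk_nvme_detach(ctrlr);
        return 0;
    }

The CC.EN/CSTS.RDY handshake, IDENTIFY commands, and AER setup that spdk_nvme_connect() performs internally are exactly what the debug entries below walk through.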
00:19:41.979 [2024-05-15 11:43:12.626784] nvme_rdma.c:1180:nvme_rdma_connect_established: *DEBUG*: rc =0
00:19:41.979 [2024-05-15 11:43:12.626794] nvme_rdma.c:1185:nvme_rdma_connect_established: *DEBUG*: RDMA requests created
00:19:41.979 [2024-05-15 11:43:12.626801] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182e00
[... nvme_rdma_create_rsps repeats for the remaining response buffers, 0x2000003cf668 through 0x2000003cfaf0, all length 0x10 lkey 0x182e00 ...]
00:19:41.979 [2024-05-15 11:43:12.627006] nvme_rdma.c:1199:nvme_rdma_connect_established: *DEBUG*: RDMA responses created
00:19:41.979 [2024-05-15 11:43:12.627012] nvme_rdma.c:1202:nvme_rdma_connect_established: *DEBUG*: rc =0
00:19:41.979 [2024-05-15 11:43:12.627017] nvme_rdma.c:1207:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted
00:19:41.979 [2024-05-15 11:43:12.627035] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00
00:19:41.979 [2024-05-15 11:43:12.627048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x182e00
00:19:41.979 [2024-05-15 11:43:12.632061] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:41.979 [2024-05-15 11:43:12.632070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0
00:19:41.979 [2024-05-15 11:43:12.632078] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182e00
00:19:41.979 [2024-05-15 11:43:12.632086] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:19:41.979 [2024-05-15 11:43:12.632092] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout)
00:19:41.979 [2024-05-15 11:43:12.632099] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout)
00:19:41.979 [2024-05-15 11:43:12.632114] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00
00:19:41.979 [2024-05-15 11:43:12.632123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:19:41.979 [2024-05-15 11:43:12.632142] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:41.979 [2024-05-15 11:43:12.632148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0
00:19:41.979 [2024-05-15 11:43:12.632157] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout)
00:19:41.979 [2024-05-15 11:43:12.632163] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182e00
00:19:41.979 [2024-05-15 11:43:12.632170] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout)
00:19:41.980 [2024-05-15 11:43:12.632178] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00
00:19:41.980 [2024-05-15 11:43:12.632186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:19:41.980 [2024-05-15 11:43:12.632205] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:41.980 [2024-05-15 11:43:12.632211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0
00:19:41.980 [2024-05-15 11:43:12.632218] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout)
00:19:41.980 [2024-05-15 11:43:12.632224] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182e00
00:19:41.980 [2024-05-15 11:43:12.632232] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms)
00:19:41.980 [2024-05-15 11:43:12.632239] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00
00:19:41.980 [2024-05-15 11:43:12.632247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:19:41.980 [2024-05-15 11:43:12.632265] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:41.980 [2024-05-15 11:43:12.632271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:19:41.980 [2024-05-15 11:43:12.632278] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:19:41.980 [2024-05-15 11:43:12.632284] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182e00
00:19:41.980 [2024-05-15 11:43:12.632293] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00
00:19:41.980 [2024-05-15 11:43:12.632301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:19:41.980 [2024-05-15 11:43:12.632317] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:41.980 [2024-05-15 11:43:12.632323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:19:41.980 [2024-05-15 11:43:12.632329] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0
00:19:41.980 [2024-05-15 11:43:12.632336] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms)
00:19:41.980 [2024-05-15 11:43:12.632342] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182e00
00:19:41.980 [2024-05-15 11:43:12.632349] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:19:41.980 [2024-05-15 11:43:12.632456] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1
00:19:41.980 [2024-05-15 11:43:12.632461] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:19:41.980 [2024-05-15 11:43:12.632470] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00
00:19:41.980 [2024-05-15 11:43:12.632478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:19:41.980 [2024-05-15 11:43:12.632496] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:41.980 [2024-05-15 11:43:12.632502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:19:41.980 [2024-05-15 11:43:12.632508] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:19:41.980 [2024-05-15 11:43:12.632514] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182e00
00:19:41.980 [2024-05-15 11:43:12.632523] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00
00:19:41.980 [2024-05-15 11:43:12.632533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:19:41.980 [2024-05-15 11:43:12.632549] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:41.980 [2024-05-15 11:43:12.632555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0
00:19:41.980 [2024-05-15 11:43:12.632562] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:19:41.980 [2024-05-15 11:43:12.632568] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms)
00:19:41.980 [2024-05-15 11:43:12.632574] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182e00
00:19:41.980 [2024-05-15 11:43:12.632581] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout)
00:19:41.980 [2024-05-15 11:43:12.632593] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms)
00:19:41.980 [2024-05-15 11:43:12.632603] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00
00:19:41.980 [2024-05-15 11:43:12.632611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182e00
00:19:41.980 [2024-05-15 11:43:12.632647] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:41.980 [2024-05-15 11:43:12.632653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:19:41.980 [2024-05-15 11:43:12.632661] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295
00:19:41.980 [2024-05-15 11:43:12.632669] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072
00:19:41.980 [2024-05-15 11:43:12.632675] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001
00:19:41.980 [2024-05-15 11:43:12.632681] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16
00:19:41.980 [2024-05-15 11:43:12.632687] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1
00:19:41.980 [2024-05-15 11:43:12.632693] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms)
00:19:41.980 [2024-05-15 11:43:12.632699] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182e00
00:19:41.980 [2024-05-15 11:43:12.632707] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms)
00:19:41.980 [2024-05-15 11:43:12.632715] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00
00:19:41.980 [2024-05-15 11:43:12.632723] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:19:41.980 [2024-05-15 11:43:12.632747] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:41.980 [2024-05-15 11:43:12.632753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:19:41.980 [2024-05-15 11:43:12.632761] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x182e00
00:19:41.980 [2024-05-15 11:43:12.632769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:19:41.980 [2024-05-15 11:43:12.632776] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x182e00
00:19:41.980 [2024-05-15 11:43:12.632783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:19:41.980 [2024-05-15 11:43:12.632792] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00
00:19:41.980 [2024-05-15 11:43:12.632799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:19:41.980 [2024-05-15 11:43:12.632806] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x182e00
00:19:41.980 [2024-05-15 11:43:12.632814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:19:41.980 [2024-05-15 11:43:12.632820] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms)
00:19:41.980 [2024-05-15 11:43:12.632826] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x182e00
00:19:41.980 [2024-05-15 11:43:12.632836] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:19:41.980 [2024-05-15 11:43:12.632844] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00
00:19:41.980 [2024-05-15 11:43:12.632852] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:19:41.980 [2024-05-15 11:43:12.632874] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:41.980 [2024-05-15 11:43:12.632880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0
00:19:41.980 [2024-05-15 11:43:12.632886] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us
00:19:41.980 [2024-05-15 11:43:12.632893] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms)
00:19:41.980 [2024-05-15 11:43:12.632899] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182e00
00:19:41.980 [2024-05-15 11:43:12.632906] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms)
00:19:41.980 [2024-05-15 11:43:12.632914] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms)
00:19:41.980 [2024-05-15 11:43:12.632921] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00
00:19:41.980 [2024-05-15 11:43:12.632929] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:19:41.980 [2024-05-15 11:43:12.632949] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:41.980 [2024-05-15 11:43:12.632955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0
00:19:41.980 [2024-05-15 11:43:12.632999] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms)
00:19:41.980 [2024-05-15 11:43:12.633005] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x182e00
00:19:41.980 [2024-05-15 11:43:12.633013] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms)
00:19:41.980 [2024-05-15 11:43:12.633022] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00
00:19:41.980 [2024-05-15 11:43:12.633031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x182e00
00:19:41.980 [2024-05-15 11:43:12.633062] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:41.980 [2024-05-15 11:43:12.633069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:19:41.980 [2024-05-15 11:43:12.633080] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added
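Two details in the identify-done entries above are worth decoding. The MDTS max_xfer_size of 131072 is consistent with the identify data further down (Memory Page Size Minimum: 4096 bytes, Max Data Transfer Size: 131072): per the NVMe spec the limit is (1 << MDTS) * minimum page size, so 4096 << 5 = 131072, i.e. an MDTS field of 5. And the active-namespace scan (IDENTIFY with cdw10:00000002, CNS 0x02) just registered Namespace 1. A hedged sketch of how a host application could walk those namespaces with SPDK's public API (fragment only; assumes a 'ctrlr' obtained from a successful spdk_nvme_connect() as in the sketch earlier):

    #include <inttypes.h> /* PRIu32 / PRIu64 */
    #include <stdio.h>
    #include "spdk/nvme.h"

    static void list_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
    {
        uint32_t nsid;

        /* Walk the active namespace list returned by IDENTIFY (CNS 0x02) */
        for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
             nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
            struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

            if (ns == NULL || !spdk_nvme_ns_is_active(ns)) {
                continue;
            }
            printf("Namespace %" PRIu32 ": %" PRIu64 " bytes, sector size %" PRIu32 "\n",
                   nsid, spdk_nvme_ns_get_size(ns),
                   spdk_nvme_ns_get_sector_size(ns));
        }
    }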
[2024-05-15 11:43:12.633092] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:19:41.981 [2024-05-15 11:43:12.633099] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x182e00 00:19:41.981 [2024-05-15 11:43:12.633107] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:19:41.981 [2024-05-15 11:43:12.633115] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:19:41.981 [2024-05-15 11:43:12.633123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182e00 00:19:41.981 [2024-05-15 11:43:12.633153] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.981 [2024-05-15 11:43:12.633159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:41.981 [2024-05-15 11:43:12.633172] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:41.981 [2024-05-15 11:43:12.633178] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x182e00 00:19:41.981 [2024-05-15 11:43:12.633186] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:41.981 [2024-05-15 11:43:12.633195] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:19:41.981 [2024-05-15 11:43:12.633203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182e00 00:19:41.981 [2024-05-15 11:43:12.633229] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.981 [2024-05-15 11:43:12.633234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:41.981 [2024-05-15 11:43:12.633243] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:41.981 [2024-05-15 11:43:12.633249] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x182e00 00:19:41.981 [2024-05-15 11:43:12.633257] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:19:41.981 [2024-05-15 11:43:12.633265] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:19:41.981 [2024-05-15 11:43:12.633273] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:41.981 [2024-05-15 11:43:12.633279] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:19:41.981 [2024-05-15 11:43:12.633286] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - 
Host ID 00:19:41.981 [2024-05-15 11:43:12.633292] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:19:41.981 [2024-05-15 11:43:12.633298] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:19:41.981 [2024-05-15 11:43:12.633316] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:19:41.981 [2024-05-15 11:43:12.633324] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.981 [2024-05-15 11:43:12.633334] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x182e00 00:19:41.981 [2024-05-15 11:43:12.633341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:41.981 [2024-05-15 11:43:12.633351] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.981 [2024-05-15 11:43:12.633357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:41.981 [2024-05-15 11:43:12.633364] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x182e00 00:19:41.981 [2024-05-15 11:43:12.633370] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.981 [2024-05-15 11:43:12.633376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:41.981 [2024-05-15 11:43:12.633383] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x182e00 00:19:41.981 [2024-05-15 11:43:12.633392] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x182e00 00:19:41.981 [2024-05-15 11:43:12.633400] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.981 [2024-05-15 11:43:12.633417] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.981 [2024-05-15 11:43:12.633422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:41.981 [2024-05-15 11:43:12.633429] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x182e00 00:19:41.981 [2024-05-15 11:43:12.633438] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x182e00 00:19:41.981 [2024-05-15 11:43:12.633446] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.981 [2024-05-15 11:43:12.633469] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.981 [2024-05-15 11:43:12.633474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:41.981 [2024-05-15 11:43:12.633481] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x182e00 00:19:41.981 [2024-05-15 11:43:12.633490] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 
lkey 0x182e00 00:19:41.981 [2024-05-15 11:43:12.633498] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.981 [2024-05-15 11:43:12.633518] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.981 [2024-05-15 11:43:12.633523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:19:41.981 [2024-05-15 11:43:12.633530] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x182e00 00:19:41.981 [2024-05-15 11:43:12.633541] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x182e00 00:19:41.981 [2024-05-15 11:43:12.633549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x182e00 00:19:41.981 [2024-05-15 11:43:12.633557] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182e00 00:19:41.981 [2024-05-15 11:43:12.633565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x182e00 00:19:41.981 [2024-05-15 11:43:12.633577] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x182e00 00:19:41.981 [2024-05-15 11:43:12.633585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x182e00 00:19:41.981 [2024-05-15 11:43:12.633594] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x182e00 00:19:41.981 [2024-05-15 11:43:12.633601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x182e00 00:19:41.981 [2024-05-15 11:43:12.633610] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.981 [2024-05-15 11:43:12.633616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:41.981 [2024-05-15 11:43:12.633627] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x182e00 00:19:41.981 [2024-05-15 11:43:12.633634] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.981 [2024-05-15 11:43:12.633639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:41.981 [2024-05-15 11:43:12.633649] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x182e00 00:19:41.981 [2024-05-15 11:43:12.633656] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.981 [2024-05-15 11:43:12.633661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:41.981 [2024-05-15 11:43:12.633670] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x182e00 00:19:41.981 [2024-05-15 11:43:12.633677] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:41.981 [2024-05-15 11:43:12.633682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:19:41.981 [2024-05-15 11:43:12.633692] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x182e00
00:19:41.981 =====================================================
00:19:41.981 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:19:41.981 =====================================================
00:19:41.981 Controller Capabilities/Features
00:19:41.981 ================================
00:19:41.981 Vendor ID: 8086
00:19:41.981 Subsystem Vendor ID: 8086
00:19:41.981 Serial Number: SPDK00000000000001
00:19:41.981 Model Number: SPDK bdev Controller
00:19:41.981 Firmware Version: 24.05
00:19:41.981 Recommended Arb Burst: 6
00:19:41.981 IEEE OUI Identifier: e4 d2 5c
00:19:41.981 Multi-path I/O
00:19:41.981 May have multiple subsystem ports: Yes
00:19:41.981 May have multiple controllers: Yes
00:19:41.981 Associated with SR-IOV VF: No
00:19:41.981 Max Data Transfer Size: 131072
00:19:41.981 Max Number of Namespaces: 32
00:19:41.981 Max Number of I/O Queues: 127
00:19:41.981 NVMe Specification Version (VS): 1.3
00:19:41.981 NVMe Specification Version (Identify): 1.3
00:19:41.981 Maximum Queue Entries: 128
00:19:41.981 Contiguous Queues Required: Yes
00:19:41.981 Arbitration Mechanisms Supported
00:19:41.981 Weighted Round Robin: Not Supported
00:19:41.981 Vendor Specific: Not Supported
00:19:41.981 Reset Timeout: 15000 ms
00:19:41.981 Doorbell Stride: 4 bytes
00:19:41.981 NVM Subsystem Reset: Not Supported
00:19:41.981 Command Sets Supported
00:19:41.981 NVM Command Set: Supported
00:19:41.981 Boot Partition: Not Supported
00:19:41.981 Memory Page Size Minimum: 4096 bytes
00:19:41.981 Memory Page Size Maximum: 4096 bytes
00:19:41.981 Persistent Memory Region: Not Supported
00:19:41.982 Optional Asynchronous Events Supported
00:19:41.982 Namespace Attribute Notices: Supported
00:19:41.982 Firmware Activation Notices: Not Supported
00:19:41.982 ANA Change Notices: Not Supported
00:19:41.982 PLE Aggregate Log Change Notices: Not Supported
00:19:41.982 LBA Status Info Alert Notices: Not Supported
00:19:41.982 EGE Aggregate Log Change Notices: Not Supported
00:19:41.982 Normal NVM Subsystem Shutdown event: Not Supported
00:19:41.982 Zone Descriptor Change Notices: Not Supported
00:19:41.982 Discovery Log Change Notices: Not Supported
00:19:41.982 Controller Attributes
00:19:41.982 128-bit Host Identifier: Supported
00:19:41.982 Non-Operational Permissive Mode: Not Supported
00:19:41.982 NVM Sets: Not Supported
00:19:41.982 Read Recovery Levels: Not Supported
00:19:41.982 Endurance Groups: Not Supported
00:19:41.982 Predictable Latency Mode: Not Supported
00:19:41.982 Traffic Based Keep ALive: Not Supported
00:19:41.982 Namespace Granularity: Not Supported
00:19:41.982 SQ Associations: Not Supported
00:19:41.982 UUID List: Not Supported
00:19:41.982 Multi-Domain Subsystem: Not Supported
00:19:41.982 Fixed Capacity Management: Not Supported
00:19:41.982 Variable Capacity Management: Not Supported
00:19:41.982 Delete Endurance Group: Not Supported
00:19:41.982 Delete NVM Set: Not Supported
00:19:41.982 Extended LBA Formats Supported: Not Supported
00:19:41.982 Flexible Data Placement Supported: Not Supported
00:19:41.982
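The block above is the Identify Controller (CNS 01h) data that the SPDK target synthesizes for its bdev-backed controller: Vendor ID 8086, Model Number "SPDK bdev Controller" and Firmware Version 24.05 come from the target's own version string rather than from physical hardware. As a cross-check, the same structure could be read back through the kernel initiator with nvme-cli; a minimal sketch, assuming nvme-cli is installed and the controller enumerates as /dev/nvme0 (the device name is an assumption, the connect parameters come from the banner above):

    # Re-read the same Identify Controller (CNS 01h) data with nvme-cli.
    modprobe nvme-rdma
    nvme connect -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme id-ctrl /dev/nvme0 -H        # -H additionally decodes bit fields (SGLS, ONCS, ...)
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1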
00:19:41.982 Controller Memory Buffer Support
00:19:41.982 ================================
00:19:41.982 Supported: No
00:19:41.982
00:19:41.982 Persistent Memory Region Support
00:19:41.982 ================================
00:19:41.982 Supported: No
00:19:41.982
00:19:41.982 Admin Command Set Attributes
00:19:41.982 ============================
00:19:41.982 Security Send/Receive: Not Supported
00:19:41.982 Format NVM: Not Supported
00:19:41.982 Firmware Activate/Download: Not Supported
00:19:41.982 Namespace Management: Not Supported
00:19:41.982 Device Self-Test: Not Supported
00:19:41.982 Directives: Not Supported
00:19:41.982 NVMe-MI: Not Supported
00:19:41.982 Virtualization Management: Not Supported
00:19:41.982 Doorbell Buffer Config: Not Supported
00:19:41.982 Get LBA Status Capability: Not Supported
00:19:41.982 Command & Feature Lockdown Capability: Not Supported
00:19:41.982 Abort Command Limit: 4
00:19:41.982 Async Event Request Limit: 4
00:19:41.982 Number of Firmware Slots: N/A
00:19:41.982 Firmware Slot 1 Read-Only: N/A
00:19:41.982 Firmware Activation Without Reset: N/A
00:19:41.982 Multiple Update Detection Support: N/A
00:19:41.982 Firmware Update Granularity: No Information Provided
00:19:41.982 Per-Namespace SMART Log: No
00:19:41.982 Asymmetric Namespace Access Log Page: Not Supported
00:19:41.982 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:19:41.982 Command Effects Log Page: Supported
00:19:41.982 Get Log Page Extended Data: Supported
00:19:41.982 Telemetry Log Pages: Not Supported
00:19:41.982 Persistent Event Log Pages: Not Supported
00:19:41.982 Supported Log Pages Log Page: May Support
00:19:41.982 Commands Supported & Effects Log Page: Not Supported
00:19:41.982 Feature Identifiers & Effects Log Page:May Support
00:19:41.982 NVMe-MI Commands & Effects Log Page: May Support
00:19:41.982 Data Area 4 for Telemetry Log: Not Supported
00:19:41.982 Error Log Page Entries Supported: 128
00:19:41.982 Keep Alive: Supported
00:19:41.982 Keep Alive Granularity: 10000 ms
00:19:41.982
00:19:41.982 NVM Command Set Attributes
00:19:41.982 ==========================
00:19:41.982 Submission Queue Entry Size
00:19:41.982 Max: 64
00:19:41.982 Min: 64
00:19:41.982 Completion Queue Entry Size
00:19:41.982 Max: 16
00:19:41.982 Min: 16
00:19:41.982 Number of Namespaces: 32
00:19:41.982 Compare Command: Supported
00:19:41.982 Write Uncorrectable Command: Not Supported
00:19:41.982 Dataset Management Command: Supported
00:19:41.982 Write Zeroes Command: Supported
00:19:41.982 Set Features Save Field: Not Supported
00:19:41.982 Reservations: Supported
00:19:41.982 Timestamp: Not Supported
00:19:41.982 Copy: Supported
00:19:41.982 Volatile Write Cache: Present
00:19:41.982 Atomic Write Unit (Normal): 1
00:19:41.982 Atomic Write Unit (PFail): 1
00:19:41.982 Atomic Compare & Write Unit: 1
00:19:41.982 Fused Compare & Write: Supported
00:19:41.982 Scatter-Gather List
00:19:41.982 SGL Command Set: Supported
00:19:41.982 SGL Keyed: Supported
00:19:41.982 SGL Bit Bucket Descriptor: Not Supported
00:19:41.982 SGL Metadata Pointer: Not Supported
00:19:41.982 Oversized SGL: Not Supported
00:19:41.982 SGL Metadata Address: Not Supported
00:19:41.982 SGL Offset: Supported
00:19:41.982 Transport SGL Data Block: Not Supported
00:19:41.982 Replay Protected Memory Block: Not Supported
00:19:41.982
00:19:41.982 Firmware Slot Information
00:19:41.982 =========================
00:19:41.982 Active slot: 1
00:19:41.982 Slot 1 Firmware Revision: 24.05
00:19:41.982
00:19:41.982
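The Keep Alive Granularity of 10000 ms above pairs with the KEEP ALIVE (18h) command visible earlier in the admin-queue trace: an NVMe-oF host must keep issuing opcode 18h within the negotiated KATO or the target may drop the association. A hedged sketch of sending one by hand over raw passthru (the device node is an assumption, and a kernel initiator normally sends these automatically):

    # Send a single Keep Alive (admin opcode 0x18); the command carries no payload.
    nvme admin-passthru /dev/nvme0 --opcode=0x18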
00:19:41.982 Commands Supported and Effects
00:19:41.982 ==============================
00:19:41.982 Admin Commands
00:19:41.982 --------------
00:19:41.982 Get Log Page (02h): Supported
00:19:41.982 Identify (06h): Supported
00:19:41.982 Abort (08h): Supported
00:19:41.982 Set Features (09h): Supported
00:19:41.982 Get Features (0Ah): Supported
00:19:41.982 Asynchronous Event Request (0Ch): Supported
00:19:41.982 Keep Alive (18h): Supported
00:19:41.982 I/O Commands
00:19:41.982 ------------
00:19:41.982 Flush (00h): Supported LBA-Change
00:19:41.982 Write (01h): Supported LBA-Change
00:19:41.982 Read (02h): Supported
00:19:41.982 Compare (05h): Supported
00:19:41.982 Write Zeroes (08h): Supported LBA-Change
00:19:41.982 Dataset Management (09h): Supported LBA-Change
00:19:41.982 Copy (19h): Supported LBA-Change
00:19:41.982 Unknown (79h): Supported LBA-Change
00:19:41.982 Unknown (7Ah): Supported
00:19:41.982
00:19:41.982 Error Log
00:19:41.982 =========
00:19:41.982
00:19:41.982 Arbitration
00:19:41.982 ===========
00:19:41.982 Arbitration Burst: 1
00:19:41.982
00:19:41.982 Power Management
00:19:41.982 ================
00:19:41.982 Number of Power States: 1
00:19:41.982 Current Power State: Power State #0
00:19:41.982 Power State #0:
00:19:41.982 Max Power: 0.00 W
00:19:41.982 Non-Operational State: Operational
00:19:41.982 Entry Latency: Not Reported
00:19:41.982 Exit Latency: Not Reported
00:19:41.982 Relative Read Throughput: 0
00:19:41.982 Relative Read Latency: 0
00:19:41.982 Relative Write Throughput: 0
00:19:41.982 Relative Write Latency: 0
00:19:41.982 Idle Power: Not Reported
00:19:41.982 Active Power: Not Reported
00:19:41.982 Non-Operational Permissive Mode: Not Supported
00:19:41.982
00:19:41.982 Health Information
00:19:41.982 ==================
00:19:41.982 Critical Warnings:
00:19:41.982 Available Spare Space: OK
00:19:41.982 Temperature: OK
00:19:41.982 Device Reliability: OK
00:19:41.982 Read Only: No
00:19:41.982 Volatile Memory Backup: OK
00:19:41.982 Current Temperature: 0 Kelvin (-273 Celsius)
00:19:41.982 Temperature Threshold: [2024-05-15 11:43:12.633775] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x182e00
00:19:41.982 [2024-05-15 11:43:12.633784] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:19:41.982 [2024-05-15 11:43:12.633805] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:41.983 [2024-05-15 11:43:12.633811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:19:41.983 [2024-05-15 11:43:12.633817] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x182e00
00:19:41.983 [2024-05-15 11:43:12.633842] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:19:41.983 [2024-05-15 11:43:12.633851] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 61607 doesn't match qid
00:19:41.983 [2024-05-15 11:43:12.633865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32727 cdw0:5 sqhd:3a10 p:0 m:0 dnr:0
00:19:41.983 [2024-05-15 11:43:12.633872] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 61607 doesn't match qid
00:19:41.983 [2024-05-15 11:43:12.633880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32727 cdw0:5 sqhd:3a10 p:0 m:0 dnr:0
00:19:41.983 [2024-05-15
11:43:12.633887] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 61607 doesn't match qid 00:19:41.983 [2024-05-15 11:43:12.633895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32727 cdw0:5 sqhd:3a10 p:0 m:0 dnr:0 00:19:41.983 [2024-05-15 11:43:12.633902] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 61607 doesn't match qid 00:19:41.983 [2024-05-15 11:43:12.633910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32727 cdw0:5 sqhd:3a10 p:0 m:0 dnr:0 00:19:41.983 [2024-05-15 11:43:12.633921] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x182e00 00:19:41.983 [2024-05-15 11:43:12.633929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.983 [2024-05-15 11:43:12.633947] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.983 [2024-05-15 11:43:12.633953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:19:41.983 [2024-05-15 11:43:12.633962] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.983 [2024-05-15 11:43:12.633969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.983 [2024-05-15 11:43:12.633976] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x182e00 00:19:41.983 [2024-05-15 11:43:12.633988] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.983 [2024-05-15 11:43:12.633994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:41.983 [2024-05-15 11:43:12.634001] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:19:41.983 [2024-05-15 11:43:12.634007] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:19:41.983 [2024-05-15 11:43:12.634014] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x182e00 00:19:41.983 [2024-05-15 11:43:12.634022] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.983 [2024-05-15 11:43:12.634030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.983 [2024-05-15 11:43:12.634052] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.983 [2024-05-15 11:43:12.634062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:19:41.983 [2024-05-15 11:43:12.634069] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x182e00 00:19:41.983 [2024-05-15 11:43:12.634078] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.983 [2024-05-15 11:43:12.634086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.983 [2024-05-15 11:43:12.634104] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.983 [2024-05-15 11:43:12.634110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:19:41.983 [2024-05-15 11:43:12.634117] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x182e00 00:19:41.983 [2024-05-15 11:43:12.634126] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.983 [2024-05-15 11:43:12.634134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.983 [2024-05-15 11:43:12.634151] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.983 [2024-05-15 11:43:12.634157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:19:41.983 [2024-05-15 11:43:12.634164] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x182e00 00:19:41.983 [2024-05-15 11:43:12.634173] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.983 [2024-05-15 11:43:12.634181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.983 [2024-05-15 11:43:12.634197] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.983 [2024-05-15 11:43:12.634202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:19:41.983 [2024-05-15 11:43:12.634209] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x182e00 00:19:41.983 [2024-05-15 11:43:12.634218] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.983 [2024-05-15 11:43:12.634226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.983 [2024-05-15 11:43:12.634244] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.983 [2024-05-15 11:43:12.634250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:19:41.983 [2024-05-15 11:43:12.634257] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x182e00 00:19:41.983 [2024-05-15 11:43:12.634266] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.983 [2024-05-15 11:43:12.634274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.983 [2024-05-15 11:43:12.634296] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.983 [2024-05-15 11:43:12.634302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:19:41.983 [2024-05-15 11:43:12.634309] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182e00 00:19:41.983 [2024-05-15 11:43:12.634318] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x182e00 00:19:41.983 [2024-05-15 11:43:12.634326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.983 [2024-05-15 11:43:12.634342] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.983 [2024-05-15 11:43:12.634348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:19:41.983 [2024-05-15 11:43:12.634355] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182e00 00:19:41.983 [2024-05-15 11:43:12.634364] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.983 [2024-05-15 11:43:12.634371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.983 [2024-05-15 11:43:12.634393] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.983 [2024-05-15 11:43:12.634399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:19:41.983 [2024-05-15 11:43:12.634406] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182e00 00:19:41.983 [2024-05-15 11:43:12.634415] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.983 [2024-05-15 11:43:12.634423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.983 [2024-05-15 11:43:12.634445] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.983 [2024-05-15 11:43:12.634450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:19:41.983 [2024-05-15 11:43:12.634457] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182e00 00:19:41.983 [2024-05-15 11:43:12.634466] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.983 [2024-05-15 11:43:12.634474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.983 [2024-05-15 11:43:12.634494] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.983 [2024-05-15 11:43:12.634500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:19:41.983 [2024-05-15 11:43:12.634507] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182e00 00:19:41.983 [2024-05-15 11:43:12.634516] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.983 [2024-05-15 11:43:12.634524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.983 [2024-05-15 11:43:12.634541] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.983 [2024-05-15 11:43:12.634547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:19:41.983 [2024-05-15 
11:43:12.634554] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182e00 00:19:41.983 [2024-05-15 11:43:12.634563] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.983 [2024-05-15 11:43:12.634571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.983 [2024-05-15 11:43:12.634590] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.983 [2024-05-15 11:43:12.634596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:19:41.983 [2024-05-15 11:43:12.634602] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182e00 00:19:41.983 [2024-05-15 11:43:12.634611] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.983 [2024-05-15 11:43:12.634619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.983 [2024-05-15 11:43:12.634637] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.984 [2024-05-15 11:43:12.634643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:19:41.984 [2024-05-15 11:43:12.634649] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182e00 00:19:41.984 [2024-05-15 11:43:12.634658] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.984 [2024-05-15 11:43:12.634666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.984 [2024-05-15 11:43:12.634684] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.984 [2024-05-15 11:43:12.634690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:19:41.984 [2024-05-15 11:43:12.634696] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x182e00 00:19:41.984 [2024-05-15 11:43:12.634705] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.984 [2024-05-15 11:43:12.634713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.984 [2024-05-15 11:43:12.634729] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.984 [2024-05-15 11:43:12.634735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:19:41.984 [2024-05-15 11:43:12.634741] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182e00 00:19:41.984 [2024-05-15 11:43:12.634750] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.984 [2024-05-15 11:43:12.634760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.984 [2024-05-15 11:43:12.634781] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.984 [2024-05-15 11:43:12.634787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:19:41.984 [2024-05-15 11:43:12.634793] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x182e00 00:19:41.984 [2024-05-15 11:43:12.634802] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.984 [2024-05-15 11:43:12.634810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.984 [2024-05-15 11:43:12.634828] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.984 [2024-05-15 11:43:12.634834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:19:41.984 [2024-05-15 11:43:12.634840] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x182e00 00:19:41.984 [2024-05-15 11:43:12.634849] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.984 [2024-05-15 11:43:12.634857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.984 [2024-05-15 11:43:12.634875] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.984 [2024-05-15 11:43:12.634881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:19:41.984 [2024-05-15 11:43:12.634887] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x182e00 00:19:41.984 [2024-05-15 11:43:12.634896] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.984 [2024-05-15 11:43:12.634904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.984 [2024-05-15 11:43:12.634920] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.984 [2024-05-15 11:43:12.634925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:19:41.984 [2024-05-15 11:43:12.634932] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x182e00 00:19:41.984 [2024-05-15 11:43:12.634941] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.984 [2024-05-15 11:43:12.634949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.984 [2024-05-15 11:43:12.634969] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.984 [2024-05-15 11:43:12.634974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:19:41.984 [2024-05-15 11:43:12.634981] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x182e00 00:19:41.984 [2024-05-15 11:43:12.634990] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x182e00 00:19:41.984 [2024-05-15 11:43:12.634998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.984 [2024-05-15 11:43:12.635019] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.984 [2024-05-15 11:43:12.635025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:19:41.984 [2024-05-15 11:43:12.635031] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x182e00 00:19:41.984 [2024-05-15 11:43:12.635040] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.984 [2024-05-15 11:43:12.635050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.984 [2024-05-15 11:43:12.635069] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.984 [2024-05-15 11:43:12.635075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:19:41.984 [2024-05-15 11:43:12.635081] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x182e00 00:19:41.984 [2024-05-15 11:43:12.635090] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.984 [2024-05-15 11:43:12.635098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.984 [2024-05-15 11:43:12.635118] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.984 [2024-05-15 11:43:12.635123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:19:41.984 [2024-05-15 11:43:12.635130] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x182e00 00:19:41.984 [2024-05-15 11:43:12.635139] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.984 [2024-05-15 11:43:12.635147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.984 [2024-05-15 11:43:12.635165] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.984 [2024-05-15 11:43:12.635170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:19:41.984 [2024-05-15 11:43:12.635177] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x182e00 00:19:41.984 [2024-05-15 11:43:12.635186] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.984 [2024-05-15 11:43:12.635194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.984 [2024-05-15 11:43:12.635208] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.984 [2024-05-15 11:43:12.635213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:19:41.984 [2024-05-15 
11:43:12.635220] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x182e00 00:19:41.984 [2024-05-15 11:43:12.635229] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.984 [2024-05-15 11:43:12.635237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.984 [2024-05-15 11:43:12.635252] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.984 [2024-05-15 11:43:12.635258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:19:41.984 [2024-05-15 11:43:12.635265] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x182e00 00:19:41.984 [2024-05-15 11:43:12.635274] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.984 [2024-05-15 11:43:12.635282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.984 [2024-05-15 11:43:12.635298] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.984 [2024-05-15 11:43:12.635303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:19:41.984 [2024-05-15 11:43:12.635310] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x182e00 00:19:41.984 [2024-05-15 11:43:12.635321] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.984 [2024-05-15 11:43:12.635329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.984 [2024-05-15 11:43:12.635351] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.984 [2024-05-15 11:43:12.635356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:19:41.984 [2024-05-15 11:43:12.635363] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x182e00 00:19:41.984 [2024-05-15 11:43:12.635372] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.984 [2024-05-15 11:43:12.635380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.984 [2024-05-15 11:43:12.635396] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.984 [2024-05-15 11:43:12.635401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:19:41.984 [2024-05-15 11:43:12.635408] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x182e00 00:19:41.984 [2024-05-15 11:43:12.635417] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.984 [2024-05-15 11:43:12.635425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.984 [2024-05-15 11:43:12.635442] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.984 [2024-05-15 11:43:12.635448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:19:41.984 [2024-05-15 11:43:12.635455] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x182e00 00:19:41.984 [2024-05-15 11:43:12.635464] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.984 [2024-05-15 11:43:12.635472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.984 [2024-05-15 11:43:12.635493] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.984 [2024-05-15 11:43:12.635499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:19:41.984 [2024-05-15 11:43:12.635506] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x182e00 00:19:41.984 [2024-05-15 11:43:12.635515] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.984 [2024-05-15 11:43:12.635523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.985 [2024-05-15 11:43:12.635540] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.985 [2024-05-15 11:43:12.635546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:19:41.985 [2024-05-15 11:43:12.635552] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x182e00 00:19:41.985 [2024-05-15 11:43:12.635561] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.985 [2024-05-15 11:43:12.635569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.985 [2024-05-15 11:43:12.635591] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.985 [2024-05-15 11:43:12.635596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:19:41.985 [2024-05-15 11:43:12.635603] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x182e00 00:19:41.985 [2024-05-15 11:43:12.635613] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.985 [2024-05-15 11:43:12.635621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.985 [2024-05-15 11:43:12.635639] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.985 [2024-05-15 11:43:12.635645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:19:41.985 [2024-05-15 11:43:12.635651] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x182e00 00:19:41.985 [2024-05-15 11:43:12.635660] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x182e00 00:19:41.985 [2024-05-15 11:43:12.635668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.985 [2024-05-15 11:43:12.635684] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.985 [2024-05-15 11:43:12.635690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:19:41.985 [2024-05-15 11:43:12.635696] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x182e00 00:19:41.985 [2024-05-15 11:43:12.635705] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.985 [2024-05-15 11:43:12.635713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.985 [2024-05-15 11:43:12.635727] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.985 [2024-05-15 11:43:12.635733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:19:41.985 [2024-05-15 11:43:12.635740] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x182e00 00:19:41.985 [2024-05-15 11:43:12.635749] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.985 [2024-05-15 11:43:12.635756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.985 [2024-05-15 11:43:12.635774] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.985 [2024-05-15 11:43:12.635780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:19:41.985 [2024-05-15 11:43:12.635787] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182e00 00:19:41.985 [2024-05-15 11:43:12.635796] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.985 [2024-05-15 11:43:12.635803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.985 [2024-05-15 11:43:12.635822] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.985 [2024-05-15 11:43:12.635828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:19:41.985 [2024-05-15 11:43:12.635834] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182e00 00:19:41.985 [2024-05-15 11:43:12.635843] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.985 [2024-05-15 11:43:12.635851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.985 [2024-05-15 11:43:12.635872] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.985 [2024-05-15 11:43:12.635878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:19:41.985 [2024-05-15 
11:43:12.635886] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182e00 00:19:41.985 [2024-05-15 11:43:12.635895] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.985 [2024-05-15 11:43:12.635903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.985 [2024-05-15 11:43:12.635925] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.985 [2024-05-15 11:43:12.635930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:19:41.985 [2024-05-15 11:43:12.635937] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182e00 00:19:41.985 [2024-05-15 11:43:12.635946] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.985 [2024-05-15 11:43:12.635954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.985 [2024-05-15 11:43:12.635971] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.985 [2024-05-15 11:43:12.635977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:19:41.985 [2024-05-15 11:43:12.635983] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182e00 00:19:41.985 [2024-05-15 11:43:12.635992] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.985 [2024-05-15 11:43:12.636000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.985 [2024-05-15 11:43:12.636024] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.985 [2024-05-15 11:43:12.636029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:19:41.985 [2024-05-15 11:43:12.636036] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182e00 00:19:41.985 [2024-05-15 11:43:12.636045] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.985 [2024-05-15 11:43:12.636053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.985 [2024-05-15 11:43:12.640069] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:41.985 [2024-05-15 11:43:12.640075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:19:41.985 [2024-05-15 11:43:12.640082] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182e00 00:19:41.985 [2024-05-15 11:43:12.640091] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182e00 00:19:41.985 [2024-05-15 11:43:12.640099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:41.985 [2024-05-15 11:43:12.640122] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:41.985 [2024-05-15 11:43:12.640128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0006 p:0 m:0 dnr:0
00:19:41.985 [2024-05-15 11:43:12.640134] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182e00
00:19:41.985 [2024-05-15 11:43:12.640142] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds
00:19:41.985 0 Kelvin (-273 Celsius)
00:19:41.985 Available Spare: 0%
00:19:41.985 Available Spare Threshold: 0%
00:19:41.985 Life Percentage Used: 0%
00:19:41.985 Data Units Read: 0
00:19:41.985 Data Units Written: 0
00:19:41.985 Host Read Commands: 0
00:19:41.985 Host Write Commands: 0
00:19:41.985 Controller Busy Time: 0 minutes
00:19:41.985 Power Cycles: 0
00:19:41.985 Power On Hours: 0 hours
00:19:41.985 Unsafe Shutdowns: 0
00:19:41.985 Unrecoverable Media Errors: 0
00:19:41.985 Lifetime Error Log Entries: 0
00:19:41.985 Warning Temperature Time: 0 minutes
00:19:41.985 Critical Temperature Time: 0 minutes
00:19:41.985
00:19:41.985 Number of Queues
00:19:41.985 ================
00:19:41.985 Number of I/O Submission Queues: 127
00:19:41.985 Number of I/O Completion Queues: 127
00:19:41.985
00:19:41.985 Active Namespaces
00:19:41.985 =================
00:19:41.985 Namespace ID:1
00:19:41.985 Error Recovery Timeout: Unlimited
00:19:41.985 Command Set Identifier: NVM (00h)
00:19:41.985 Deallocate: Supported
00:19:41.985 Deallocated/Unwritten Error: Not Supported
00:19:41.985 Deallocated Read Value: Unknown
00:19:41.985 Deallocate in Write Zeroes: Not Supported
00:19:41.985 Deallocated Guard Field: 0xFFFF
00:19:41.985 Flush: Supported
00:19:41.985 Reservation: Supported
00:19:41.985 Namespace Sharing Capabilities: Multiple Controllers
00:19:41.985 Size (in LBAs): 131072 (0GiB)
00:19:41.985 Capacity (in LBAs): 131072 (0GiB)
00:19:41.985 Utilization (in LBAs): 131072 (0GiB)
00:19:41.985 NGUID: ABCDEF0123456789ABCDEF0123456789
00:19:41.985 EUI64: ABCDEF0123456789
00:19:41.985 UUID: 722afa01-3555-47c9-9b5d-a7554e2ac772
00:19:41.985 Thin Provisioning: Not Supported
00:19:41.985 Per-NS Atomic Units: Yes
00:19:41.985 Atomic Boundary Size (Normal): 0
00:19:41.985 Atomic Boundary Size (PFail): 0
00:19:41.985 Atomic Boundary Offset: 0
00:19:41.985 Maximum Single Source Range Length: 65535
00:19:41.985 Maximum Copy Length: 65535
00:19:41.985 Maximum Source Range Count: 1
00:19:41.985 NGUID/EUI64 Never Reused: No
00:19:41.985 Namespace Write Protected: No
00:19:41.985 Number of LBA Formats: 1
00:19:41.985 Current LBA Format: LBA Format #00
00:19:41.985 LBA Format #00: Data Size: 512 Metadata Size: 0
00:19:41.985
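The namespace listed above is the target's malloc bdev: 131072 LBAs at LBA Format #00's 512-byte data size is 67,108,864 bytes (64 MiB), which the identify tool's whole-GiB conversion prints as "(0GiB)". A quick cross-check of the arithmetic, plus the same fields viewed through nvme-cli (the device node is an assumption):

    echo $((131072 * 512))            # 67108864 bytes = 64 MiB
    nvme id-ns /dev/nvme0 -n 1 -H     # nsze/ncap/nuse plus NGUID/EUI64/UUID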
00:19:41.985 11:43:12 -- host/identify.sh@51 -- # sync
00:19:41.985 11:43:12 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:19:41.985 11:43:12 -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:41.985 11:43:12 -- common/autotest_common.sh@10 -- # set +x
00:19:41.985 11:43:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:41.985 11:43:12 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:19:41.985 11:43:12 -- host/identify.sh@56 -- # nvmftestfini
00:19:41.985 11:43:12 -- nvmf/common.sh@477 -- # nvmfcleanup
00:19:41.985 11:43:12 -- nvmf/common.sh@117 -- # sync
00:19:41.985 11:43:12 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:19:41.985 11:43:12 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:19:41.986 11:43:12 -- nvmf/common.sh@120 -- # set +e
00:19:41.986 11:43:12 -- nvmf/common.sh@121 -- # for i in {1..20}
00:19:41.986 11:43:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
00:19:42.246 11:43:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:19:42.246 11:43:12 -- nvmf/common.sh@124 -- # set -e
00:19:42.246 11:43:12 -- nvmf/common.sh@125 -- # return 0
00:19:42.246 11:43:12 -- nvmf/common.sh@478 -- # '[' -n 3076298 ']'
00:19:42.246 11:43:12 -- nvmf/common.sh@479 -- # killprocess 3076298
00:19:42.246 11:43:12 -- common/autotest_common.sh@946 -- # '[' -z 3076298 ']'
00:19:42.246 11:43:12 -- common/autotest_common.sh@950 -- # kill -0 3076298
00:19:42.246 11:43:12 -- common/autotest_common.sh@951 -- # uname
00:19:42.246 11:43:12 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:19:42.246 11:43:12 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3076298
00:19:42.246 11:43:12 -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:19:42.246 11:43:12 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:19:42.246 11:43:12 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3076298'
killing process with pid 3076298
11:43:12 -- common/autotest_common.sh@965 -- # kill 3076298
00:19:42.246 [2024-05-15 11:43:12.811699] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:19:42.246 11:43:12 -- common/autotest_common.sh@970 -- # wait 3076298
00:19:42.246 [2024-05-15 11:43:12.902614] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048
00:19:42.505 11:43:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:19:42.505 11:43:13 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]]
00:19:42.505
00:19:42.505 real 0m8.037s
00:19:42.505 user 0m8.447s
00:19:42.505 sys 0m5.069s
00:19:42.505 11:43:13 -- common/autotest_common.sh@1122 -- # xtrace_disable
00:19:42.505 11:43:13 -- common/autotest_common.sh@10 -- # set +x
00:19:42.505 ************************************
00:19:42.505 END TEST nvmf_identify
00:19:42.505 ************************************
00:19:42.505 11:43:13 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma
00:19:42.505 11:43:13 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:19:42.505 11:43:13 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:19:42.505 11:43:13 -- common/autotest_common.sh@10 -- # set +x
00:19:42.505 ************************************
00:19:42.505 START TEST nvmf_perf
00:19:42.505 ************************************
00:19:42.506 11:43:13 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma
00:19:42.764 * Looking for test storage...
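The trace above is the harness tearing the identify test down: the subsystem is removed over JSON-RPC, nvmfcleanup retries unloading the initiator modules (removing nvme-rdma also drops nvme-fabrics, hence the two rmmod lines), and killprocess stops the nvmf_tgt reactor; the bracketed messages are the target's non-fatal deprecation notice and RDMA pool-size complaint logged while it shuts down. A sketch of the same cleanup done by hand, assuming an SPDK checkout in $SPDK_DIR and the pid from this run:

    "$SPDK_DIR"/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && break   # retried until the module refcount drops
    done
    kill 3076298                            # nvmf_tgt; 'wait' only works in the
    tail --pid=3076298 -f /dev/null         # launching shell, so poll for exit instead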
00:19:42.764 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:42.764 11:43:13 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:42.764 11:43:13 -- nvmf/common.sh@7 -- # uname -s 00:19:42.764 11:43:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:42.764 11:43:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:42.764 11:43:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:42.764 11:43:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:42.764 11:43:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:42.764 11:43:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:42.764 11:43:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:42.764 11:43:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:42.764 11:43:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:42.764 11:43:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:42.764 11:43:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:19:42.764 11:43:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:19:42.764 11:43:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:42.764 11:43:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:42.764 11:43:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:42.764 11:43:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:42.764 11:43:13 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:42.764 11:43:13 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:42.764 11:43:13 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:42.764 11:43:13 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:42.764 11:43:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.764 11:43:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.764 11:43:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.764 11:43:13 -- paths/export.sh@5 -- # export PATH 00:19:42.764 11:43:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.764 11:43:13 -- nvmf/common.sh@47 -- # : 0 00:19:42.764 11:43:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:42.764 11:43:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:42.764 11:43:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:42.764 11:43:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:42.764 11:43:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:42.764 11:43:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:42.764 11:43:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:42.764 11:43:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:42.764 11:43:13 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:42.764 11:43:13 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:42.764 11:43:13 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:42.764 11:43:13 -- host/perf.sh@17 -- # nvmftestinit 00:19:42.764 11:43:13 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:19:42.764 11:43:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:42.764 11:43:13 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:42.764 11:43:13 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:42.764 11:43:13 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:42.764 11:43:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.764 11:43:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:42.764 11:43:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.764 11:43:13 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:42.764 11:43:13 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:42.764 11:43:13 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:42.764 11:43:13 -- common/autotest_common.sh@10 -- # set +x 00:19:48.038 11:43:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:48.038 11:43:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:48.038 11:43:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:48.038 11:43:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:48.038 11:43:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:48.038 11:43:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:48.038 11:43:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:48.038 11:43:18 -- nvmf/common.sh@295 -- # net_devs=() 
00:19:48.038 11:43:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:48.038 11:43:18 -- nvmf/common.sh@296 -- # e810=() 00:19:48.038 11:43:18 -- nvmf/common.sh@296 -- # local -ga e810 00:19:48.038 11:43:18 -- nvmf/common.sh@297 -- # x722=() 00:19:48.038 11:43:18 -- nvmf/common.sh@297 -- # local -ga x722 00:19:48.038 11:43:18 -- nvmf/common.sh@298 -- # mlx=() 00:19:48.038 11:43:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:48.038 11:43:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:48.038 11:43:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:48.038 11:43:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:48.038 11:43:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:48.038 11:43:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:48.038 11:43:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:48.038 11:43:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:48.038 11:43:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:48.038 11:43:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:48.038 11:43:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:48.038 11:43:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:48.038 11:43:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:48.038 11:43:18 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:48.038 11:43:18 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:48.038 11:43:18 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:48.038 11:43:18 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:48.038 11:43:18 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:48.038 11:43:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:48.038 11:43:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:48.038 11:43:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:19:48.038 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:19:48.038 11:43:18 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:48.038 11:43:18 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:48.038 11:43:18 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:48.038 11:43:18 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:48.038 11:43:18 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:48.038 11:43:18 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:48.038 11:43:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:48.038 11:43:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:19:48.038 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:19:48.038 11:43:18 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:48.038 11:43:18 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:48.038 11:43:18 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:48.038 11:43:18 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:48.038 11:43:18 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:48.039 11:43:18 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:48.039 11:43:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:48.039 11:43:18 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:48.039 11:43:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:48.039 11:43:18 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:48.039 11:43:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:48.039 11:43:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:48.039 11:43:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:19:48.039 Found net devices under 0000:18:00.0: mlx_0_0 00:19:48.039 11:43:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:48.039 11:43:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:48.039 11:43:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:48.039 11:43:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:48.039 11:43:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:48.039 11:43:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:19:48.039 Found net devices under 0000:18:00.1: mlx_0_1 00:19:48.039 11:43:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:48.039 11:43:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:48.039 11:43:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:48.039 11:43:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:48.039 11:43:18 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:19:48.039 11:43:18 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:19:48.039 11:43:18 -- nvmf/common.sh@409 -- # rdma_device_init 00:19:48.039 11:43:18 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:19:48.039 11:43:18 -- nvmf/common.sh@58 -- # uname 00:19:48.039 11:43:18 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:48.039 11:43:18 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:48.039 11:43:18 -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:48.039 11:43:18 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:48.039 11:43:18 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:48.039 11:43:18 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:48.039 11:43:18 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:48.039 11:43:18 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:48.039 11:43:18 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:19:48.039 11:43:18 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:48.039 11:43:18 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:48.039 11:43:18 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:48.039 11:43:18 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:48.039 11:43:18 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:48.039 11:43:18 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:48.039 11:43:18 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:48.039 11:43:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:48.039 11:43:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:48.039 11:43:18 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:48.039 11:43:18 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:48.039 11:43:18 -- nvmf/common.sh@105 -- # continue 2 00:19:48.039 11:43:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:48.039 11:43:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:48.039 11:43:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:48.039 11:43:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:48.039 11:43:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:48.039 11:43:18 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:48.039 11:43:18 -- 
nvmf/common.sh@105 -- # continue 2 00:19:48.039 11:43:18 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:48.039 11:43:18 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:48.039 11:43:18 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:48.039 11:43:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:48.039 11:43:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:48.039 11:43:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:48.039 11:43:18 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:48.039 11:43:18 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:48.039 11:43:18 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:48.039 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:48.039 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:19:48.039 altname enp24s0f0np0 00:19:48.039 altname ens785f0np0 00:19:48.039 inet 192.168.100.8/24 scope global mlx_0_0 00:19:48.039 valid_lft forever preferred_lft forever 00:19:48.039 11:43:18 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:48.039 11:43:18 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:48.039 11:43:18 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:48.039 11:43:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:48.039 11:43:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:48.039 11:43:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:48.039 11:43:18 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:48.039 11:43:18 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:48.039 11:43:18 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:48.039 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:48.039 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:19:48.039 altname enp24s0f1np1 00:19:48.039 altname ens785f1np1 00:19:48.039 inet 192.168.100.9/24 scope global mlx_0_1 00:19:48.039 valid_lft forever preferred_lft forever 00:19:48.039 11:43:18 -- nvmf/common.sh@411 -- # return 0 00:19:48.039 11:43:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:48.039 11:43:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:48.039 11:43:18 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:19:48.039 11:43:18 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:19:48.299 11:43:18 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:48.299 11:43:18 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:48.299 11:43:18 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:48.299 11:43:18 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:48.299 11:43:18 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:48.299 11:43:18 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:48.299 11:43:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:48.299 11:43:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:48.299 11:43:18 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:48.299 11:43:18 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:48.299 11:43:18 -- nvmf/common.sh@105 -- # continue 2 00:19:48.299 11:43:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:48.299 11:43:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:48.299 11:43:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:48.299 11:43:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:48.299 11:43:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
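The get_ip_address calls traced above all reduce to the same three-stage pipeline over the kernel's ip tool; condensed into a standalone sketch (the interface names and 192.168.100.x addresses are the ones observed in this run, not fixed values):

get_ip_address() {
    local interface=$1
    # "ip -o -4" prints one record per line; field 4 is "address/prefix",
    # so awk selects it and cut strips the prefix length
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # -> 192.168.100.8 on this node
get_ip_address mlx_0_1   # -> 192.168.100.9 on this node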
00:19:48.299 11:43:18 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:48.299 11:43:18 -- nvmf/common.sh@105 -- # continue 2 00:19:48.299 11:43:18 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:48.299 11:43:18 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:48.299 11:43:18 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:48.299 11:43:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:48.299 11:43:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:48.299 11:43:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:48.299 11:43:18 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:48.299 11:43:18 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:48.299 11:43:18 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:48.299 11:43:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:48.299 11:43:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:48.299 11:43:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:48.299 11:43:18 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:19:48.299 192.168.100.9' 00:19:48.299 11:43:18 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:48.299 192.168.100.9' 00:19:48.299 11:43:18 -- nvmf/common.sh@446 -- # head -n 1 00:19:48.299 11:43:18 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:48.300 11:43:18 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:19:48.300 192.168.100.9' 00:19:48.300 11:43:18 -- nvmf/common.sh@447 -- # tail -n +2 00:19:48.300 11:43:18 -- nvmf/common.sh@447 -- # head -n 1 00:19:48.300 11:43:18 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:48.300 11:43:18 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:19:48.300 11:43:18 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:48.300 11:43:18 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:19:48.300 11:43:18 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:19:48.300 11:43:18 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:19:48.300 11:43:18 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:19:48.300 11:43:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:48.300 11:43:18 -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:48.300 11:43:18 -- common/autotest_common.sh@10 -- # set +x 00:19:48.300 11:43:18 -- nvmf/common.sh@470 -- # nvmfpid=3079396 00:19:48.300 11:43:18 -- nvmf/common.sh@471 -- # waitforlisten 3079396 00:19:48.300 11:43:18 -- common/autotest_common.sh@827 -- # '[' -z 3079396 ']' 00:19:48.300 11:43:18 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.300 11:43:18 -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:48.300 11:43:18 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.300 11:43:18 -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:48.300 11:43:18 -- common/autotest_common.sh@10 -- # set +x 00:19:48.300 11:43:18 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:48.300 [2024-05-15 11:43:18.935952] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
00:19:48.300 [2024-05-15 11:43:18.936006] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.300 EAL: No free 2048 kB hugepages reported on node 1 00:19:48.300 [2024-05-15 11:43:19.006936] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:48.559 [2024-05-15 11:43:19.097989] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:48.559 [2024-05-15 11:43:19.098028] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:48.559 [2024-05-15 11:43:19.098037] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:48.559 [2024-05-15 11:43:19.098064] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:48.559 [2024-05-15 11:43:19.098072] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:48.559 [2024-05-15 11:43:19.098125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.559 [2024-05-15 11:43:19.098208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:48.559 [2024-05-15 11:43:19.098287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:48.559 [2024-05-15 11:43:19.098289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.126 11:43:19 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:49.126 11:43:19 -- common/autotest_common.sh@860 -- # return 0 00:19:49.126 11:43:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:49.126 11:43:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:49.126 11:43:19 -- common/autotest_common.sh@10 -- # set +x 00:19:49.126 11:43:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.126 11:43:19 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:19:49.126 11:43:19 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:19:52.416 11:43:22 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:19:52.416 11:43:22 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:19:52.416 11:43:23 -- host/perf.sh@30 -- # local_nvme_trid=0000:5f:00.0 00:19:52.416 11:43:23 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:52.674 11:43:23 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:19:52.674 11:43:23 -- host/perf.sh@33 -- # '[' -n 0000:5f:00.0 ']' 00:19:52.674 11:43:23 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:19:52.674 11:43:23 -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:19:52.674 11:43:23 -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:19:52.674 [2024-05-15 11:43:23.414292] rdma.c:2712:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:19:52.674 [2024-05-15 11:43:23.435756] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x146b140/0x1498f80) succeed. 
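Everything perf.sh has done up to this point is target-side provisioning driven through rpc.py, and the calls appear verbatim in the trace; with the workspace prefix shortened for readability, the sequence is roughly:

rpc=./scripts/rpc.py                          # full path in the trace: .../spdk/scripts/rpc.py
./scripts/gen_nvme.sh | $rpc load_subsystem_config   # attach the local NVMe controller as a bdev
$rpc framework_get_config bdev \
    | jq -r '.[].params | select(.name=="Nvme0").traddr'   # resolves to 0000:5f:00.0 here
$rpc bdev_malloc_create 64 512                # 64 MB RAM-backed bdev, 512-byte blocks -> "Malloc0"
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0
# "-c 0" requests zero in-capsule data; per the warning above, the target
# raises it to 256, the minimum needed to support msdbd=16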
00:19:52.932 [2024-05-15 11:43:23.446773] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x146c780/0x14f8f80) succeed. 00:19:52.932 11:43:23 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:53.190 11:43:23 -- host/perf.sh@45 -- # for bdev in $bdevs 00:19:53.190 11:43:23 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:53.190 11:43:23 -- host/perf.sh@45 -- # for bdev in $bdevs 00:19:53.190 11:43:23 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:19:53.449 11:43:24 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:53.707 [2024-05-15 11:43:24.290693] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:53.707 [2024-05-15 11:43:24.291098] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:53.707 11:43:24 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:19:53.967 11:43:24 -- host/perf.sh@52 -- # '[' -n 0000:5f:00.0 ']' 00:19:53.967 11:43:24 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5f:00.0' 00:19:53.967 11:43:24 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:19:53.968 11:43:24 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5f:00.0' 00:19:55.424 Initializing NVMe Controllers 00:19:55.424 Attached to NVMe Controller at 0000:5f:00.0 [8086:0a54] 00:19:55.424 Associating PCIE (0000:5f:00.0) NSID 1 with lcore 0 00:19:55.424 Initialization complete. Launching workers. 00:19:55.424 ======================================================== 00:19:55.424 Latency(us) 00:19:55.424 Device Information : IOPS MiB/s Average min max 00:19:55.424 PCIE (0000:5f:00.0) NSID 1 from core 0: 99125.84 387.21 322.51 39.67 7211.76 00:19:55.424 ======================================================== 00:19:55.424 Total : 99125.84 387.21 322.51 39.67 7211.76 00:19:55.424 00:19:55.424 11:43:25 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:19:55.424 EAL: No free 2048 kB hugepages reported on node 1 00:19:58.717 Initializing NVMe Controllers 00:19:58.717 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:19:58.717 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:58.717 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:58.717 Initialization complete. Launching workers. 
00:19:58.717 ======================================================== 00:19:58.717 Latency(us) 00:19:58.717 Device Information : IOPS MiB/s Average min max 00:19:58.717 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6868.00 26.83 145.40 47.82 5007.36 00:19:58.717 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5251.00 20.51 190.24 68.28 5046.43 00:19:58.717 ======================================================== 00:19:58.717 Total : 12119.00 47.34 164.83 47.82 5046.43 00:19:58.717 00:19:58.717 11:43:29 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:19:58.717 EAL: No free 2048 kB hugepages reported on node 1 00:20:02.010 Initializing NVMe Controllers 00:20:02.010 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:20:02.010 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:02.010 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:02.010 Initialization complete. Launching workers. 00:20:02.010 ======================================================== 00:20:02.010 Latency(us) 00:20:02.010 Device Information : IOPS MiB/s Average min max 00:20:02.010 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18370.98 71.76 1741.95 486.73 6040.46 00:20:02.010 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4032.00 15.75 7979.12 7717.72 8213.25 00:20:02.010 ======================================================== 00:20:02.010 Total : 22402.98 87.51 2864.49 486.73 8213.25 00:20:02.010 00:20:02.010 11:43:32 -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:20:02.010 11:43:32 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:20:02.010 EAL: No free 2048 kB hugepages reported on node 1 00:20:06.207 Initializing NVMe Controllers 00:20:06.207 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:20:06.207 Controller IO queue size 128, less than required. 00:20:06.207 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:06.207 Controller IO queue size 128, less than required. 00:20:06.207 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:06.207 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:06.207 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:06.207 Initialization complete. Launching workers. 
00:20:06.207 ======================================================== 00:20:06.207 Latency(us) 00:20:06.207 Device Information : IOPS MiB/s Average min max 00:20:06.207 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3376.60 844.15 37972.37 15742.79 96892.15 00:20:06.207 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3546.55 886.64 35755.81 15331.95 63440.32 00:20:06.207 ======================================================== 00:20:06.207 Total : 6923.15 1730.79 36836.88 15331.95 96892.15 00:20:06.207 00:20:06.207 11:43:36 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:20:06.207 EAL: No free 2048 kB hugepages reported on node 1 00:20:06.466 No valid NVMe controllers or AIO or URING devices found 00:20:06.725 Initializing NVMe Controllers 00:20:06.725 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:20:06.725 Controller IO queue size 128, less than required. 00:20:06.725 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:06.725 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:06.725 Controller IO queue size 128, less than required. 00:20:06.725 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:06.725 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:20:06.725 WARNING: Some requested NVMe devices were skipped 00:20:06.725 11:43:37 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:20:06.725 EAL: No free 2048 kB hugepages reported on node 1 00:20:10.923 Initializing NVMe Controllers 00:20:10.923 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:20:10.923 Controller IO queue size 128, less than required. 00:20:10.923 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:10.923 Controller IO queue size 128, less than required. 00:20:10.923 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:10.923 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:10.923 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:10.923 Initialization complete. Launching workers. 
00:20:10.923 00:20:10.923 ==================== 00:20:10.923 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:10.923 RDMA transport: 00:20:10.923 dev name: mlx5_0 00:20:10.923 polls: 396895 00:20:10.923 idle_polls: 393636 00:20:10.923 completions: 42430 00:20:10.923 queued_requests: 1 00:20:10.923 total_send_wrs: 21215 00:20:10.923 send_doorbell_updates: 3005 00:20:10.923 total_recv_wrs: 21342 00:20:10.923 recv_doorbell_updates: 3008 00:20:10.923 --------------------------------- 00:20:10.923 00:20:10.923 ==================== 00:20:10.923 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:10.923 RDMA transport: 00:20:10.923 dev name: mlx5_0 00:20:10.923 polls: 395538 00:20:10.923 idle_polls: 395267 00:20:10.923 completions: 19898 00:20:10.923 queued_requests: 1 00:20:10.923 total_send_wrs: 9949 00:20:10.923 send_doorbell_updates: 251 00:20:10.923 total_recv_wrs: 10076 00:20:10.923 recv_doorbell_updates: 257 00:20:10.923 --------------------------------- 00:20:10.923 ======================================================== 00:20:10.923 Latency(us) 00:20:10.924 Device Information : IOPS MiB/s Average min max 00:20:10.924 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5297.19 1324.30 24144.04 11610.48 67069.95 00:20:10.924 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2484.04 621.01 51474.65 30324.38 78023.68 00:20:10.924 ======================================================== 00:20:10.924 Total : 7781.24 1945.31 32868.93 11610.48 78023.68 00:20:10.924 00:20:10.924 11:43:41 -- host/perf.sh@66 -- # sync 00:20:10.924 11:43:41 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:11.183 11:43:41 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:20:11.183 11:43:41 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:11.183 11:43:41 -- host/perf.sh@114 -- # nvmftestfini 00:20:11.183 11:43:41 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:11.183 11:43:41 -- nvmf/common.sh@117 -- # sync 00:20:11.183 11:43:41 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:20:11.183 11:43:41 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:20:11.183 11:43:41 -- nvmf/common.sh@120 -- # set +e 00:20:11.183 11:43:41 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:11.183 11:43:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:20:11.183 rmmod nvme_rdma 00:20:11.183 rmmod nvme_fabrics 00:20:11.183 11:43:41 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:11.183 11:43:41 -- nvmf/common.sh@124 -- # set -e 00:20:11.183 11:43:41 -- nvmf/common.sh@125 -- # return 0 00:20:11.183 11:43:41 -- nvmf/common.sh@478 -- # '[' -n 3079396 ']' 00:20:11.183 11:43:41 -- nvmf/common.sh@479 -- # killprocess 3079396 00:20:11.183 11:43:41 -- common/autotest_common.sh@946 -- # '[' -z 3079396 ']' 00:20:11.183 11:43:41 -- common/autotest_common.sh@950 -- # kill -0 3079396 00:20:11.183 11:43:41 -- common/autotest_common.sh@951 -- # uname 00:20:11.183 11:43:41 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:11.183 11:43:41 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3079396 00:20:11.183 11:43:41 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:11.183 11:43:41 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:11.183 11:43:41 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3079396' 00:20:11.183 
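A quick read of those transport counters: on NSID 1 the initiator posted 21215 send WRs with only 3005 doorbell updates, i.e. roughly 7 WRs batched per doorbell ring, and polls minus idle_polls (396895 - 393636 = 3259) gives the number of poll calls that actually reaped completions, about 13 of the 42430 completions per productive poll. The NSID 2 queue shows the same shape at lower volume.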
killing process with pid 3079396 00:20:11.183 11:43:41 -- common/autotest_common.sh@965 -- # kill 3079396 00:20:11.183 [2024-05-15 11:43:41.929962] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:11.183 11:43:41 -- common/autotest_common.sh@970 -- # wait 3079396 00:20:11.443 [2024-05-15 11:43:41.984734] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:20:19.564 11:43:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:19.564 11:43:49 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:20:19.564 00:20:19.564 real 0m35.856s 00:20:19.564 user 2m1.701s 00:20:19.564 sys 0m5.516s 00:20:19.564 11:43:49 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:19.564 11:43:49 -- common/autotest_common.sh@10 -- # set +x 00:20:19.564 ************************************ 00:20:19.564 END TEST nvmf_perf 00:20:19.564 ************************************ 00:20:19.564 11:43:49 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:20:19.564 11:43:49 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:19.564 11:43:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:19.564 11:43:49 -- common/autotest_common.sh@10 -- # set +x 00:20:19.564 ************************************ 00:20:19.564 START TEST nvmf_fio_host 00:20:19.564 ************************************ 00:20:19.564 11:43:49 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:20:19.564 * Looking for test storage... 00:20:19.564 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:19.564 11:43:49 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:19.564 11:43:49 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:19.564 11:43:49 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:19.564 11:43:49 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:19.564 11:43:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.564 11:43:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.564 11:43:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.564 11:43:49 -- paths/export.sh@5 -- # export PATH 00:20:19.564 11:43:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.564 11:43:49 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:19.564 11:43:49 -- nvmf/common.sh@7 -- # uname -s 00:20:19.564 11:43:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:19.564 11:43:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:19.564 11:43:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:19.564 11:43:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:19.564 11:43:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:19.564 11:43:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:19.564 11:43:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:19.564 11:43:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:19.564 11:43:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:19.564 11:43:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:19.564 11:43:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:20:19.564 11:43:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:20:19.565 11:43:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:19.565 11:43:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:19.565 11:43:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:19.565 11:43:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:19.565 11:43:49 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:19.565 11:43:49 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:19.565 11:43:49 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:19.565 11:43:49 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:19.565 11:43:49 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.565 11:43:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.565 11:43:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.565 11:43:49 -- paths/export.sh@5 -- # export PATH 00:20:19.565 11:43:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.565 11:43:49 -- nvmf/common.sh@47 -- # : 0 00:20:19.565 11:43:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:19.565 11:43:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:19.565 11:43:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:19.565 11:43:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:19.565 11:43:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:19.565 11:43:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:19.565 11:43:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:19.565 11:43:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:19.565 11:43:49 -- host/fio.sh@12 -- # nvmftestinit 00:20:19.565 11:43:49 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:20:19.565 11:43:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:19.565 11:43:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:19.565 11:43:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:19.565 11:43:49 -- 
nvmf/common.sh@401 -- # remove_spdk_ns 00:20:19.565 11:43:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.565 11:43:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:19.565 11:43:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.565 11:43:49 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:19.565 11:43:49 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:19.565 11:43:49 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:19.565 11:43:49 -- common/autotest_common.sh@10 -- # set +x 00:20:24.840 11:43:55 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:24.840 11:43:55 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:24.840 11:43:55 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:24.840 11:43:55 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:24.840 11:43:55 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:24.840 11:43:55 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:24.840 11:43:55 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:24.840 11:43:55 -- nvmf/common.sh@295 -- # net_devs=() 00:20:24.840 11:43:55 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:24.840 11:43:55 -- nvmf/common.sh@296 -- # e810=() 00:20:24.840 11:43:55 -- nvmf/common.sh@296 -- # local -ga e810 00:20:24.840 11:43:55 -- nvmf/common.sh@297 -- # x722=() 00:20:24.840 11:43:55 -- nvmf/common.sh@297 -- # local -ga x722 00:20:24.840 11:43:55 -- nvmf/common.sh@298 -- # mlx=() 00:20:24.840 11:43:55 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:24.840 11:43:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:24.840 11:43:55 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:24.840 11:43:55 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:24.840 11:43:55 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:24.840 11:43:55 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:24.841 11:43:55 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:24.841 11:43:55 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:24.841 11:43:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:24.841 11:43:55 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:24.841 11:43:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:24.841 11:43:55 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:24.841 11:43:55 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:24.841 11:43:55 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:24.841 11:43:55 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:24.841 11:43:55 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:24.841 11:43:55 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:20:24.841 11:43:55 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:24.841 11:43:55 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:24.841 11:43:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:24.841 11:43:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:20:24.841 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:20:24.841 11:43:55 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:24.841 11:43:55 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:24.841 11:43:55 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:24.841 11:43:55 -- nvmf/common.sh@351 
-- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:24.841 11:43:55 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:24.841 11:43:55 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:24.841 11:43:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:24.841 11:43:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:20:24.841 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:20:24.841 11:43:55 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:24.841 11:43:55 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:24.841 11:43:55 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:24.841 11:43:55 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:24.841 11:43:55 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:24.841 11:43:55 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:24.841 11:43:55 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:24.841 11:43:55 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:24.841 11:43:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:24.841 11:43:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:24.841 11:43:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:24.841 11:43:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:24.841 11:43:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:20:24.841 Found net devices under 0000:18:00.0: mlx_0_0 00:20:24.841 11:43:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:24.841 11:43:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:24.841 11:43:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:24.841 11:43:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:24.841 11:43:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:24.841 11:43:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:20:24.841 Found net devices under 0000:18:00.1: mlx_0_1 00:20:24.841 11:43:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:24.841 11:43:55 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:24.841 11:43:55 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:24.841 11:43:55 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:24.841 11:43:55 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:20:24.841 11:43:55 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:20:24.841 11:43:55 -- nvmf/common.sh@409 -- # rdma_device_init 00:20:24.841 11:43:55 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:20:24.841 11:43:55 -- nvmf/common.sh@58 -- # uname 00:20:24.841 11:43:55 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:24.841 11:43:55 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:24.841 11:43:55 -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:24.841 11:43:55 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:24.841 11:43:55 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:24.841 11:43:55 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:24.841 11:43:55 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:24.841 11:43:55 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:24.841 11:43:55 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:20:24.841 11:43:55 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:24.841 11:43:55 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:24.841 11:43:55 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:24.841 11:43:55 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:24.841 
11:43:55 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:24.841 11:43:55 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:24.841 11:43:55 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:24.841 11:43:55 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:24.841 11:43:55 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:24.841 11:43:55 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:24.841 11:43:55 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:24.841 11:43:55 -- nvmf/common.sh@105 -- # continue 2 00:20:24.841 11:43:55 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:24.841 11:43:55 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:24.841 11:43:55 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:24.841 11:43:55 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:24.841 11:43:55 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:24.841 11:43:55 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:24.841 11:43:55 -- nvmf/common.sh@105 -- # continue 2 00:20:24.841 11:43:55 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:24.841 11:43:55 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:20:24.841 11:43:55 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:24.841 11:43:55 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:24.841 11:43:55 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:24.841 11:43:55 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:24.841 11:43:55 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:24.841 11:43:55 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:24.841 11:43:55 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:24.841 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:24.841 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:20:24.841 altname enp24s0f0np0 00:20:24.841 altname ens785f0np0 00:20:24.841 inet 192.168.100.8/24 scope global mlx_0_0 00:20:24.841 valid_lft forever preferred_lft forever 00:20:24.841 11:43:55 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:24.841 11:43:55 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:24.841 11:43:55 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:24.841 11:43:55 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:24.841 11:43:55 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:24.841 11:43:55 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:24.841 11:43:55 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:24.841 11:43:55 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:24.841 11:43:55 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:24.841 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:24.841 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:20:24.841 altname enp24s0f1np1 00:20:24.841 altname ens785f1np1 00:20:24.841 inet 192.168.100.9/24 scope global mlx_0_1 00:20:24.841 valid_lft forever preferred_lft forever 00:20:24.841 11:43:55 -- nvmf/common.sh@411 -- # return 0 00:20:24.841 11:43:55 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:24.841 11:43:55 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:24.841 11:43:55 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:20:24.841 11:43:55 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:20:24.841 11:43:55 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:24.841 11:43:55 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 
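fio.sh is about to repeat the target-IP split that perf.sh did earlier: the two-line RDMA_IP_LIST is collapsed into first and second target IPs with head/tail. In isolation, with the addresses from this run:

RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9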
00:20:24.841 11:43:55 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:24.841 11:43:55 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:24.841 11:43:55 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:24.841 11:43:55 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:24.841 11:43:55 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:24.841 11:43:55 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:24.841 11:43:55 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:24.841 11:43:55 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:24.841 11:43:55 -- nvmf/common.sh@105 -- # continue 2 00:20:24.841 11:43:55 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:24.841 11:43:55 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:24.841 11:43:55 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:24.841 11:43:55 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:24.841 11:43:55 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:24.841 11:43:55 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:24.841 11:43:55 -- nvmf/common.sh@105 -- # continue 2 00:20:24.841 11:43:55 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:24.841 11:43:55 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:24.841 11:43:55 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:24.841 11:43:55 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:24.841 11:43:55 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:24.841 11:43:55 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:24.841 11:43:55 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:24.841 11:43:55 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:24.841 11:43:55 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:24.841 11:43:55 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:24.841 11:43:55 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:24.841 11:43:55 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:24.841 11:43:55 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:20:24.841 192.168.100.9' 00:20:24.841 11:43:55 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:24.841 192.168.100.9' 00:20:24.841 11:43:55 -- nvmf/common.sh@446 -- # head -n 1 00:20:24.841 11:43:55 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:24.841 11:43:55 -- nvmf/common.sh@447 -- # tail -n +2 00:20:24.841 11:43:55 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:20:24.841 192.168.100.9' 00:20:24.841 11:43:55 -- nvmf/common.sh@447 -- # head -n 1 00:20:24.841 11:43:55 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:24.841 11:43:55 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:20:24.841 11:43:55 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:24.841 11:43:55 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:20:24.841 11:43:55 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:20:24.841 11:43:55 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:20:24.841 11:43:55 -- host/fio.sh@14 -- # [[ y != y ]] 00:20:24.842 11:43:55 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:20:24.842 11:43:55 -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:24.842 11:43:55 -- common/autotest_common.sh@10 -- # set +x 00:20:24.842 11:43:55 -- host/fio.sh@22 -- # nvmfpid=3086155 00:20:24.842 11:43:55 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:20:24.842 11:43:55 -- host/fio.sh@26 -- # waitforlisten 3086155 00:20:24.842 11:43:55 -- host/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:24.842 11:43:55 -- common/autotest_common.sh@827 -- # '[' -z 3086155 ']' 00:20:24.842 11:43:55 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.842 11:43:55 -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:24.842 11:43:55 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.842 11:43:55 -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:24.842 11:43:55 -- common/autotest_common.sh@10 -- # set +x 00:20:24.842 [2024-05-15 11:43:55.448989] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:20:24.842 [2024-05-15 11:43:55.449041] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.842 EAL: No free 2048 kB hugepages reported on node 1 00:20:24.842 [2024-05-15 11:43:55.521789] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:25.101 [2024-05-15 11:43:55.612753] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:25.101 [2024-05-15 11:43:55.612792] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:25.101 [2024-05-15 11:43:55.612801] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:25.101 [2024-05-15 11:43:55.612826] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:25.101 [2024-05-15 11:43:55.612833] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:25.101 [2024-05-15 11:43:55.612886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:25.101 [2024-05-15 11:43:55.612969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:25.101 [2024-05-15 11:43:55.613071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:25.101 [2024-05-15 11:43:55.613073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.670 11:43:56 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:25.670 11:43:56 -- common/autotest_common.sh@860 -- # return 0 00:20:25.670 11:43:56 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:25.670 11:43:56 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.670 11:43:56 -- common/autotest_common.sh@10 -- # set +x 00:20:25.670 [2024-05-15 11:43:56.305831] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x24fbf00/0x25003f0) succeed. 00:20:25.670 [2024-05-15 11:43:56.316467] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x24fd540/0x2541a80) succeed. 
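The xtrace above shows how nvmf/common.sh turns the two RDMA-capable netdevs into target addresses: get_rdma_if_list matches each netdev against the rxe-capable list, and get_ip_address strips the prefix length from `ip -o -4 addr show`. A minimal standalone sketch of that pipeline, assuming the interface names from this run (everything outside the ip/awk/cut chain is illustrative, not the suite's code):

    #!/usr/bin/env bash
    # Sketch: derive the NVMe-oF target IPs the way the log's helpers do.
    get_ip_address() {
        local interface=$1
        # 'ip -o -4 addr show IF' prints one line; field 4 is "A.B.C.D/prefix".
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    ips=()
    for nic in mlx_0_0 mlx_0_1; do      # interface names taken from this run
        ips+=("$(get_ip_address "$nic")")
    done
    NVMF_FIRST_TARGET_IP=${ips[0]}      # 192.168.100.8 here
    NVMF_SECOND_TARGET_IP=${ips[1]}     # 192.168.100.9 here
    echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"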
00:20:25.929 11:43:56 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.929 11:43:56 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:20:25.929 11:43:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:25.929 11:43:56 -- common/autotest_common.sh@10 -- # set +x 00:20:25.929 11:43:56 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:25.929 11:43:56 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.929 11:43:56 -- common/autotest_common.sh@10 -- # set +x 00:20:25.929 Malloc1 00:20:25.929 11:43:56 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.929 11:43:56 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:25.929 11:43:56 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.929 11:43:56 -- common/autotest_common.sh@10 -- # set +x 00:20:25.929 11:43:56 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.929 11:43:56 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:25.929 11:43:56 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.929 11:43:56 -- common/autotest_common.sh@10 -- # set +x 00:20:25.929 11:43:56 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.929 11:43:56 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:25.929 11:43:56 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.929 11:43:56 -- common/autotest_common.sh@10 -- # set +x 00:20:25.929 [2024-05-15 11:43:56.542931] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:25.929 [2024-05-15 11:43:56.543297] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:25.929 11:43:56 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.929 11:43:56 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:20:25.929 11:43:56 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.929 11:43:56 -- common/autotest_common.sh@10 -- # set +x 00:20:25.929 11:43:56 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.929 11:43:56 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:20:25.929 11:43:56 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:20:25.929 11:43:56 -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:20:25.929 11:43:56 -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:20:25.929 11:43:56 -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:25.929 11:43:56 -- common/autotest_common.sh@1335 -- # local sanitizers 00:20:25.929 11:43:56 -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:20:25.929 11:43:56 -- common/autotest_common.sh@1337 -- # shift 00:20:25.929 11:43:56 -- common/autotest_common.sh@1339 -- # local asan_lib= 00:20:25.929 11:43:56 -- 
common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:20:25.929 11:43:56 -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:20:25.929 11:43:56 -- common/autotest_common.sh@1341 -- # grep libasan 00:20:25.929 11:43:56 -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:20:25.929 11:43:56 -- common/autotest_common.sh@1341 -- # asan_lib= 00:20:25.929 11:43:56 -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:20:25.929 11:43:56 -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:20:25.929 11:43:56 -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:20:25.929 11:43:56 -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:20:25.929 11:43:56 -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:20:25.929 11:43:56 -- common/autotest_common.sh@1341 -- # asan_lib= 00:20:25.929 11:43:56 -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:20:25.929 11:43:56 -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:25.929 11:43:56 -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:20:26.188 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:26.188 fio-3.35 00:20:26.188 Starting 1 thread 00:20:26.188 EAL: No free 2048 kB hugepages reported on node 1 00:20:28.750 00:20:28.750 test: (groupid=0, jobs=1): err= 0: pid=3086448: Wed May 15 11:43:59 2024 00:20:28.750 read: IOPS=17.9k, BW=69.7MiB/s (73.1MB/s)(140MiB/2004msec) 00:20:28.750 slat (nsec): min=1389, max=26768, avg=1566.57, stdev=484.81 00:20:28.750 clat (usec): min=1616, max=6535, avg=3559.08, stdev=83.64 00:20:28.750 lat (usec): min=1631, max=6536, avg=3560.65, stdev=83.55 00:20:28.750 clat percentiles (usec): 00:20:28.750 | 1.00th=[ 3523], 5.00th=[ 3523], 10.00th=[ 3523], 20.00th=[ 3556], 00:20:28.750 | 30.00th=[ 3556], 40.00th=[ 3556], 50.00th=[ 3556], 60.00th=[ 3556], 00:20:28.750 | 70.00th=[ 3556], 80.00th=[ 3556], 90.00th=[ 3589], 95.00th=[ 3589], 00:20:28.750 | 99.00th=[ 3621], 99.50th=[ 3654], 99.90th=[ 5145], 99.95th=[ 5997], 00:20:28.750 | 99.99th=[ 6128] 00:20:28.750 bw ( KiB/s): min=69928, max=72184, per=100.00%, avg=71416.00, stdev=1015.86, samples=4 00:20:28.750 iops : min=17482, max=18046, avg=17854.00, stdev=253.97, samples=4 00:20:28.750 write: IOPS=17.8k, BW=69.7MiB/s (73.1MB/s)(140MiB/2004msec); 0 zone resets 00:20:28.750 slat (nsec): min=1447, max=20071, avg=1919.65, stdev=489.95 00:20:28.750 clat (usec): min=2382, max=6555, avg=3556.93, stdev=75.96 00:20:28.750 lat (usec): min=2393, max=6557, avg=3558.85, stdev=75.87 00:20:28.750 clat percentiles (usec): 00:20:28.750 | 1.00th=[ 3523], 5.00th=[ 3523], 10.00th=[ 3523], 20.00th=[ 3556], 00:20:28.750 | 30.00th=[ 3556], 40.00th=[ 3556], 50.00th=[ 3556], 60.00th=[ 3556], 00:20:28.750 | 70.00th=[ 3556], 80.00th=[ 3556], 90.00th=[ 3589], 95.00th=[ 3589], 00:20:28.750 | 99.00th=[ 3621], 99.50th=[ 3654], 99.90th=[ 4359], 99.95th=[ 5276], 00:20:28.750 | 99.99th=[ 6521] 00:20:28.750 bw ( KiB/s): min=69888, max=72168, per=100.00%, avg=71430.00, stdev=1042.61, samples=4 00:20:28.750 iops : min=17472, max=18042, avg=17857.50, stdev=260.65, samples=4 00:20:28.750 lat (msec) : 2=0.01%, 4=99.86%, 10=0.13% 
00:20:28.750 cpu : usr=99.45%, sys=0.10%, ctx=15, majf=0, minf=3 00:20:28.750 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:28.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:28.750 issued rwts: total=35777,35773,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:28.750 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:28.750 00:20:28.750 Run status group 0 (all jobs): 00:20:28.750 READ: bw=69.7MiB/s (73.1MB/s), 69.7MiB/s-69.7MiB/s (73.1MB/s-73.1MB/s), io=140MiB (147MB), run=2004-2004msec 00:20:28.750 WRITE: bw=69.7MiB/s (73.1MB/s), 69.7MiB/s-69.7MiB/s (73.1MB/s-73.1MB/s), io=140MiB (147MB), run=2004-2004msec 00:20:28.750 11:43:59 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:20:28.750 11:43:59 -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:20:28.750 11:43:59 -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:20:28.750 11:43:59 -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:28.750 11:43:59 -- common/autotest_common.sh@1335 -- # local sanitizers 00:20:28.750 11:43:59 -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:20:28.750 11:43:59 -- common/autotest_common.sh@1337 -- # shift 00:20:28.750 11:43:59 -- common/autotest_common.sh@1339 -- # local asan_lib= 00:20:28.750 11:43:59 -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:20:28.750 11:43:59 -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:20:28.750 11:43:59 -- common/autotest_common.sh@1341 -- # grep libasan 00:20:28.750 11:43:59 -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:20:28.750 11:43:59 -- common/autotest_common.sh@1341 -- # asan_lib= 00:20:28.750 11:43:59 -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:20:28.750 11:43:59 -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:20:28.750 11:43:59 -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:20:28.750 11:43:59 -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:20:28.750 11:43:59 -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:20:28.750 11:43:59 -- common/autotest_common.sh@1341 -- # asan_lib= 00:20:28.750 11:43:59 -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:20:28.750 11:43:59 -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:28.750 11:43:59 -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:20:29.021 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:29.021 fio-3.35 00:20:29.021 Starting 1 thread 00:20:29.021 EAL: No free 2048 kB hugepages reported on node 1 00:20:31.555 00:20:31.555 test: (groupid=0, 
jobs=1): err= 0: pid=3086899: Wed May 15 11:44:01 2024 00:20:31.555 read: IOPS=13.0k, BW=203MiB/s (213MB/s)(403MiB/1982msec) 00:20:31.555 slat (nsec): min=2309, max=45244, avg=2690.15, stdev=1105.15 00:20:31.555 clat (usec): min=291, max=9137, avg=1718.92, stdev=996.40 00:20:31.555 lat (usec): min=294, max=9156, avg=1721.61, stdev=996.91 00:20:31.555 clat percentiles (usec): 00:20:31.555 | 1.00th=[ 594], 5.00th=[ 848], 10.00th=[ 988], 20.00th=[ 1156], 00:20:31.555 | 30.00th=[ 1270], 40.00th=[ 1369], 50.00th=[ 1467], 60.00th=[ 1614], 00:20:31.555 | 70.00th=[ 1778], 80.00th=[ 2008], 90.00th=[ 2409], 95.00th=[ 3687], 00:20:31.555 | 99.00th=[ 5932], 99.50th=[ 6915], 99.90th=[ 8586], 99.95th=[ 8717], 00:20:31.555 | 99.99th=[ 9110] 00:20:31.555 bw ( KiB/s): min=99232, max=104416, per=49.20%, avg=102328.00, stdev=2272.28, samples=4 00:20:31.555 iops : min= 6202, max= 6526, avg=6395.50, stdev=142.02, samples=4 00:20:31.555 write: IOPS=7083, BW=111MiB/s (116MB/s)(208MiB/1882msec); 0 zone resets 00:20:31.555 slat (usec): min=27, max=131, avg=30.61, stdev= 6.65 00:20:31.555 clat (usec): min=4293, max=22196, avg=14261.01, stdev=1818.35 00:20:31.555 lat (usec): min=4320, max=22226, avg=14291.62, stdev=1818.14 00:20:31.555 clat percentiles (usec): 00:20:31.555 | 1.00th=[ 7177], 5.00th=[11863], 10.00th=[12387], 20.00th=[13042], 00:20:31.555 | 30.00th=[13435], 40.00th=[13829], 50.00th=[14222], 60.00th=[14615], 00:20:31.555 | 70.00th=[15139], 80.00th=[15664], 90.00th=[16450], 95.00th=[16909], 00:20:31.555 | 99.00th=[18220], 99.50th=[19006], 99.90th=[21103], 99.95th=[21627], 00:20:31.555 | 99.99th=[22152] 00:20:31.555 bw ( KiB/s): min=101504, max=108448, per=93.33%, avg=105776.00, stdev=3229.57, samples=4 00:20:31.555 iops : min= 6344, max= 6778, avg=6611.00, stdev=201.85, samples=4 00:20:31.555 lat (usec) : 500=0.22%, 750=1.65%, 1000=5.00% 00:20:31.555 lat (msec) : 2=45.51%, 4=10.69%, 10=3.38%, 20=33.47%, 50=0.07% 00:20:31.555 cpu : usr=96.46%, sys=1.80%, ctx=182, majf=0, minf=2 00:20:31.555 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:20:31.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:31.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:31.555 issued rwts: total=25764,13331,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:31.555 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:31.555 00:20:31.555 Run status group 0 (all jobs): 00:20:31.555 READ: bw=203MiB/s (213MB/s), 203MiB/s-203MiB/s (213MB/s-213MB/s), io=403MiB (422MB), run=1982-1982msec 00:20:31.555 WRITE: bw=111MiB/s (116MB/s), 111MiB/s-111MiB/s (116MB/s-116MB/s), io=208MiB (218MB), run=1882-1882msec 00:20:31.555 11:44:01 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:31.555 11:44:01 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.555 11:44:01 -- common/autotest_common.sh@10 -- # set +x 00:20:31.555 11:44:01 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.555 11:44:01 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:20:31.555 11:44:01 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:20:31.555 11:44:01 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:20:31.555 11:44:01 -- host/fio.sh@84 -- # nvmftestfini 00:20:31.555 11:44:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:31.555 11:44:01 -- nvmf/common.sh@117 -- # sync 00:20:31.555 11:44:01 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:20:31.555 11:44:01 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:20:31.555 
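Both fio runs above go through SPDK's fio plugin: the wrapper ldd-checks the plugin for ASan libraries, then LD_PRELOADs build/fio/spdk_nvme so fio's ioengine=spdk resolves to it, with the NVMe-oF target encoded in --filename. A hedged approximation of that invocation, using only paths and arguments visible in the log:

    #!/usr/bin/env bash
    # Sketch of the fio_plugin invocation pattern from the xtrace above.
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    PLUGIN=$SPDK/build/fio/spdk_nvme

    # The plugin parses a transport-ID-style filename instead of a block device.
    FILENAME='trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1'

    LD_PRELOAD="$PLUGIN" /usr/src/fio/fio \
        "$SPDK/app/fio/nvme/example_config.fio" \
        --filename="$FILENAME" --bs=4096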
11:44:01 -- nvmf/common.sh@120 -- # set +e 00:20:31.555 11:44:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:31.555 11:44:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:20:31.555 rmmod nvme_rdma 00:20:31.555 rmmod nvme_fabrics 00:20:31.555 11:44:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:31.555 11:44:01 -- nvmf/common.sh@124 -- # set -e 00:20:31.555 11:44:01 -- nvmf/common.sh@125 -- # return 0 00:20:31.555 11:44:01 -- nvmf/common.sh@478 -- # '[' -n 3086155 ']' 00:20:31.555 11:44:01 -- nvmf/common.sh@479 -- # killprocess 3086155 00:20:31.555 11:44:01 -- common/autotest_common.sh@946 -- # '[' -z 3086155 ']' 00:20:31.555 11:44:01 -- common/autotest_common.sh@950 -- # kill -0 3086155 00:20:31.555 11:44:01 -- common/autotest_common.sh@951 -- # uname 00:20:31.555 11:44:01 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:31.555 11:44:01 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3086155 00:20:31.555 11:44:02 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:31.555 11:44:02 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:31.555 11:44:02 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3086155' 00:20:31.555 killing process with pid 3086155 00:20:31.555 11:44:02 -- common/autotest_common.sh@965 -- # kill 3086155 00:20:31.555 [2024-05-15 11:44:02.017307] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:31.555 11:44:02 -- common/autotest_common.sh@970 -- # wait 3086155 00:20:31.555 [2024-05-15 11:44:02.095626] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:20:31.814 11:44:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:31.814 11:44:02 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:20:31.814 00:20:31.814 real 0m13.138s 00:20:31.814 user 0m37.946s 00:20:31.814 sys 0m5.516s 00:20:31.814 11:44:02 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:31.814 11:44:02 -- common/autotest_common.sh@10 -- # set +x 00:20:31.814 ************************************ 00:20:31.814 END TEST nvmf_fio_host 00:20:31.814 ************************************ 00:20:31.814 11:44:02 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:20:31.814 11:44:02 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:31.815 11:44:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:31.815 11:44:02 -- common/autotest_common.sh@10 -- # set +x 00:20:31.815 ************************************ 00:20:31.815 START TEST nvmf_failover 00:20:31.815 ************************************ 00:20:31.815 11:44:02 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:20:31.815 * Looking for test storage... 
00:20:31.815 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:31.815 11:44:02 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:31.815 11:44:02 -- nvmf/common.sh@7 -- # uname -s 00:20:31.815 11:44:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:31.815 11:44:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:31.815 11:44:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:31.815 11:44:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:31.815 11:44:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:31.815 11:44:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:31.815 11:44:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:31.815 11:44:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:31.815 11:44:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:31.815 11:44:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:31.815 11:44:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:20:31.815 11:44:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:20:31.815 11:44:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:31.815 11:44:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:31.815 11:44:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:31.815 11:44:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:31.815 11:44:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:31.815 11:44:02 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:31.815 11:44:02 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:31.815 11:44:02 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:31.815 11:44:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.815 11:44:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.815 11:44:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.815 11:44:02 -- paths/export.sh@5 -- # export PATH 00:20:31.815 11:44:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.815 11:44:02 -- nvmf/common.sh@47 -- # : 0 00:20:31.815 11:44:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:31.815 11:44:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:31.815 11:44:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:31.815 11:44:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:31.815 11:44:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:31.815 11:44:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:31.815 11:44:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:31.815 11:44:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:31.815 11:44:02 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:31.815 11:44:02 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:31.815 11:44:02 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:31.815 11:44:02 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:31.815 11:44:02 -- host/failover.sh@18 -- # nvmftestinit 00:20:31.815 11:44:02 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:20:31.815 11:44:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:31.815 11:44:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:31.815 11:44:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:31.815 11:44:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:31.815 11:44:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:31.815 11:44:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:31.815 11:44:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:31.815 11:44:02 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:31.815 11:44:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:31.815 11:44:02 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:31.815 11:44:02 -- common/autotest_common.sh@10 -- # set +x 00:20:38.442 11:44:08 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:38.442 11:44:08 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:38.442 11:44:08 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:38.442 11:44:08 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:38.442 11:44:08 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:38.442 11:44:08 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:38.442 11:44:08 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:20:38.442 11:44:08 -- nvmf/common.sh@295 -- # net_devs=() 00:20:38.442 11:44:08 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:38.442 11:44:08 -- nvmf/common.sh@296 -- # e810=() 00:20:38.442 11:44:08 -- nvmf/common.sh@296 -- # local -ga e810 00:20:38.442 11:44:08 -- nvmf/common.sh@297 -- # x722=() 00:20:38.442 11:44:08 -- nvmf/common.sh@297 -- # local -ga x722 00:20:38.442 11:44:08 -- nvmf/common.sh@298 -- # mlx=() 00:20:38.442 11:44:08 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:38.442 11:44:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:38.442 11:44:08 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:38.442 11:44:08 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:38.442 11:44:08 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:38.442 11:44:08 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:38.442 11:44:08 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:38.442 11:44:08 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:38.442 11:44:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:38.442 11:44:08 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:38.442 11:44:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:38.442 11:44:08 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:38.442 11:44:08 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:38.442 11:44:08 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:38.442 11:44:08 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:38.442 11:44:08 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:38.442 11:44:08 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:20:38.442 11:44:08 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:38.442 11:44:08 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:38.442 11:44:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:38.442 11:44:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:20:38.443 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:20:38.443 11:44:08 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:38.443 11:44:08 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:38.443 11:44:08 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:38.443 11:44:08 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:38.443 11:44:08 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:38.443 11:44:08 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:38.443 11:44:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:38.443 11:44:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:20:38.443 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:20:38.443 11:44:08 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:38.443 11:44:08 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:38.443 11:44:08 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:38.443 11:44:08 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:38.443 11:44:08 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:38.443 11:44:08 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:38.443 11:44:08 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:38.443 11:44:08 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:38.443 11:44:08 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:38.443 11:44:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.443 11:44:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:38.443 11:44:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.443 11:44:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:20:38.443 Found net devices under 0000:18:00.0: mlx_0_0 00:20:38.443 11:44:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.443 11:44:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:38.443 11:44:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.443 11:44:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:38.443 11:44:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.443 11:44:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:20:38.443 Found net devices under 0000:18:00.1: mlx_0_1 00:20:38.443 11:44:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.443 11:44:08 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:38.443 11:44:08 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:38.443 11:44:08 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:38.443 11:44:08 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:20:38.443 11:44:08 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:20:38.443 11:44:08 -- nvmf/common.sh@409 -- # rdma_device_init 00:20:38.443 11:44:08 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:20:38.443 11:44:08 -- nvmf/common.sh@58 -- # uname 00:20:38.443 11:44:08 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:38.443 11:44:08 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:38.443 11:44:08 -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:38.443 11:44:08 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:38.443 11:44:08 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:38.443 11:44:08 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:38.443 11:44:08 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:38.443 11:44:08 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:38.443 11:44:08 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:20:38.443 11:44:08 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:38.443 11:44:08 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:38.443 11:44:08 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:38.443 11:44:08 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:38.443 11:44:08 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:38.443 11:44:08 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:38.443 11:44:08 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:38.443 11:44:08 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:38.443 11:44:08 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:38.443 11:44:08 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:38.443 11:44:08 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:38.443 11:44:08 -- nvmf/common.sh@105 -- # continue 2 00:20:38.443 11:44:08 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:38.443 11:44:08 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:38.443 11:44:08 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:38.443 11:44:08 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:38.443 11:44:08 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\1 ]] 00:20:38.443 11:44:08 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:38.443 11:44:08 -- nvmf/common.sh@105 -- # continue 2 00:20:38.443 11:44:08 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:38.443 11:44:08 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:20:38.443 11:44:08 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:38.443 11:44:08 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:38.443 11:44:08 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:38.443 11:44:08 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:38.443 11:44:08 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:38.443 11:44:08 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:38.443 11:44:08 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:38.443 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:38.443 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:20:38.443 altname enp24s0f0np0 00:20:38.443 altname ens785f0np0 00:20:38.443 inet 192.168.100.8/24 scope global mlx_0_0 00:20:38.443 valid_lft forever preferred_lft forever 00:20:38.443 11:44:08 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:38.443 11:44:08 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:38.443 11:44:08 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:38.443 11:44:08 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:38.443 11:44:08 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:38.443 11:44:08 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:38.443 11:44:08 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:38.443 11:44:08 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:38.443 11:44:08 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:38.443 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:38.443 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:20:38.443 altname enp24s0f1np1 00:20:38.443 altname ens785f1np1 00:20:38.443 inet 192.168.100.9/24 scope global mlx_0_1 00:20:38.443 valid_lft forever preferred_lft forever 00:20:38.443 11:44:08 -- nvmf/common.sh@411 -- # return 0 00:20:38.443 11:44:08 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:38.443 11:44:08 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:38.443 11:44:08 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:20:38.443 11:44:08 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:20:38.443 11:44:08 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:38.443 11:44:08 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:38.443 11:44:08 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:38.443 11:44:08 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:38.443 11:44:08 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:38.443 11:44:08 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:38.443 11:44:08 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:38.443 11:44:08 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:38.443 11:44:08 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:38.443 11:44:08 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:38.443 11:44:08 -- nvmf/common.sh@105 -- # continue 2 00:20:38.443 11:44:08 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:38.443 11:44:08 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:38.443 11:44:08 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:38.443 11:44:08 -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:20:38.443 11:44:08 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:38.443 11:44:08 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:38.443 11:44:08 -- nvmf/common.sh@105 -- # continue 2 00:20:38.443 11:44:08 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:38.443 11:44:08 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:38.443 11:44:08 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:38.443 11:44:08 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:38.443 11:44:08 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:38.443 11:44:08 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:38.443 11:44:08 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:38.443 11:44:08 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:38.443 11:44:08 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:38.443 11:44:08 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:38.443 11:44:08 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:38.443 11:44:08 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:38.443 11:44:08 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:20:38.443 192.168.100.9' 00:20:38.443 11:44:08 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:38.443 192.168.100.9' 00:20:38.443 11:44:08 -- nvmf/common.sh@446 -- # head -n 1 00:20:38.443 11:44:08 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:38.443 11:44:08 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:20:38.443 192.168.100.9' 00:20:38.443 11:44:08 -- nvmf/common.sh@447 -- # tail -n +2 00:20:38.443 11:44:08 -- nvmf/common.sh@447 -- # head -n 1 00:20:38.443 11:44:08 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:38.443 11:44:08 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:20:38.443 11:44:08 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:38.443 11:44:08 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:20:38.443 11:44:08 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:20:38.443 11:44:08 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:20:38.443 11:44:08 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:20:38.443 11:44:08 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:38.443 11:44:08 -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:38.443 11:44:08 -- common/autotest_common.sh@10 -- # set +x 00:20:38.443 11:44:08 -- nvmf/common.sh@470 -- # nvmfpid=3089994 00:20:38.443 11:44:08 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:38.443 11:44:08 -- nvmf/common.sh@471 -- # waitforlisten 3089994 00:20:38.443 11:44:08 -- common/autotest_common.sh@827 -- # '[' -z 3089994 ']' 00:20:38.443 11:44:08 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.443 11:44:08 -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:38.443 11:44:08 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.443 11:44:08 -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:38.444 11:44:08 -- common/autotest_common.sh@10 -- # set +x 00:20:38.444 [2024-05-15 11:44:08.520782] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
00:20:38.444 [2024-05-15 11:44:08.520847] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.444 EAL: No free 2048 kB hugepages reported on node 1 00:20:38.444 [2024-05-15 11:44:08.595711] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:38.444 [2024-05-15 11:44:08.687935] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.444 [2024-05-15 11:44:08.687973] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.444 [2024-05-15 11:44:08.687983] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.444 [2024-05-15 11:44:08.687992] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:38.444 [2024-05-15 11:44:08.687999] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:38.444 [2024-05-15 11:44:08.688049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:38.444 [2024-05-15 11:44:08.688129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:38.444 [2024-05-15 11:44:08.688133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.702 11:44:09 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:38.702 11:44:09 -- common/autotest_common.sh@860 -- # return 0 00:20:38.702 11:44:09 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:38.702 11:44:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:38.702 11:44:09 -- common/autotest_common.sh@10 -- # set +x 00:20:38.702 11:44:09 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.702 11:44:09 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:38.961 [2024-05-15 11:44:09.553971] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1bed700/0x1bf1bf0) succeed. 00:20:38.961 [2024-05-15 11:44:09.564455] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1beeca0/0x1c33280) succeed. 
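At this point nvmf_tgt is up on core mask 0xE and the first RPC creates the RDMA transport; the two create_ib_device notices confirm both mlx5 ports were claimed. A hedged sketch of that bring-up, where the rpc_get_methods polling loop is an assumption standing in for the suite's waitforlisten helper:

    #!/usr/bin/env bash
    # Sketch: launch the target, wait for its RPC socket, create the transport.
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    rpc=$SPDK/scripts/rpc.py

    $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    # Assumption: poll a cheap RPC until the socket answers (waitforlisten stand-in).
    until $rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192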
00:20:38.961 11:44:09 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:39.220 Malloc0 00:20:39.220 11:44:09 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:39.481 11:44:10 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:39.741 11:44:10 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:39.741 [2024-05-15 11:44:10.424674] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:39.741 [2024-05-15 11:44:10.425086] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:39.741 11:44:10 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:20:40.001 [2024-05-15 11:44:10.609388] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:20:40.001 11:44:10 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:20:40.260 [2024-05-15 11:44:10.785999] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:20:40.260 11:44:10 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:20:40.260 11:44:10 -- host/failover.sh@31 -- # bdevperf_pid=3090374 00:20:40.260 11:44:10 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:40.260 11:44:10 -- host/failover.sh@34 -- # waitforlisten 3090374 /var/tmp/bdevperf.sock 00:20:40.260 11:44:10 -- common/autotest_common.sh@827 -- # '[' -z 3090374 ']' 00:20:40.260 11:44:10 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:40.260 11:44:10 -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:40.260 11:44:10 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:40.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
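The failover topology built here is a single subsystem backed by Malloc0 and exposed on three portals of the same address, plus an idle bdevperf on its own RPC socket. A condensed sketch, reusing only the commands shown in the log:

    #!/usr/bin/env bash
    # Sketch: one namespace, three RDMA listeners, bdevperf parked with -z.
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    rpc=$SPDK/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem $NQN -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns $NQN Malloc0
    for port in 4420 4421 4422; do
        $rpc nvmf_subsystem_add_listener $NQN -t rdma -a 192.168.100.8 -s $port
    done

    # -z keeps bdevperf idle until perform_tests arrives on /var/tmp/bdevperf.sock.
    $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 15 -f &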
00:20:40.260 11:44:10 -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:40.260 11:44:10 -- common/autotest_common.sh@10 -- # set +x 00:20:41.197 11:44:11 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:41.197 11:44:11 -- common/autotest_common.sh@860 -- # return 0 00:20:41.197 11:44:11 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:41.197 NVMe0n1 00:20:41.197 11:44:11 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:41.455 00:20:41.455 11:44:12 -- host/failover.sh@39 -- # run_test_pid=3090558 00:20:41.455 11:44:12 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:41.455 11:44:12 -- host/failover.sh@41 -- # sleep 1 00:20:42.831 11:44:13 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:42.831 11:44:13 -- host/failover.sh@45 -- # sleep 3 00:20:46.118 11:44:16 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:46.118 00:20:46.118 11:44:16 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:20:46.118 11:44:16 -- host/failover.sh@50 -- # sleep 3 00:20:49.405 11:44:19 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:49.405 [2024-05-15 11:44:19.987017] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:49.405 11:44:20 -- host/failover.sh@55 -- # sleep 1 00:20:50.341 11:44:21 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:20:50.600 11:44:21 -- host/failover.sh@59 -- # wait 3090558 00:20:57.172 0 00:20:57.172 11:44:27 -- host/failover.sh@61 -- # killprocess 3090374 00:20:57.172 11:44:27 -- common/autotest_common.sh@946 -- # '[' -z 3090374 ']' 00:20:57.172 11:44:27 -- common/autotest_common.sh@950 -- # kill -0 3090374 00:20:57.172 11:44:27 -- common/autotest_common.sh@951 -- # uname 00:20:57.172 11:44:27 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:57.172 11:44:27 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3090374 00:20:57.172 11:44:27 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:57.172 11:44:27 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:57.172 11:44:27 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3090374' 00:20:57.172 killing process with pid 3090374 00:20:57.172 11:44:27 -- common/autotest_common.sh@965 -- # kill 3090374 00:20:57.172 11:44:27 -- common/autotest_common.sh@970 -- # wait 3090374 00:20:57.172 11:44:27 -- host/failover.sh@63 -- # cat 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:57.172 [2024-05-15 11:44:10.848566] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:20:57.172 [2024-05-15 11:44:10.848628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3090374 ] 00:20:57.172 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.172 [2024-05-15 11:44:10.920923] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.172 [2024-05-15 11:44:11.003221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.172 Running I/O for 15 seconds... 00:20:57.172 [2024-05-15 11:44:14.344834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.172 [2024-05-15 11:44:14.344875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:20:57.172 [2024-05-15 11:44:14.344888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.172 [2024-05-15 11:44:14.344898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:20:57.172 [2024-05-15 11:44:14.344908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.172 [2024-05-15 11:44:14.344918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:20:57.172 [2024-05-15 11:44:14.344928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.172 [2024-05-15 11:44:14.344937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:20:57.172 [2024-05-15 11:44:14.346696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:57.172 [2024-05-15 11:44:14.346712] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
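The flood of "ABORTED - SQ DELETION" completions that follows is the expected signature of a path being torn down: removing the 4420 listener deletes its queue pairs, every in-flight WRITE completes aborted, and bdev_nvme fails the I/O over to the next registered trid (4421). A hedged sketch of the sequence that produces it, using only commands already shown earlier in this log:

    #!/usr/bin/env bash
    # Sketch: register two paths on one bdev, start I/O, then yank the active portal.
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    rpc=$SPDK/scripts/rpc.py
    brpc="$rpc -s /var/tmp/bdevperf.sock"
    NQN=nqn.2016-06.io.spdk:cnode1

    $brpc bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n $NQN
    $brpc bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n $NQN
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

    sleep 1
    # Dropping the active portal triggers the 4420 -> 4421 failover logged below.
    $rpc nvmf_subsystem_remove_listener $NQN -t rdma -a 192.168.100.8 -s 4420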
00:20:57.172 [2024-05-15 11:44:14.346729] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:20:57.172 [2024-05-15 11:44:14.346740] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:20:57.172 [2024-05-15 11:44:14.346757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:26952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:57.172 [2024-05-15 11:44:14.346768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:162a5c30 sqhd:0030 p:0 m:0 dnr:0
[... the same command/completion NOTICE pair repeats for WRITE lba:26960 through lba:27640 (sequential 8-block writes), each completed with ABORTED - SQ DELETION (00/08) ...]
00:20:57.174 [2024-05-15 11:44:14.350417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:26624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x187000
00:20:57.174 [2024-05-15 11:44:14.350428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:162a5c30 sqhd:0030 p:0 m:0 dnr:0
[... the same command/completion NOTICE pair repeats for READ lba:26632 through lba:26936 (sequential 8-block reads, keyed SGL buffers 0x200007502000 through 0x20000754e000, key:0x187000), each completed with ABORTED - SQ DELETION (00/08) ...]
00:20:57.175 [2024-05-15 11:44:14.366754] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:20:57.175 [2024-05-15 11:44:14.366772] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:20:57.175 [2024-05-15 11:44:14.366782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26944 len:8 PRP1 0x0 PRP2 0x0
00:20:57.175 [2024-05-15 11:44:14.366792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:57.175 [2024-05-15 11:44:14.366865] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller.
00:20:57.175 [2024-05-15 11:44:14.366876] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:57.175 [2024-05-15 11:44:14.366906] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:57.175 [2024-05-15 11:44:14.369684] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.175 [2024-05-15 11:44:14.413706] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
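The sequence above is the substance of this burst: bdev_nvme starts a failover from 192.168.100.8:4420 to 4421, every command still queued on the I/O qpair completes with ABORTED - SQ DELETION (SCT 0x0 / SC 0x8, the "(00/08)" in each completion line), and the controller is then reset and reconnected. The two "Unable to perform failover, already in progress" notices are consistent with further failover requests arriving while that reset is still in flight. Below is a minimal C sketch of the same mechanism at the public SPDK NVMe driver API level; it is hypothetical, not the harness that produced this log, assumes an SPDK development install, and only the transport address and subsystem NQN are taken from the log lines above.

/*
 * Hedged sketch: connect to the NVMe-oF target from the log and trigger a
 * controller reset; queued I/O is then completed with SQ-deletion status,
 * as printed in the NOTICE lines above.
 */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

static void
io_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	(void)arg;
	/* Commands caught by the reset complete with SCT 0x0 / SC 0x8,
	 * printed in the log as "ABORTED - SQ DELETION (00/08)". */
	if (spdk_nvme_cpl_is_error(cpl)) {
		printf("aborted: sct=0x%x sc=0x%x\n", cpl->status.sct, cpl->status.sc);
	}
}

int
main(void)
{
	struct spdk_env_opts opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_env_opts_init(&opts);
	if (spdk_env_init(&opts) != 0) {
		return 1;
	}

	/* Primary path from the log; the failover listener is trsvcid 4421. */
	if (spdk_nvme_transport_id_parse(&trid,
	        "trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 "
	        "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* I/O submitted on an I/O qpair would pass io_done() as its callback,
	 * e.g. via spdk_nvme_ns_cmd_read()/spdk_nvme_ns_cmd_write(). */
	(void)io_done;

	/* Disconnects the qpairs (the SQ deletion seen above), aborts anything
	 * still queued, then reconnects - the "resetting controller" /
	 * "Resetting controller successful." lines in the log. */
	spdk_nvme_ctrlr_reset(ctrlr);

	spdk_nvme_detach(ctrlr);
	return 0;
}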
00:20:57.175 [2024-05-15 11:44:17.797082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:120504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:57.175 [2024-05-15 11:44:17.797125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0
[... the same command/completion NOTICE pair repeats for interleaved WRITE lba:120512 through lba:120744 and READ lba:119856 through lba:120120 (keyed SGL buffers, key:0x187000), each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:32765 ...]
00:20:57.177 [2024-05-15 11:44:17.798461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:120128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x187000
00:20:57.177 [2024-05-15 11:44:17.798470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765
cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.177 [2024-05-15 11:44:17.798481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:120136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x187000 00:20:57.177 [2024-05-15 11:44:17.798490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.177 [2024-05-15 11:44:17.798502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:120144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x187000 00:20:57.177 [2024-05-15 11:44:17.798511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.177 [2024-05-15 11:44:17.798522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:120152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x187000 00:20:57.177 [2024-05-15 11:44:17.798532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.178 [2024-05-15 11:44:17.798542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:120160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x187000 00:20:57.178 [2024-05-15 11:44:17.798552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.178 [2024-05-15 11:44:17.798563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:120168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x187000 00:20:57.178 [2024-05-15 11:44:17.798572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.178 [2024-05-15 11:44:17.798583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:120752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.178 [2024-05-15 11:44:17.798592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.178 [2024-05-15 11:44:17.798603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:120760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.178 [2024-05-15 11:44:17.798612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.178 [2024-05-15 11:44:17.798623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:120768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.178 [2024-05-15 11:44:17.798632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.178 [2024-05-15 11:44:17.798642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:120776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.178 [2024-05-15 11:44:17.798652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.178 [2024-05-15 11:44:17.798662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:102 nsid:1 lba:120784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.178 [2024-05-15 11:44:17.798671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.178 [2024-05-15 11:44:17.798685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:120792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.178 [2024-05-15 11:44:17.798694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.178 [2024-05-15 11:44:17.798705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:120800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.178 [2024-05-15 11:44:17.798714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.178 [2024-05-15 11:44:17.798725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:120808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.178 [2024-05-15 11:44:17.798734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.178 [2024-05-15 11:44:17.798744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:120176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x187000 00:20:57.178 [2024-05-15 11:44:17.798754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.178 [2024-05-15 11:44:17.798764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:120184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x187000 00:20:57.178 [2024-05-15 11:44:17.798774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.178 [2024-05-15 11:44:17.798785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:120192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x187000 00:20:57.178 [2024-05-15 11:44:17.798794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.178 [2024-05-15 11:44:17.798805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:120200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x187000 00:20:57.178 [2024-05-15 11:44:17.798814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.178 [2024-05-15 11:44:17.798825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:120208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x187000 00:20:57.178 [2024-05-15 11:44:17.798834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.178 [2024-05-15 11:44:17.798845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:120216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x187000 00:20:57.178 [2024-05-15 11:44:17.798854] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.178 [2024-05-15 11:44:17.798865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:120224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x187000 00:20:57.178 [2024-05-15 11:44:17.798874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.178 [2024-05-15 11:44:17.798885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:120232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x187000 00:20:57.178 [2024-05-15 11:44:17.798894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.178 [2024-05-15 11:44:17.798905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:120240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x187000 00:20:57.178 [2024-05-15 11:44:17.798916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.178 [2024-05-15 11:44:17.798927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:120248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x187000 00:20:57.178 [2024-05-15 11:44:17.798936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.178 [2024-05-15 11:44:17.798947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x187000 00:20:57.178 [2024-05-15 11:44:17.798956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.178 [2024-05-15 11:44:17.798967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:120264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x187000 00:20:57.178 [2024-05-15 11:44:17.798976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.178 [2024-05-15 11:44:17.798988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:120272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x187000 00:20:57.178 [2024-05-15 11:44:17.798997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.178 [2024-05-15 11:44:17.799008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:120280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x187000 00:20:57.178 [2024-05-15 11:44:17.799018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.178 [2024-05-15 11:44:17.799029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:120288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x187000 00:20:57.178 [2024-05-15 11:44:17.799038] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.178 [2024-05-15 11:44:17.799049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:120296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x187000 00:20:57.178 [2024-05-15 11:44:17.799063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.178 [2024-05-15 11:44:17.799074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.178 [2024-05-15 11:44:17.799083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.178 [2024-05-15 11:44:17.799094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:120824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.178 [2024-05-15 11:44:17.799103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.178 [2024-05-15 11:44:17.799114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.178 [2024-05-15 11:44:17.799123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.178 [2024-05-15 11:44:17.799134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:120840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.178 [2024-05-15 11:44:17.799143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.178 [2024-05-15 11:44:17.799156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:120848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.178 [2024-05-15 11:44:17.799165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.178 [2024-05-15 11:44:17.799176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:120856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.178 [2024-05-15 11:44:17.799185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.178 [2024-05-15 11:44:17.799196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:120864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.178 [2024-05-15 11:44:17.799205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.178 [2024-05-15 11:44:17.799216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.178 [2024-05-15 11:44:17.799225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.178 [2024-05-15 11:44:17.799236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:120304 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x200007570000 len:0x1000 key:0x187000 00:20:57.178 [2024-05-15 11:44:17.799245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.178 [2024-05-15 11:44:17.799256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:120312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x187000 00:20:57.178 [2024-05-15 11:44:17.799265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.178 [2024-05-15 11:44:17.799276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:120320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x187000 00:20:57.178 [2024-05-15 11:44:17.799286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.179 [2024-05-15 11:44:17.799296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:120328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x187000 00:20:57.179 [2024-05-15 11:44:17.799306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.179 [2024-05-15 11:44:17.799316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:120336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x187000 00:20:57.179 [2024-05-15 11:44:17.799326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.179 [2024-05-15 11:44:17.799337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:120344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x187000 00:20:57.179 [2024-05-15 11:44:17.799347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.179 [2024-05-15 11:44:17.799358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:120352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x187000 00:20:57.179 [2024-05-15 11:44:17.799367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.179 [2024-05-15 11:44:17.799378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:120360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x187000 00:20:57.179 [2024-05-15 11:44:17.799389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.179 [2024-05-15 11:44:17.799400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:120368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x187000 00:20:57.179 [2024-05-15 11:44:17.799410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.179 [2024-05-15 11:44:17.799420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:120376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 
key:0x187000 00:20:57.179 [2024-05-15 11:44:17.799430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.179 [2024-05-15 11:44:17.799440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:120384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x187000 00:20:57.179 [2024-05-15 11:44:17.799450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.179 [2024-05-15 11:44:17.799461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:120392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x187000 00:20:57.179 [2024-05-15 11:44:17.799470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.179 [2024-05-15 11:44:17.799481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:120400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x187000 00:20:57.179 [2024-05-15 11:44:17.799491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.179 [2024-05-15 11:44:17.799502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:120408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x187000 00:20:57.179 [2024-05-15 11:44:17.799511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.179 [2024-05-15 11:44:17.799522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:120416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x187000 00:20:57.179 [2024-05-15 11:44:17.799532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.179 [2024-05-15 11:44:17.799543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:120424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x187000 00:20:57.179 [2024-05-15 11:44:17.799552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.179 [2024-05-15 11:44:17.799563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:120432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x187000 00:20:57.179 [2024-05-15 11:44:17.799572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.179 [2024-05-15 11:44:17.799583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:120440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x187000 00:20:57.179 [2024-05-15 11:44:17.799593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.179 [2024-05-15 11:44:17.799603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:120448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x187000 00:20:57.179 
[2024-05-15 11:44:17.799613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.179 [2024-05-15 11:44:17.799627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:120456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x187000 00:20:57.179 [2024-05-15 11:44:17.799637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.179 [2024-05-15 11:44:17.799648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:120464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x187000 00:20:57.179 [2024-05-15 11:44:17.799657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.179 [2024-05-15 11:44:17.799668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:120472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x187000 00:20:57.179 [2024-05-15 11:44:17.799677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.179 [2024-05-15 11:44:17.799688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:120480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x187000 00:20:57.179 [2024-05-15 11:44:17.799698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.179 [2024-05-15 11:44:17.799708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:120488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x187000 00:20:57.179 [2024-05-15 11:44:17.799718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.179 [2024-05-15 11:44:17.801676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:57.179 [2024-05-15 11:44:17.801690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:57.179 [2024-05-15 11:44:17.801699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120496 len:8 PRP1 0x0 PRP2 0x0 00:20:57.179 [2024-05-15 11:44:17.801709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.179 [2024-05-15 11:44:17.801751] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller. 00:20:57.179 [2024-05-15 11:44:17.801763] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:20:57.179 [2024-05-15 11:44:17.801774] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
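Every completion in the dump above carries the status pair (00/08): status code type 0x0 (Generic Command Status) with status code 0x08, which the NVMe spec defines as Command Aborted due to SQ Deletion, hence the "ABORTED - SQ DELETION" string printed by spdk_nvme_print_completion. A minimal decoding sketch follows (illustrative only, not SPDK's implementation):

/* Minimal sketch (not SPDK's code): decode the "(SCT/SC)" pair that
 * spdk_nvme_print_completion shows, e.g. "(00/08)" above. Per the NVMe
 * spec, SCT 0x0 is Generic Command Status, and SC 0x08 in that set is
 * "Command Aborted due to SQ Deletion". */
#include <stdint.h>
#include <stdio.h>

static const char *nvme_status_str(uint8_t sct, uint8_t sc)
{
    if (sct == 0x0) {                  /* Generic Command Status */
        switch (sc) {
        case 0x00: return "SUCCESS";
        case 0x04: return "ABORTED - BY REQUEST";
        case 0x08: return "ABORTED - SQ DELETION";
        default:   break;
        }
    }
    return "UNKNOWN";
}

int main(void)
{
    /* The completions above all carry SCT 0x0 / SC 0x08. */
    printf("(%02x/%02x) -> %s\n", 0x0, 0x08, nvme_status_str(0x0, 0x08));
    return 0;
}

Note also dnr:0 on the same lines: the Do Not Retry bit is clear, so these aborted I/Os may be retried once the failover below completes.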
00:20:57.179 [2024-05-15 11:44:17.804592] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.179 [2024-05-15 11:44:17.819033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:20:57.179 [2024-05-15 11:44:17.866679] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
[... 2024-05-15 11:44:22.193619 - 11:44:22.195609: second run of repeated nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion pairs condensed: queued WRITE commands (sqid:1, lba:97200-97760, len:8, SGL DATA BLOCK OFFSET 0x0) and READ commands (sqid:1, lba:96816-97000, len:8, SGL KEYED DATA BLOCK, key:0x187000) each completed as ABORTED - SQ DELETION (00/08) qid:1 p:0 m:0 dnr:0 ...]
WRITE sqid:1 cid:45 nsid:1 lba:97768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.182 [2024-05-15 11:44:22.195618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.182 [2024-05-15 11:44:22.195629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:97776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.182 [2024-05-15 11:44:22.195638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.182 [2024-05-15 11:44:22.195649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.182 [2024-05-15 11:44:22.195658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.182 [2024-05-15 11:44:22.195669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.182 [2024-05-15 11:44:22.195680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.182 [2024-05-15 11:44:22.195690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.182 [2024-05-15 11:44:22.195699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.182 [2024-05-15 11:44:22.195710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:97808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.182 [2024-05-15 11:44:22.195719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.182 [2024-05-15 11:44:22.195730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.182 [2024-05-15 11:44:22.195739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.182 [2024-05-15 11:44:22.195750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.182 [2024-05-15 11:44:22.195759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.182 [2024-05-15 11:44:22.195770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:97008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x187000 00:20:57.182 [2024-05-15 11:44:22.195779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.182 [2024-05-15 11:44:22.195790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:97016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x187000 00:20:57.182 [2024-05-15 11:44:22.195800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 
dnr:0 00:20:57.182 [2024-05-15 11:44:22.195810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:97024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x187000 00:20:57.182 [2024-05-15 11:44:22.195820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.182 [2024-05-15 11:44:22.195831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:97032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x187000 00:20:57.182 [2024-05-15 11:44:22.195840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.182 [2024-05-15 11:44:22.195851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:97040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x187000 00:20:57.182 [2024-05-15 11:44:22.195860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.182 [2024-05-15 11:44:22.195871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x187000 00:20:57.182 [2024-05-15 11:44:22.195880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.182 [2024-05-15 11:44:22.195892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x187000 00:20:57.182 [2024-05-15 11:44:22.195901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.182 [2024-05-15 11:44:22.195916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:97064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x187000 00:20:57.182 [2024-05-15 11:44:22.195926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.182 [2024-05-15 11:44:22.195936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:97072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x187000 00:20:57.182 [2024-05-15 11:44:22.195946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.182 [2024-05-15 11:44:22.195957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:97080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x187000 00:20:57.182 [2024-05-15 11:44:22.195966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.182 [2024-05-15 11:44:22.195977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x187000 00:20:57.182 [2024-05-15 11:44:22.195986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.182 [2024-05-15 
11:44:22.195997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:97096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x187000 00:20:57.182 [2024-05-15 11:44:22.196007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.182 [2024-05-15 11:44:22.196018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:97104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x187000 00:20:57.182 [2024-05-15 11:44:22.196027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.182 [2024-05-15 11:44:22.196038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:97112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x187000 00:20:57.182 [2024-05-15 11:44:22.196047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.182 [2024-05-15 11:44:22.196061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:97120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x187000 00:20:57.182 [2024-05-15 11:44:22.196070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.182 [2024-05-15 11:44:22.196081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:97128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x187000 00:20:57.182 [2024-05-15 11:44:22.196090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.182 [2024-05-15 11:44:22.196101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:97136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x187000 00:20:57.182 [2024-05-15 11:44:22.196111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.182 [2024-05-15 11:44:22.196121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:97144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x187000 00:20:57.182 [2024-05-15 11:44:22.196131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.183 [2024-05-15 11:44:22.196142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:97152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x187000 00:20:57.183 [2024-05-15 11:44:22.196153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.183 [2024-05-15 11:44:22.196164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:97160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x187000 00:20:57.183 [2024-05-15 11:44:22.196173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.183 [2024-05-15 11:44:22.196184] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:97168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x187000 00:20:57.183 [2024-05-15 11:44:22.196193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.183 [2024-05-15 11:44:22.196204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:97176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x187000 00:20:57.183 [2024-05-15 11:44:22.196214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.183 [2024-05-15 11:44:22.196224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:97184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x187000 00:20:57.183 [2024-05-15 11:44:22.196234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.183 [2024-05-15 11:44:22.196244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:97192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x187000 00:20:57.183 [2024-05-15 11:44:22.196254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:5e20 p:0 m:0 dnr:0 00:20:57.183 [2024-05-15 11:44:22.198126] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:57.183 [2024-05-15 11:44:22.198139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:57.183 [2024-05-15 11:44:22.198148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97832 len:8 PRP1 0x0 PRP2 0x0 00:20:57.183 [2024-05-15 11:44:22.198158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.183 [2024-05-15 11:44:22.198201] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller. 00:20:57.183 [2024-05-15 11:44:22.198215] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:20:57.183 [2024-05-15 11:44:22.198225] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.183 [2024-05-15 11:44:22.201026] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.183 [2024-05-15 11:44:22.215127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:57.183 [2024-05-15 11:44:22.258945] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
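Every entry in the elided block above is the same two-line pattern: the driver prints the queued command, then completes it with ABORTED - SQ DELETION (00/08), the generic NVMe status a command receives when its submission queue is deleted mid-flight; that is expected here, since bdev_nvme deletes the qpair to fail over to the next registered path. When reading a capture like this offline, the failover timeline is easier to pull out with a filter. A minimal sketch against the try.txt capture this test writes (the grep patterns match the notices shown above):

  log=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
  # path switches and reset completions only, without the per-command abort noise
  grep -E 'bdev_nvme_failover_trid|_bdev_nvme_reset_ctrlr_complete' "$log"
  # the test asserts this count right after the run: 3, one per forced path switch
  grep -c 'Resetting controller successful' "$log"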
00:20:57.183
00:20:57.183 Latency(us)
00:20:57.183 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:57.183 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:20:57.183 Verification LBA range: start 0x0 length 0x4000
00:20:57.183 NVMe0n1 : 15.01 14444.75 56.42 320.13 0.00 8650.84 357.95 1035810.73
00:20:57.183 ===================================================================================================================
00:20:57.183 Total : 14444.75 56.42 320.13 0.00 8650.84 357.95 1035810.73
00:20:57.183 Received shutdown signal, test time was about 15.000000 seconds
00:20:57.183
00:20:57.183 Latency(us)
00:20:57.183 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:57.183 ===================================================================================================================
00:20:57.183 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:57.183 11:44:27 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:20:57.183 11:44:27 -- host/failover.sh@65 -- # count=3
00:20:57.183 11:44:27 -- host/failover.sh@67 -- # (( count != 3 ))
00:20:57.183 11:44:27 -- host/failover.sh@73 -- # bdevperf_pid=3092552
00:20:57.183 11:44:27 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:20:57.183 11:44:27 -- host/failover.sh@75 -- # waitforlisten 3092552 /var/tmp/bdevperf.sock
00:20:57.183 11:44:27 -- common/autotest_common.sh@827 -- # '[' -z 3092552 ']'
00:20:57.183 11:44:27 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:57.183 11:44:27 -- common/autotest_common.sh@832 -- # local max_retries=100
00:20:57.183 11:44:27 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
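The bdevperf instance just launched stays idle because of -z (do not start I/O until a perform_tests RPC arrives), with -r pointing it at a private UNIX-domain socket and -q 128 / -o 4096 / -w verify requesting a queue-depth-128, 4096-byte verify workload; waitforlisten merely polls that socket before any RPC is issued. The trace that follows then adds two more target listeners, registers the same controller name over all three paths, and detaches the active path to force a failover. Condensed into one sketch (not the harness code itself; the polling loop stands in for the more elaborate waitforlisten, and rpc_get_methods is used only as a cheap liveness probe):

  spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  $spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  until $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5   # keep polling until the UNIX-domain RPC socket accepts connections
  done
  # expose two extra listeners on the target, then register one controller name over all three paths
  for port in 4421 4422; do
      $spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s $port
  done
  for port in 4420 4421 4422; do
      $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done
  # dropping the active path forces bdev_nvme to fail over to the next registered trid
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests   # start the timed I/O run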
00:20:57.183 11:44:27 -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:57.183 11:44:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.751 11:44:28 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:57.751 11:44:28 -- common/autotest_common.sh@860 -- # return 0 00:20:57.751 11:44:28 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:20:58.009 [2024-05-15 11:44:28.662553] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:20:58.010 11:44:28 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:20:58.268 [2024-05-15 11:44:28.847184] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:20:58.268 11:44:28 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:58.527 NVMe0n1 00:20:58.527 11:44:29 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:58.785 00:20:58.785 11:44:29 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:59.044 00:20:59.044 11:44:29 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:59.044 11:44:29 -- host/failover.sh@82 -- # grep -q NVMe0 00:20:59.302 11:44:29 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:59.302 11:44:29 -- host/failover.sh@87 -- # sleep 3 00:21:02.585 11:44:32 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:02.585 11:44:32 -- host/failover.sh@88 -- # grep -q NVMe0 00:21:02.585 11:44:33 -- host/failover.sh@90 -- # run_test_pid=3093293 00:21:02.585 11:44:33 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:02.585 11:44:33 -- host/failover.sh@92 -- # wait 3093293 00:21:03.521 0 00:21:03.779 11:44:34 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:03.779 [2024-05-15 11:44:27.684677] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
00:21:03.779 [2024-05-15 11:44:27.684745] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3092552 ]
00:21:03.779 EAL: No free 2048 kB hugepages reported on node 1
00:21:03.779 [2024-05-15 11:44:27.755345] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:03.779 [2024-05-15 11:44:27.838113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:21:03.779 [2024-05-15 11:44:29.954499] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:21:03.779 [2024-05-15 11:44:29.955154] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:03.779 [2024-05-15 11:44:29.955185] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:03.779 [2024-05-15 11:44:29.979191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:21:03.779 [2024-05-15 11:44:29.995477] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:21:03.779 Running I/O for 1 seconds...
00:21:03.779
00:21:03.779 Latency(us)
00:21:03.779 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:03.779 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:03.779 Verification LBA range: start 0x0 length 0x4000
00:21:03.779 NVMe0n1 : 1.01 18182.00 71.02 0.00 0.00 7000.94 2493.22 14702.86
00:21:03.779 ===================================================================================================================
00:21:03.779 Total : 18182.00 71.02 0.00 0.00 7000.94 2493.22 14702.86
00:21:03.779 11:44:34 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:03.779 11:44:34 -- host/failover.sh@95 -- # grep -q NVMe0
00:21:03.779 11:44:34 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:04.038 11:44:34 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:04.038 11:44:34 -- host/failover.sh@99 -- # grep -q NVMe0
00:21:04.296 11:44:34 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:04.554 11:44:35 -- host/failover.sh@101 -- # sleep 3
00:21:07.850 11:44:38 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:07.850 11:44:38 -- host/failover.sh@103 -- # grep -q NVMe0
00:21:07.850 11:44:38 -- host/failover.sh@108 -- # killprocess 3092552
00:21:07.850 11:44:38 -- common/autotest_common.sh@946 -- # '[' -z 3092552 ']'
00:21:07.850 11:44:38 -- common/autotest_common.sh@950 -- # kill -0 3092552
00:21:07.850 11:44:38 -- common/autotest_common.sh@951 -- # uname
00:21:07.850 11:44:38 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:21:07.850 11:44:38 -- common/autotest_common.sh@952 -- # ps --no-headers -o
comm= 3092552 00:21:07.850 11:44:38 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:07.850 11:44:38 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:07.850 11:44:38 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3092552' 00:21:07.850 killing process with pid 3092552 00:21:07.850 11:44:38 -- common/autotest_common.sh@965 -- # kill 3092552 00:21:07.850 11:44:38 -- common/autotest_common.sh@970 -- # wait 3092552 00:21:07.850 11:44:38 -- host/failover.sh@110 -- # sync 00:21:07.850 11:44:38 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:08.109 11:44:38 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:08.109 11:44:38 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:08.109 11:44:38 -- host/failover.sh@116 -- # nvmftestfini 00:21:08.109 11:44:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:08.109 11:44:38 -- nvmf/common.sh@117 -- # sync 00:21:08.109 11:44:38 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:08.109 11:44:38 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:08.109 11:44:38 -- nvmf/common.sh@120 -- # set +e 00:21:08.109 11:44:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:08.109 11:44:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:08.109 rmmod nvme_rdma 00:21:08.109 rmmod nvme_fabrics 00:21:08.109 11:44:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:08.109 11:44:38 -- nvmf/common.sh@124 -- # set -e 00:21:08.109 11:44:38 -- nvmf/common.sh@125 -- # return 0 00:21:08.109 11:44:38 -- nvmf/common.sh@478 -- # '[' -n 3089994 ']' 00:21:08.109 11:44:38 -- nvmf/common.sh@479 -- # killprocess 3089994 00:21:08.109 11:44:38 -- common/autotest_common.sh@946 -- # '[' -z 3089994 ']' 00:21:08.109 11:44:38 -- common/autotest_common.sh@950 -- # kill -0 3089994 00:21:08.109 11:44:38 -- common/autotest_common.sh@951 -- # uname 00:21:08.109 11:44:38 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:08.109 11:44:38 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3089994 00:21:08.109 11:44:38 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:08.109 11:44:38 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:08.109 11:44:38 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3089994' 00:21:08.109 killing process with pid 3089994 00:21:08.109 11:44:38 -- common/autotest_common.sh@965 -- # kill 3089994 00:21:08.109 [2024-05-15 11:44:38.843379] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:08.109 11:44:38 -- common/autotest_common.sh@970 -- # wait 3089994 00:21:08.368 [2024-05-15 11:44:38.911567] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:21:08.695 11:44:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:08.695 11:44:39 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:21:08.695 00:21:08.695 real 0m36.737s 00:21:08.695 user 2m4.266s 00:21:08.695 sys 0m7.038s 00:21:08.695 11:44:39 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:08.695 11:44:39 -- common/autotest_common.sh@10 -- # set +x 00:21:08.695 ************************************ 00:21:08.695 END TEST nvmf_failover 00:21:08.695 ************************************ 00:21:08.695 11:44:39 -- 
nvmf/nvmf.sh@99 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:21:08.695 11:44:39 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:08.695 11:44:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:08.695 11:44:39 -- common/autotest_common.sh@10 -- # set +x 00:21:08.695 ************************************ 00:21:08.695 START TEST nvmf_host_discovery 00:21:08.695 ************************************ 00:21:08.695 11:44:39 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:21:08.695 * Looking for test storage... 00:21:08.695 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:08.695 11:44:39 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:08.695 11:44:39 -- nvmf/common.sh@7 -- # uname -s 00:21:08.695 11:44:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:08.695 11:44:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:08.695 11:44:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:08.695 11:44:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:08.695 11:44:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:08.695 11:44:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:08.695 11:44:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:08.695 11:44:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:08.695 11:44:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:08.695 11:44:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:08.695 11:44:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:21:08.695 11:44:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:21:08.695 11:44:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:08.695 11:44:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:08.695 11:44:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:08.695 11:44:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:08.695 11:44:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:08.695 11:44:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:08.695 11:44:39 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:08.695 11:44:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:08.695 11:44:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.695 11:44:39 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.695 11:44:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.695 11:44:39 -- paths/export.sh@5 -- # export PATH 00:21:08.695 11:44:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.695 11:44:39 -- nvmf/common.sh@47 -- # : 0 00:21:08.695 11:44:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:08.695 11:44:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:08.695 11:44:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:08.695 11:44:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:08.695 11:44:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:08.695 11:44:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:08.695 11:44:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:08.695 11:44:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:08.695 11:44:39 -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:21:08.695 11:44:39 -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:21:08.695 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
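The discovery suite is a deliberate no-op on this transport: after sourcing its environment, the script compares the requested transport against rdma and bails out before ever starting a target. The guard amounts to the following sketch (the variable name is an assumption; the literal comparison and message are the ones in the trace):

  # early-exit guard, as traced above (hypothetical variable name)
  if [ "$TEST_TRANSPORT" = rdma ]; then
      echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
      exit 0
  fi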
00:21:08.695 11:44:39 -- host/discovery.sh@13 -- # exit 0 00:21:08.695 00:21:08.695 real 0m0.105s 00:21:08.695 user 0m0.042s 00:21:08.695 sys 0m0.071s 00:21:08.695 11:44:39 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:08.695 11:44:39 -- common/autotest_common.sh@10 -- # set +x 00:21:08.695 ************************************ 00:21:08.695 END TEST nvmf_host_discovery 00:21:08.695 ************************************ 00:21:08.695 11:44:39 -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:21:08.695 11:44:39 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:08.695 11:44:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:08.695 11:44:39 -- common/autotest_common.sh@10 -- # set +x 00:21:08.695 ************************************ 00:21:08.695 START TEST nvmf_host_multipath_status 00:21:08.695 ************************************ 00:21:08.695 11:44:39 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:21:08.980 * Looking for test storage... 00:21:08.980 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:08.980 11:44:39 -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:08.980 11:44:39 -- nvmf/common.sh@7 -- # uname -s 00:21:08.980 11:44:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:08.980 11:44:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:08.980 11:44:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:08.980 11:44:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:08.980 11:44:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:08.980 11:44:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:08.980 11:44:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:08.980 11:44:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:08.980 11:44:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:08.980 11:44:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:08.980 11:44:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:21:08.980 11:44:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:21:08.980 11:44:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:08.980 11:44:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:08.980 11:44:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:08.980 11:44:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:08.980 11:44:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:08.980 11:44:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:08.980 11:44:39 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:08.980 11:44:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:08.980 11:44:39 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.980 11:44:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.980 11:44:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.980 11:44:39 -- paths/export.sh@5 -- # export PATH 00:21:08.980 11:44:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.980 11:44:39 -- nvmf/common.sh@47 -- # : 0 00:21:08.980 11:44:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:08.980 11:44:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:08.980 11:44:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:08.980 11:44:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:08.980 11:44:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:08.980 11:44:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:08.980 11:44:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:08.980 11:44:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:08.980 11:44:39 -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:08.980 11:44:39 -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:08.980 11:44:39 -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:08.980 11:44:39 -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:21:08.980 11:44:39 -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:08.981 11:44:39 -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:08.981 11:44:39 -- 
host/multipath_status.sh@31 -- # nvmftestinit 00:21:08.981 11:44:39 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:21:08.981 11:44:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:08.981 11:44:39 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:08.981 11:44:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:08.981 11:44:39 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:08.981 11:44:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.981 11:44:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:08.981 11:44:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.981 11:44:39 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:08.981 11:44:39 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:08.981 11:44:39 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:08.981 11:44:39 -- common/autotest_common.sh@10 -- # set +x 00:21:15.549 11:44:45 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:15.549 11:44:45 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:15.549 11:44:45 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:15.549 11:44:45 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:15.549 11:44:45 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:15.549 11:44:45 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:15.549 11:44:45 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:15.549 11:44:45 -- nvmf/common.sh@295 -- # net_devs=() 00:21:15.549 11:44:45 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:15.549 11:44:45 -- nvmf/common.sh@296 -- # e810=() 00:21:15.549 11:44:45 -- nvmf/common.sh@296 -- # local -ga e810 00:21:15.549 11:44:45 -- nvmf/common.sh@297 -- # x722=() 00:21:15.549 11:44:45 -- nvmf/common.sh@297 -- # local -ga x722 00:21:15.549 11:44:45 -- nvmf/common.sh@298 -- # mlx=() 00:21:15.549 11:44:45 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:15.549 11:44:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:15.549 11:44:45 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:15.549 11:44:45 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:15.549 11:44:45 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:15.549 11:44:45 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:15.549 11:44:45 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:15.549 11:44:45 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:15.549 11:44:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:15.549 11:44:45 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:15.549 11:44:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:15.549 11:44:45 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:15.549 11:44:45 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:15.549 11:44:45 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:15.549 11:44:45 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:15.549 11:44:45 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:15.549 11:44:45 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:15.549 11:44:45 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:15.549 11:44:45 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:15.549 11:44:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:15.549 11:44:45 -- nvmf/common.sh@341 -- # echo 'Found 
0000:18:00.0 (0x15b3 - 0x1015)' 00:21:15.549 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:21:15.549 11:44:45 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:15.550 11:44:45 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:15.550 11:44:45 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:15.550 11:44:45 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:15.550 11:44:45 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:15.550 11:44:45 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:15.550 11:44:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:15.550 11:44:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:21:15.550 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:21:15.550 11:44:45 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:15.550 11:44:45 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:15.550 11:44:45 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:15.550 11:44:45 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:15.550 11:44:45 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:15.550 11:44:45 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:15.550 11:44:45 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:15.550 11:44:45 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:15.550 11:44:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:15.550 11:44:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:15.550 11:44:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:15.550 11:44:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:15.550 11:44:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:21:15.550 Found net devices under 0000:18:00.0: mlx_0_0 00:21:15.550 11:44:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:15.550 11:44:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:15.550 11:44:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:15.550 11:44:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:15.550 11:44:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:15.550 11:44:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:21:15.550 Found net devices under 0000:18:00.1: mlx_0_1 00:21:15.550 11:44:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:15.550 11:44:45 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:15.550 11:44:45 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:15.550 11:44:45 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:15.550 11:44:45 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:21:15.550 11:44:45 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:21:15.550 11:44:45 -- nvmf/common.sh@409 -- # rdma_device_init 00:21:15.550 11:44:45 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:21:15.550 11:44:45 -- nvmf/common.sh@58 -- # uname 00:21:15.550 11:44:45 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:15.550 11:44:45 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:15.550 11:44:45 -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:15.550 11:44:45 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:15.550 11:44:45 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:15.550 11:44:45 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:15.550 11:44:45 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:15.550 11:44:45 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:15.550 11:44:45 -- 
nvmf/common.sh@491 -- # allocate_nic_ips 00:21:15.550 11:44:45 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:15.550 11:44:45 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:15.550 11:44:45 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:15.550 11:44:45 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:15.550 11:44:45 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:15.550 11:44:45 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:15.550 11:44:45 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:15.550 11:44:45 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:15.550 11:44:45 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:15.550 11:44:45 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:15.550 11:44:45 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:15.550 11:44:45 -- nvmf/common.sh@105 -- # continue 2 00:21:15.550 11:44:45 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:15.550 11:44:45 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:15.550 11:44:45 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:15.550 11:44:45 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:15.550 11:44:45 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:15.550 11:44:45 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:15.550 11:44:45 -- nvmf/common.sh@105 -- # continue 2 00:21:15.550 11:44:45 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:15.550 11:44:45 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:15.550 11:44:45 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:15.550 11:44:45 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:15.550 11:44:45 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:15.550 11:44:45 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:15.550 11:44:45 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:15.550 11:44:45 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:15.550 11:44:45 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:15.550 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:15.550 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:21:15.550 altname enp24s0f0np0 00:21:15.550 altname ens785f0np0 00:21:15.550 inet 192.168.100.8/24 scope global mlx_0_0 00:21:15.550 valid_lft forever preferred_lft forever 00:21:15.550 11:44:45 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:15.550 11:44:45 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:15.550 11:44:45 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:15.550 11:44:45 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:15.550 11:44:45 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:15.550 11:44:45 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:15.550 11:44:45 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:15.550 11:44:45 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:15.550 11:44:45 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:15.550 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:15.550 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:21:15.550 altname enp24s0f1np1 00:21:15.550 altname ens785f1np1 00:21:15.550 inet 192.168.100.9/24 scope global mlx_0_1 00:21:15.550 valid_lft forever preferred_lft forever 00:21:15.550 11:44:45 -- nvmf/common.sh@411 -- # return 0 00:21:15.550 11:44:45 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:15.550 
11:44:45 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:21:15.550 11:44:45 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]]
00:21:15.550 11:44:45 -- nvmf/common.sh@445 -- # get_available_rdma_ips
00:21:15.550 11:44:45 -- nvmf/common.sh@86 -- # get_rdma_if_list
00:21:15.550 11:44:45 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs
00:21:15.550 11:44:45 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs
00:21:15.550 11:44:45 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net
00:21:15.550 11:44:45 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:21:15.550 11:44:45 -- nvmf/common.sh@96 -- # (( 2 == 0 ))
00:21:15.550 11:44:45 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:21:15.550 11:44:45 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:21:15.550 11:44:45 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:21:15.550 11:44:45 -- nvmf/common.sh@104 -- # echo mlx_0_0
00:21:15.550 11:44:45 -- nvmf/common.sh@105 -- # continue 2
00:21:15.550 11:44:45 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:21:15.550 11:44:45 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:21:15.550 11:44:45 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:21:15.550 11:44:45 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:21:15.550 11:44:45 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:21:15.550 11:44:45 -- nvmf/common.sh@104 -- # echo mlx_0_1
00:21:15.550 11:44:45 -- nvmf/common.sh@105 -- # continue 2
00:21:15.550 11:44:45 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list)
00:21:15.550 11:44:45 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0
00:21:15.550 11:44:45 -- nvmf/common.sh@112 -- # interface=mlx_0_0
00:21:15.550 11:44:45 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0
00:21:15.550 11:44:45 -- nvmf/common.sh@113 -- # awk '{print $4}'
00:21:15.550 11:44:45 -- nvmf/common.sh@113 -- # cut -d/ -f1
00:21:15.550 11:44:45 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list)
00:21:15.550 11:44:45 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1
00:21:15.550 11:44:45 -- nvmf/common.sh@112 -- # interface=mlx_0_1
00:21:15.550 11:44:45 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1
00:21:15.550 11:44:45 -- nvmf/common.sh@113 -- # awk '{print $4}'
00:21:15.550 11:44:45 -- nvmf/common.sh@113 -- # cut -d/ -f1
00:21:15.550 11:44:45 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8
00:21:15.550 192.168.100.9'
00:21:15.550 11:44:45 -- nvmf/common.sh@446 -- # echo '192.168.100.8
00:21:15.550 192.168.100.9'
00:21:15.550 11:44:45 -- nvmf/common.sh@446 -- # head -n 1
00:21:15.550 11:44:45 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:21:15.550 11:44:45 -- nvmf/common.sh@447 -- # echo '192.168.100.8
00:21:15.550 192.168.100.9'
00:21:15.550 11:44:45 -- nvmf/common.sh@447 -- # head -n 1
00:21:15.550 11:44:45 -- nvmf/common.sh@447 -- # tail -n +2
00:21:15.550 11:44:45 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:21:15.550 11:44:45 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']'
00:21:15.550 11:44:45 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:21:15.550 11:44:45 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']'
00:21:15.550 11:44:45 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']'
00:21:15.550 11:44:45 -- nvmf/common.sh@463 -- # modprobe nvme-rdma
00:21:15.550 11:44:45 -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3
00:21:15.550 11:44:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:21:15.550 11:44:45 -- common/autotest_common.sh@720 -- # xtrace_disable
00:21:15.550 11:44:45 -- common/autotest_common.sh@10 -- # set +x
00:21:15.550 11:44:45 -- nvmf/common.sh@470 -- # nvmfpid=3096978
00:21:15.550 11:44:45 -- nvmf/common.sh@471 -- # waitforlisten 3096978
00:21:15.550 11:44:45 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:21:15.550 11:44:45 -- common/autotest_common.sh@827 -- # '[' -z 3096978 ']'
00:21:15.550 11:44:45 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:15.550 11:44:45 -- common/autotest_common.sh@832 -- # local max_retries=100
00:21:15.550 11:44:45 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:15.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:15.550 11:44:45 -- common/autotest_common.sh@836 -- # xtrace_disable
00:21:15.550 11:44:45 -- common/autotest_common.sh@10 -- # set +x
00:21:15.550 [2024-05-15 11:44:45.886035] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization...
00:21:15.550 [2024-05-15 11:44:45.886103] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:15.551 EAL: No free 2048 kB hugepages reported on node 1
00:21:15.551 [2024-05-15 11:44:45.957108] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2
00:21:15.551 [2024-05-15 11:44:46.044648] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:15.551 [2024-05-15 11:44:46.044689] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:15.551 [2024-05-15 11:44:46.044699] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:15.551 [2024-05-15 11:44:46.044724] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:15.551 [2024-05-15 11:44:46.044731] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:15.551 [2024-05-15 11:44:46.044784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:21:15.551 [2024-05-15 11:44:46.044788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:21:16.119 11:44:46 -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:21:16.119 11:44:46 -- common/autotest_common.sh@860 -- # return 0
00:21:16.119 11:44:46 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:21:16.119 11:44:46 -- common/autotest_common.sh@726 -- # xtrace_disable
00:21:16.119 11:44:46 -- common/autotest_common.sh@10 -- # set +x
00:21:16.119 11:44:46 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:16.119 11:44:46 -- host/multipath_status.sh@34 -- # nvmfapp_pid=3096978
00:21:16.119 11:44:46 -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:21:16.378 [2024-05-15 11:44:46.902646] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6ec930/0x6f0e20) succeed.
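In plain shell, the path discovery and target bring-up traced above reduce to the following condensed sketch, reconstructed from this trace (paths are given relative to the spdk checkout rather than the full /var/jenkins workspace):

  # Resolve the IPv4 address of each mlx5 RDMA interface (values as logged)
  NVMF_FIRST_TARGET_IP=$(ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1)   # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1)  # 192.168.100.9
  modprobe nvme-rdma
  # Two cores (-m 0x3), all tracepoint groups (-e 0xFFFF), shared-memory id 0 (-i 0)
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192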
00:21:16.378 [2024-05-15 11:44:46.912004] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6ede30/0x7324b0) succeed.
00:21:16.378 11:44:46 -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:21:16.636 Malloc0
00:21:16.636 11:44:47 -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
00:21:16.636 11:44:47 -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:21:16.895 11:44:47 -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:21:17.155 [2024-05-15 11:44:47.700074] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:21:17.155 [2024-05-15 11:44:47.700414] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:21:17.155 11:44:47 -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
00:21:17.155 [2024-05-15 11:44:47.880687] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 ***
00:21:17.155 11:44:47 -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
00:21:17.155 11:44:47 -- host/multipath_status.sh@45 -- # bdevperf_pid=3097355
00:21:17.155 11:44:47 -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:21:17.155 11:44:47 -- host/multipath_status.sh@48 -- # waitforlisten 3097355 /var/tmp/bdevperf.sock
00:21:17.155 11:44:47 -- common/autotest_common.sh@827 -- # '[' -z 3097355 ']'
00:21:17.155 11:44:47 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:17.155 11:44:47 -- common/autotest_common.sh@832 -- # local max_retries=100
00:21:17.155 11:44:47 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
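The storage stack assembled here is deliberately minimal: one 64 MB malloc bdev (512-byte blocks) behind a single subsystem that listens on two RDMA ports of the same IP, which is what gives the initiator two independently ANA-controllable paths to one namespace. Condensed, under the same sketch conventions as above:

  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
  # bdevperf is the initiator-side workload; -z makes it idle until the
  # perform_tests RPC (sent later via bdevperf.py) starts the 90 s verify run
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &

The two bdev_nvme_attach_controller calls that follow (one per port, the second with -x multipath) then merge both paths into the single Nvme0n1 bdev that every check below interrogates.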
00:21:17.155 11:44:47 -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:17.155 11:44:47 -- common/autotest_common.sh@10 -- # set +x 00:21:18.092 11:44:48 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:18.092 11:44:48 -- common/autotest_common.sh@860 -- # return 0 00:21:18.092 11:44:48 -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:18.350 11:44:48 -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:21:18.609 Nvme0n1 00:21:18.609 11:44:49 -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:18.868 Nvme0n1 00:21:18.868 11:44:49 -- host/multipath_status.sh@78 -- # sleep 2 00:21:18.868 11:44:49 -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:21.010 11:44:51 -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:21:21.010 11:44:51 -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:21:21.010 11:44:51 -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:21:21.268 11:44:51 -- host/multipath_status.sh@91 -- # sleep 1 00:21:22.205 11:44:52 -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:21:22.205 11:44:52 -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:22.205 11:44:52 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:22.205 11:44:52 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:22.464 11:44:52 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:22.464 11:44:52 -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:22.464 11:44:52 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:22.464 11:44:52 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:22.464 11:44:53 -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:22.464 11:44:53 -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:22.464 11:44:53 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:22.464 11:44:53 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:22.723 11:44:53 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:22.723 11:44:53 -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:22.723 
11:44:53 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:22.723 11:44:53 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:22.982 11:44:53 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:22.982 11:44:53 -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:22.982 11:44:53 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:22.982 11:44:53 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:22.982 11:44:53 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:22.982 11:44:53 -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:23.241 11:44:53 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:23.241 11:44:53 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:23.241 11:44:53 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:23.241 11:44:53 -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:21:23.241 11:44:53 -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:21:23.500 11:44:54 -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:21:23.759 11:44:54 -- host/multipath_status.sh@95 -- # sleep 1 00:21:24.694 11:44:55 -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:21:24.694 11:44:55 -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:24.694 11:44:55 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:24.694 11:44:55 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:24.952 11:44:55 -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:24.952 11:44:55 -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:24.952 11:44:55 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:24.952 11:44:55 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:24.952 11:44:55 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:24.952 11:44:55 -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:24.952 11:44:55 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:24.952 11:44:55 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:25.212 11:44:55 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:25.212 11:44:55 -- host/multipath_status.sh@71 -- 
# port_status 4421 connected true 00:21:25.212 11:44:55 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:25.212 11:44:55 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:25.212 11:44:55 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:25.212 11:44:55 -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:25.212 11:44:55 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:25.212 11:44:55 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:25.471 11:44:56 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:25.471 11:44:56 -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:25.471 11:44:56 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:25.471 11:44:56 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:25.729 11:44:56 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:25.729 11:44:56 -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:21:25.729 11:44:56 -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:21:25.988 11:44:56 -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:21:25.988 11:44:56 -- host/multipath_status.sh@101 -- # sleep 1 00:21:27.366 11:44:57 -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:21:27.366 11:44:57 -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:27.366 11:44:57 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:27.366 11:44:57 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:27.366 11:44:57 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:27.366 11:44:57 -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:27.366 11:44:57 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:27.366 11:44:57 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:27.366 11:44:58 -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:27.366 11:44:58 -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:27.366 11:44:58 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:27.366 11:44:58 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:27.625 11:44:58 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e 
]] 00:21:27.625 11:44:58 -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:27.625 11:44:58 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:27.625 11:44:58 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:27.884 11:44:58 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:27.884 11:44:58 -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:27.884 11:44:58 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:27.884 11:44:58 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:27.884 11:44:58 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:27.884 11:44:58 -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:27.884 11:44:58 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:27.884 11:44:58 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:28.143 11:44:58 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:28.143 11:44:58 -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:21:28.143 11:44:58 -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:21:28.402 11:44:58 -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:21:28.402 11:44:59 -- host/multipath_status.sh@105 -- # sleep 1 00:21:29.778 11:45:00 -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:21:29.778 11:45:00 -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:29.778 11:45:00 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:29.778 11:45:00 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:29.778 11:45:00 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:29.778 11:45:00 -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:29.778 11:45:00 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:29.778 11:45:00 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:29.778 11:45:00 -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:29.778 11:45:00 -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:29.778 11:45:00 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:29.778 11:45:00 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:30.037 
11:45:00 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:30.037 11:45:00 -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:30.037 11:45:00 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:30.037 11:45:00 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:30.296 11:45:00 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:30.296 11:45:00 -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:30.296 11:45:00 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:30.296 11:45:00 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:30.555 11:45:01 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:30.555 11:45:01 -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:30.555 11:45:01 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:30.555 11:45:01 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:30.555 11:45:01 -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:30.555 11:45:01 -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:21:30.555 11:45:01 -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:21:30.813 11:45:01 -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:21:31.072 11:45:01 -- host/multipath_status.sh@109 -- # sleep 1 00:21:32.008 11:45:02 -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:21:32.008 11:45:02 -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:32.008 11:45:02 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:32.008 11:45:02 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:32.267 11:45:02 -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:32.267 11:45:02 -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:32.267 11:45:02 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:32.267 11:45:02 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:32.538 11:45:03 -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:32.538 11:45:03 -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:32.538 11:45:03 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:32.539 11:45:03 -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:32.539 11:45:03 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:32.539 11:45:03 -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:32.539 11:45:03 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:32.539 11:45:03 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:32.804 11:45:03 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:32.804 11:45:03 -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:21:32.804 11:45:03 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:32.804 11:45:03 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:33.062 11:45:03 -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:33.062 11:45:03 -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:33.062 11:45:03 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:33.062 11:45:03 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:33.062 11:45:03 -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:33.062 11:45:03 -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:21:33.062 11:45:03 -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:21:33.321 11:45:03 -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:21:33.580 11:45:04 -- host/multipath_status.sh@113 -- # sleep 1 00:21:34.517 11:45:05 -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:21:34.517 11:45:05 -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:34.517 11:45:05 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:34.517 11:45:05 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:34.776 11:45:05 -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:34.776 11:45:05 -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:34.776 11:45:05 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:34.776 11:45:05 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:35.035 11:45:05 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:35.035 11:45:05 -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:35.035 11:45:05 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:21:35.035 11:45:05 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:35.035 11:45:05 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:35.035 11:45:05 -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:35.035 11:45:05 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:35.035 11:45:05 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:35.293 11:45:05 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:35.294 11:45:05 -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:21:35.294 11:45:05 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:35.294 11:45:05 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:35.553 11:45:06 -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:35.553 11:45:06 -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:35.553 11:45:06 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:35.553 11:45:06 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:35.553 11:45:06 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:35.553 11:45:06 -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:21:35.812 11:45:06 -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:21:35.812 11:45:06 -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:21:36.108 11:45:06 -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:21:36.391 11:45:06 -- host/multipath_status.sh@120 -- # sleep 1 00:21:37.329 11:45:07 -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:21:37.329 11:45:07 -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:37.329 11:45:07 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:37.329 11:45:07 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:37.329 11:45:08 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:37.329 11:45:08 -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:37.329 11:45:08 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:37.329 11:45:08 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:37.588 11:45:08 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
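The ANA sweep that fills the rest of the run is driven by two small helpers; the following is an approximate reconstruction from this trace (the real definitions live in test/nvmf/host/multipath_status.sh, and the nqn/address values are the ones logged above):

  set_ANA_state() {  # $1 -> ANA state for port 4420, $2 -> for port 4421
    scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t rdma -a 192.168.100.8 -s 4420 -n "$1"
    scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t rdma -a 192.168.100.8 -s 4421 -n "$2"
  }
  port_status() {    # assert io_path attribute $2 of port $1 equals $3
    local got
    got=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
          | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
    [[ "$got" == "$3" ]]
  }

Note the effect of the bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active call just above: under the default active_passive policy only one path at a time reported current == true, whereas from this point on two optimized paths are both current at once, which is exactly what check_status true true true true true true asserts.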
00:21:37.588 11:45:08 -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:37.588 11:45:08 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:37.588 11:45:08 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:37.847 11:45:08 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:37.847 11:45:08 -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:37.847 11:45:08 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:37.847 11:45:08 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:38.107 11:45:08 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:38.107 11:45:08 -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:38.107 11:45:08 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:38.107 11:45:08 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:38.107 11:45:08 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:38.107 11:45:08 -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:38.107 11:45:08 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:38.107 11:45:08 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:38.365 11:45:08 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:38.365 11:45:08 -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:21:38.365 11:45:08 -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:21:38.624 11:45:09 -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:21:38.624 11:45:09 -- host/multipath_status.sh@124 -- # sleep 1 00:21:40.004 11:45:10 -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:21:40.004 11:45:10 -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:40.004 11:45:10 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:40.004 11:45:10 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:40.004 11:45:10 -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:40.004 11:45:10 -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:40.004 11:45:10 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:40.004 11:45:10 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:40.004 11:45:10 -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:40.004 11:45:10 -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:40.004 11:45:10 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:40.004 11:45:10 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:40.263 11:45:10 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:40.263 11:45:10 -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:40.263 11:45:10 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:40.263 11:45:10 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:40.522 11:45:11 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:40.522 11:45:11 -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:40.522 11:45:11 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:40.522 11:45:11 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:40.522 11:45:11 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:40.522 11:45:11 -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:40.780 11:45:11 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:40.780 11:45:11 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:40.780 11:45:11 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:40.780 11:45:11 -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:21:40.780 11:45:11 -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:21:41.042 11:45:11 -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:21:41.303 11:45:11 -- host/multipath_status.sh@130 -- # sleep 1 00:21:42.239 11:45:12 -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:21:42.239 11:45:12 -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:42.239 11:45:12 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:42.239 11:45:12 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:42.498 11:45:13 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:42.498 11:45:13 -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:42.498 11:45:13 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:42.498 11:45:13 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").current' 00:21:42.498 11:45:13 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:42.498 11:45:13 -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:42.498 11:45:13 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:42.498 11:45:13 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:42.757 11:45:13 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:42.757 11:45:13 -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:42.757 11:45:13 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:42.757 11:45:13 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:43.016 11:45:13 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:43.016 11:45:13 -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:43.016 11:45:13 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:43.016 11:45:13 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:43.016 11:45:13 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:43.016 11:45:13 -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:43.016 11:45:13 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:43.016 11:45:13 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:43.274 11:45:13 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:43.274 11:45:13 -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:21:43.274 11:45:13 -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:21:43.533 11:45:14 -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:21:43.792 11:45:14 -- host/multipath_status.sh@134 -- # sleep 1 00:21:44.729 11:45:15 -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:21:44.729 11:45:15 -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:44.729 11:45:15 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:44.729 11:45:15 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:44.987 11:45:15 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:44.987 11:45:15 -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:44.987 11:45:15 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:44.987 11:45:15 -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:21:44.987 11:45:15 -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:21:44.987 11:45:15 -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:21:44.987 11:45:15 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:44.987 11:45:15 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:21:45.246 11:45:15 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:45.246 11:45:15 -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:21:45.246 11:45:15 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:45.246 11:45:15 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:21:45.504 11:45:16 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:45.504 11:45:16 -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:21:45.504 11:45:16 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:45.504 11:45:16 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:21:45.504 11:45:16 -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:45.504 11:45:16 -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:21:45.504 11:45:16 -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:45.504 11:45:16 -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:21:45.763 11:45:16 -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:21:45.763 11:45:16 -- host/multipath_status.sh@137 -- # killprocess 3097355
00:21:45.763 11:45:16 -- common/autotest_common.sh@946 -- # '[' -z 3097355 ']'
00:21:45.763 11:45:16 -- common/autotest_common.sh@950 -- # kill -0 3097355
00:21:45.763 11:45:16 -- common/autotest_common.sh@951 -- # uname
00:21:45.763 11:45:16 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:21:45.763 11:45:16 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3097355
00:21:45.763 11:45:16 -- common/autotest_common.sh@952 -- # process_name=reactor_2
00:21:45.763 11:45:16 -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']'
00:21:45.763 11:45:16 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3097355'
killing process with pid 3097355
00:21:45.763 11:45:16 -- common/autotest_common.sh@965 -- # kill 3097355
00:21:45.763 11:45:16 -- common/autotest_common.sh@970 -- # wait 3097355
00:21:46.025 Connection closed with partial response:
00:21:46.025
00:21:46.025
00:21:46.025 11:45:16 -- host/multipath_status.sh@139 -- # wait 3097355
00:21:46.025 11:45:16 -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:21:46.025 [2024-05-15 11:44:47.932143] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization...
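Everything from the cat above to the end of this section replays the bdevperf-side log captured in try.txt. The paired nvme_qpair NOTICE lines it contains are per-command dumps: each READ/WRITE submission is followed by its completion, here carrying ASYMMETRIC ACCESS INACCESSIBLE (status code type 03h, status code 02h), the ANA status the target returns while the listener being exercised is in the inaccessible state; the multipath bdev treats these as retryable path errors and steers the I/O to the other path rather than failing it. A quick way to gauge how often that happened over the run (purely illustrative):

  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt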
00:21:46.025 [2024-05-15 11:44:47.932208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3097355 ] 00:21:46.025 EAL: No free 2048 kB hugepages reported on node 1 00:21:46.025 [2024-05-15 11:44:47.999577] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.025 [2024-05-15 11:44:48.080263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:46.025 Running I/O for 90 seconds... 00:21:46.025 [2024-05-15 11:45:01.455531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:84760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x187000 00:21:46.025 [2024-05-15 11:45:01.455577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:46.025 [2024-05-15 11:45:01.455624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x187000 00:21:46.025 [2024-05-15 11:45:01.455636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:46.025 [2024-05-15 11:45:01.455649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:84776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x187000 00:21:46.025 [2024-05-15 11:45:01.455660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:46.025 [2024-05-15 11:45:01.455672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:84784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x187000 00:21:46.025 [2024-05-15 11:45:01.455681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:46.025 [2024-05-15 11:45:01.455694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:84792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x187000 00:21:46.025 [2024-05-15 11:45:01.455703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:46.025 [2024-05-15 11:45:01.455715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:84800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x187000 00:21:46.026 [2024-05-15 11:45:01.455725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:46.026 [2024-05-15 11:45:01.455738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:84808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x187000 00:21:46.026 [2024-05-15 11:45:01.455747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:46.026 [2024-05-15 11:45:01.455760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:84816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x187000 00:21:46.026 [2024-05-15 
11:45:01.455770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:46.026 [2024-05-15 11:45:01.455782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x187000 00:21:46.026 [2024-05-15 11:45:01.455791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:46.026 [2024-05-15 11:45:01.455803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:84832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x187000 00:21:46.026 [2024-05-15 11:45:01.455819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:46.026 [2024-05-15 11:45:01.455832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:84840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x187000 00:21:46.026 [2024-05-15 11:45:01.455842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:46.026 [2024-05-15 11:45:01.455854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:84848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x187000 00:21:46.026 [2024-05-15 11:45:01.455863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:46.026 [2024-05-15 11:45:01.455875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:84856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x187000 00:21:46.026 [2024-05-15 11:45:01.455885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:46.026 [2024-05-15 11:45:01.455898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x187000 00:21:46.026 [2024-05-15 11:45:01.455908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:46.026 [2024-05-15 11:45:01.455920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:84872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x187000 00:21:46.026 [2024-05-15 11:45:01.455929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:46.026 [2024-05-15 11:45:01.455942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:84880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x187000 00:21:46.026 [2024-05-15 11:45:01.455951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:46.026 [2024-05-15 11:45:01.455963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:84888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x187000 00:21:46.026 [2024-05-15 11:45:01.455973] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:46.026 [2024-05-15 11:45:01.455985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:84896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x187000 00:21:46.026 [2024-05-15 11:45:01.455994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:46.026 [2024-05-15 11:45:01.456006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:84904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x187000 00:21:46.026 [2024-05-15 11:45:01.456016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:46.026 [2024-05-15 11:45:01.456028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:84912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x187000 00:21:46.026 [2024-05-15 11:45:01.456037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:46.026 [2024-05-15 11:45:01.456049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:84920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x187000 00:21:46.026 [2024-05-15 11:45:01.456062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:46.026 [2024-05-15 11:45:01.456077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x187000 00:21:46.026 [2024-05-15 11:45:01.456086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:46.026 [2024-05-15 11:45:01.456099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:84936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x187000 00:21:46.026 [2024-05-15 11:45:01.456108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:46.026 [2024-05-15 11:45:01.456120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:84944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x187000 00:21:46.026 [2024-05-15 11:45:01.456130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:46.026 [2024-05-15 11:45:01.456141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:84952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x187000 00:21:46.026 [2024-05-15 11:45:01.456151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:46.026 [2024-05-15 11:45:01.456162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:84960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x187000 00:21:46.026 [2024-05-15 11:45:01.456172] nvme_qpair.c: 474:spdk_nvme_print_completion: 
00:21:46.026 [2024-05-15 11:45:01.456184 - 11:45:14.297362] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated READ/WRITE entries on qid:1 (nsid:1, len:8, lba 84968-112440), every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 [several hundred near-identical command/completion pairs omitted]
00:21:46.032 Received shutdown signal, test time was about 26.871058 seconds
00:21:46.032 
00:21:46.032                                                                                                 Latency(us)
00:21:46.032 Device Information                     : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:21:46.032 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:46.032 	 Verification LBA range: start 0x0 length 0x4000
00:21:46.032 	 Nvme0n1                               :      26.87   16060.93      62.74       0.00     0.00    7950.24    1517.30 3019898.88
00:21:46.032 ===================================================================================================================
00:21:46.032 Total                                  :              16060.93      62.74       0.00     0.00    7950.24    1517.30 3019898.88
00:21:46.032 11:45:16 -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:21:46.291 11:45:16 -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:21:46.291 11:45:16 -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:21:46.291 11:45:16 -- host/multipath_status.sh@148 -- # nvmftestfini
00:21:46.291 11:45:16 -- nvmf/common.sh@477 -- # nvmfcleanup
00:21:46.291 11:45:16 -- nvmf/common.sh@117 -- # sync
00:21:46.291 11:45:16 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:21:46.291 11:45:16 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:21:46.291 11:45:16 -- nvmf/common.sh@120 -- # set +e
00:21:46.291 11:45:16 -- nvmf/common.sh@121 -- # for i in {1..20}
00:21:46.291 11:45:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
11:45:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
11:45:16 -- nvmf/common.sh@124 -- # set -e 00:21:46.291 11:45:16 -- nvmf/common.sh@125 -- # return 0 00:21:46.291 11:45:16 -- nvmf/common.sh@478 -- # '[' -n 3096978 ']' 00:21:46.291 11:45:16 -- nvmf/common.sh@479 -- # killprocess 3096978 00:21:46.291 11:45:16 -- common/autotest_common.sh@946 -- # '[' -z 3096978 ']' 00:21:46.291 11:45:16 -- common/autotest_common.sh@950 -- # kill -0 3096978 00:21:46.291 11:45:16 -- common/autotest_common.sh@951 -- # uname 00:21:46.291 11:45:16 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:46.291 11:45:16 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3096978 00:21:46.291 11:45:17 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:46.291 11:45:17 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:46.291 11:45:17 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3096978' 00:21:46.291 killing process with pid 3096978 00:21:46.291 11:45:17 -- common/autotest_common.sh@965 -- # kill 3096978 00:21:46.291 [2024-05-15 11:45:17.001894] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:46.291 11:45:17 -- common/autotest_common.sh@970 -- # wait 3096978 00:21:46.291 [2024-05-15 11:45:17.054354] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:21:46.551 11:45:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:46.551 11:45:17 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:21:46.551 00:21:46.551 real 0m37.892s 00:21:46.551 user 1m47.579s 00:21:46.551 sys 0m9.198s 00:21:46.551 11:45:17 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:46.551 11:45:17 -- common/autotest_common.sh@10 -- # set +x 00:21:46.551 ************************************ 00:21:46.551 END TEST nvmf_host_multipath_status 00:21:46.551 ************************************ 00:21:46.811 11:45:17 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:21:46.811 11:45:17 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:46.811 11:45:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:46.811 11:45:17 -- common/autotest_common.sh@10 -- # set +x 00:21:46.811 ************************************ 00:21:46.811 START TEST nvmf_discovery_remove_ifc 00:21:46.811 ************************************ 00:21:46.811 11:45:17 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:21:46.811 * Looking for test storage... 
00:21:46.811 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:46.811 11:45:17 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:46.811 11:45:17 -- nvmf/common.sh@7 -- # uname -s 00:21:46.811 11:45:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:46.811 11:45:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:46.811 11:45:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:46.811 11:45:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:46.811 11:45:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:46.811 11:45:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:46.811 11:45:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:46.811 11:45:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:46.811 11:45:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:46.811 11:45:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:46.811 11:45:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:21:46.811 11:45:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:21:46.811 11:45:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:46.811 11:45:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:46.811 11:45:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:46.811 11:45:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:46.811 11:45:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:46.811 11:45:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:46.811 11:45:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:46.811 11:45:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:46.811 11:45:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.811 11:45:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.811 11:45:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.811 11:45:17 -- paths/export.sh@5 -- # export PATH 00:21:46.811 11:45:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.811 11:45:17 -- nvmf/common.sh@47 -- # : 0 00:21:46.811 11:45:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:46.811 11:45:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:46.811 11:45:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:46.811 11:45:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:46.811 11:45:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:46.811 11:45:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:46.811 11:45:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:46.811 11:45:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:46.811 11:45:17 -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:21:46.811 11:45:17 -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:21:46.811 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:21:46.811 11:45:17 -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:21:46.811 00:21:46.811 real 0m0.137s 00:21:46.811 user 0m0.072s 00:21:46.811 sys 0m0.074s 00:21:46.811 11:45:17 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:46.811 11:45:17 -- common/autotest_common.sh@10 -- # set +x 00:21:46.811 ************************************ 00:21:46.811 END TEST nvmf_discovery_remove_ifc 00:21:46.811 ************************************ 00:21:46.811 11:45:17 -- nvmf/nvmf.sh@102 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:21:46.811 11:45:17 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:46.811 11:45:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:46.811 11:45:17 -- common/autotest_common.sh@10 -- # set +x 00:21:46.811 ************************************ 00:21:46.811 START TEST nvmf_identify_kernel_target 00:21:46.811 ************************************ 00:21:46.811 11:45:17 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:21:47.071 * Looking for test storage... 
00:21:47.071 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:47.071 11:45:17 -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:47.071 11:45:17 -- nvmf/common.sh@7 -- # uname -s 00:21:47.071 11:45:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:47.071 11:45:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:47.071 11:45:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:47.071 11:45:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:47.071 11:45:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:47.071 11:45:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:47.071 11:45:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:47.071 11:45:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:47.071 11:45:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:47.071 11:45:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:47.071 11:45:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:21:47.071 11:45:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:21:47.071 11:45:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:47.071 11:45:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:47.071 11:45:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:47.071 11:45:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:47.071 11:45:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:47.071 11:45:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:47.071 11:45:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:47.071 11:45:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:47.071 11:45:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.071 11:45:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.071 11:45:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.071 11:45:17 -- paths/export.sh@5 -- # export PATH 00:21:47.071 11:45:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.071 11:45:17 -- nvmf/common.sh@47 -- # : 0 00:21:47.071 11:45:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:47.071 11:45:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:47.071 11:45:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:47.071 11:45:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:47.071 11:45:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:47.071 11:45:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:47.071 11:45:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:47.071 11:45:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:47.071 11:45:17 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:21:47.071 11:45:17 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:21:47.071 11:45:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:47.071 11:45:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:47.071 11:45:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:47.071 11:45:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:47.071 11:45:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.071 11:45:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:47.071 11:45:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.071 11:45:17 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:47.071 11:45:17 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:47.071 11:45:17 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:47.071 11:45:17 -- common/autotest_common.sh@10 -- # set +x 00:21:52.346 11:45:23 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:52.346 11:45:23 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:52.346 11:45:23 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:52.346 11:45:23 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:52.346 11:45:23 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:52.346 11:45:23 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:52.346 11:45:23 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:52.346 11:45:23 -- nvmf/common.sh@295 -- # net_devs=() 00:21:52.346 11:45:23 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:52.346 11:45:23 -- nvmf/common.sh@296 -- # e810=() 00:21:52.346 11:45:23 -- nvmf/common.sh@296 -- # local -ga e810 00:21:52.346 11:45:23 -- nvmf/common.sh@297 -- # 
x722=() 00:21:52.346 11:45:23 -- nvmf/common.sh@297 -- # local -ga x722 00:21:52.346 11:45:23 -- nvmf/common.sh@298 -- # mlx=() 00:21:52.346 11:45:23 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:52.346 11:45:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:52.346 11:45:23 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:52.346 11:45:23 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:52.346 11:45:23 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:52.346 11:45:23 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:52.346 11:45:23 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:52.346 11:45:23 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:52.346 11:45:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:52.346 11:45:23 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:52.346 11:45:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:52.346 11:45:23 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:52.346 11:45:23 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:52.346 11:45:23 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:52.346 11:45:23 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:52.346 11:45:23 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:52.346 11:45:23 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:52.346 11:45:23 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:52.346 11:45:23 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:52.346 11:45:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:52.346 11:45:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:21:52.346 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:21:52.346 11:45:23 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:52.346 11:45:23 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:52.346 11:45:23 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:52.346 11:45:23 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:52.346 11:45:23 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:52.346 11:45:23 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:52.346 11:45:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:52.346 11:45:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:21:52.346 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:21:52.346 11:45:23 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:52.346 11:45:23 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:52.346 11:45:23 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:52.346 11:45:23 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:52.346 11:45:23 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:52.346 11:45:23 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:52.346 11:45:23 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:52.346 11:45:23 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:52.346 11:45:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:52.346 11:45:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.346 11:45:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:52.346 11:45:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.346 11:45:23 -- 
nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:21:52.346 Found net devices under 0000:18:00.0: mlx_0_0 00:21:52.346 11:45:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.346 11:45:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:52.346 11:45:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.346 11:45:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:52.614 11:45:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.614 11:45:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:21:52.614 Found net devices under 0000:18:00.1: mlx_0_1 00:21:52.614 11:45:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.614 11:45:23 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:52.614 11:45:23 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:52.614 11:45:23 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:52.614 11:45:23 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:21:52.614 11:45:23 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:21:52.614 11:45:23 -- nvmf/common.sh@409 -- # rdma_device_init 00:21:52.614 11:45:23 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:21:52.614 11:45:23 -- nvmf/common.sh@58 -- # uname 00:21:52.614 11:45:23 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:52.614 11:45:23 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:52.614 11:45:23 -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:52.614 11:45:23 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:52.614 11:45:23 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:52.614 11:45:23 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:52.614 11:45:23 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:52.614 11:45:23 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:52.614 11:45:23 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:21:52.614 11:45:23 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:52.614 11:45:23 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:52.614 11:45:23 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:52.614 11:45:23 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:52.614 11:45:23 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:52.614 11:45:23 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:52.614 11:45:23 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:52.614 11:45:23 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:52.614 11:45:23 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:52.614 11:45:23 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:52.614 11:45:23 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:52.614 11:45:23 -- nvmf/common.sh@105 -- # continue 2 00:21:52.614 11:45:23 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:52.614 11:45:23 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:52.614 11:45:23 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:52.614 11:45:23 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:52.614 11:45:23 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:52.614 11:45:23 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:52.614 11:45:23 -- nvmf/common.sh@105 -- # continue 2 00:21:52.614 11:45:23 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:52.614 11:45:23 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:52.614 11:45:23 -- 
nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:52.614 11:45:23 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:52.614 11:45:23 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:52.614 11:45:23 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:52.614 11:45:23 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:52.614 11:45:23 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:52.614 11:45:23 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:52.614 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:52.614 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:21:52.614 altname enp24s0f0np0 00:21:52.614 altname ens785f0np0 00:21:52.614 inet 192.168.100.8/24 scope global mlx_0_0 00:21:52.614 valid_lft forever preferred_lft forever 00:21:52.614 11:45:23 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:52.614 11:45:23 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:52.614 11:45:23 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:52.614 11:45:23 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:52.614 11:45:23 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:52.614 11:45:23 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:52.614 11:45:23 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:52.614 11:45:23 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:52.614 11:45:23 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:52.614 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:52.614 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:21:52.614 altname enp24s0f1np1 00:21:52.614 altname ens785f1np1 00:21:52.614 inet 192.168.100.9/24 scope global mlx_0_1 00:21:52.614 valid_lft forever preferred_lft forever 00:21:52.614 11:45:23 -- nvmf/common.sh@411 -- # return 0 00:21:52.614 11:45:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:52.614 11:45:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:52.614 11:45:23 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:21:52.614 11:45:23 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:21:52.614 11:45:23 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:52.614 11:45:23 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:52.614 11:45:23 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:52.614 11:45:23 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:52.614 11:45:23 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:52.614 11:45:23 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:52.614 11:45:23 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:52.614 11:45:23 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:52.614 11:45:23 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:52.614 11:45:23 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:52.614 11:45:23 -- nvmf/common.sh@105 -- # continue 2 00:21:52.614 11:45:23 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:52.614 11:45:23 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:52.614 11:45:23 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:52.614 11:45:23 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:52.614 11:45:23 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:52.614 11:45:23 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:52.614 11:45:23 -- nvmf/common.sh@105 -- # continue 2 00:21:52.614 11:45:23 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:52.614 
11:45:23 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:52.614 11:45:23 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:52.614 11:45:23 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:52.614 11:45:23 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:52.614 11:45:23 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:52.614 11:45:23 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:52.614 11:45:23 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:52.614 11:45:23 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:52.614 11:45:23 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:52.614 11:45:23 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:52.614 11:45:23 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:52.615 11:45:23 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:21:52.615 192.168.100.9' 00:21:52.615 11:45:23 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:21:52.615 192.168.100.9' 00:21:52.615 11:45:23 -- nvmf/common.sh@446 -- # head -n 1 00:21:52.615 11:45:23 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:52.615 11:45:23 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:21:52.615 192.168.100.9' 00:21:52.615 11:45:23 -- nvmf/common.sh@447 -- # head -n 1 00:21:52.615 11:45:23 -- nvmf/common.sh@447 -- # tail -n +2 00:21:52.615 11:45:23 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:52.615 11:45:23 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:21:52.615 11:45:23 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:52.615 11:45:23 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:21:52.615 11:45:23 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:21:52.615 11:45:23 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:21:52.615 11:45:23 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:21:52.615 11:45:23 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:21:52.615 11:45:23 -- nvmf/common.sh@717 -- # local ip 00:21:52.615 11:45:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:52.615 11:45:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:52.615 11:45:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:52.615 11:45:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:52.615 11:45:23 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:52.615 11:45:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:52.615 11:45:23 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:52.615 11:45:23 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:52.615 11:45:23 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:52.615 11:45:23 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:21:52.615 11:45:23 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:21:52.615 11:45:23 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:21:52.615 11:45:23 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:21:52.615 11:45:23 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:52.615 11:45:23 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:52.615 11:45:23 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:52.615 11:45:23 -- nvmf/common.sh@628 -- # local block 
nvme 00:21:52.615 11:45:23 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:21:52.615 11:45:23 -- nvmf/common.sh@631 -- # modprobe nvmet 00:21:52.615 11:45:23 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:52.615 11:45:23 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:21:55.150 Waiting for block devices as requested 00:21:55.408 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:21:55.408 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:21:55.408 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:21:55.667 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:21:55.667 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:21:55.667 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:21:55.925 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:21:55.925 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:21:55.925 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:21:55.925 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:21:56.184 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:21:56.184 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:21:56.184 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:21:56.445 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:21:56.445 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:21:56.445 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:21:56.704 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:21:56.704 11:45:27 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:21:56.704 11:45:27 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:56.704 11:45:27 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:21:56.704 11:45:27 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:21:56.704 11:45:27 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:56.704 11:45:27 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:21:56.704 11:45:27 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:21:56.704 11:45:27 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:21:56.704 11:45:27 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:21:56.704 No valid GPT data, bailing 00:21:56.704 11:45:27 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:56.704 11:45:27 -- scripts/common.sh@391 -- # pt= 00:21:56.704 11:45:27 -- scripts/common.sh@392 -- # return 1 00:21:56.704 11:45:27 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:21:56.704 11:45:27 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:21:56.704 11:45:27 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:56.704 11:45:27 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:56.704 11:45:27 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:56.964 11:45:27 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:56.964 11:45:27 -- nvmf/common.sh@656 -- # echo 1 00:21:56.964 11:45:27 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:21:56.964 11:45:27 -- nvmf/common.sh@658 -- # echo 1 00:21:56.964 11:45:27 -- nvmf/common.sh@660 -- # echo 192.168.100.8 00:21:56.964 11:45:27 -- nvmf/common.sh@661 -- # echo rdma 00:21:56.964 11:45:27 -- nvmf/common.sh@662 -- # echo 4420 00:21:56.964 11:45:27 -- nvmf/common.sh@663 -- # echo ipv4 00:21:56.964 11:45:27 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
/sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:56.964 11:45:27 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:21:56.964 00:21:56.964 Discovery Log Number of Records 2, Generation counter 2 00:21:56.964 =====Discovery Log Entry 0====== 00:21:56.964 trtype: rdma 00:21:56.964 adrfam: ipv4 00:21:56.964 subtype: current discovery subsystem 00:21:56.964 treq: not specified, sq flow control disable supported 00:21:56.964 portid: 1 00:21:56.964 trsvcid: 4420 00:21:56.964 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:56.964 traddr: 192.168.100.8 00:21:56.964 eflags: none 00:21:56.964 rdma_prtype: not specified 00:21:56.964 rdma_qptype: connected 00:21:56.964 rdma_cms: rdma-cm 00:21:56.964 rdma_pkey: 0x0000 00:21:56.964 =====Discovery Log Entry 1====== 00:21:56.964 trtype: rdma 00:21:56.964 adrfam: ipv4 00:21:56.964 subtype: nvme subsystem 00:21:56.964 treq: not specified, sq flow control disable supported 00:21:56.964 portid: 1 00:21:56.964 trsvcid: 4420 00:21:56.964 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:56.964 traddr: 192.168.100.8 00:21:56.964 eflags: none 00:21:56.964 rdma_prtype: not specified 00:21:56.964 rdma_qptype: connected 00:21:56.964 rdma_cms: rdma-cm 00:21:56.964 rdma_pkey: 0x0000 00:21:56.964 11:45:27 -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:21:56.964 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:21:56.964 EAL: No free 2048 kB hugepages reported on node 1 00:21:57.225 ===================================================== 00:21:57.225 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:57.225 ===================================================== 00:21:57.225 Controller Capabilities/Features 00:21:57.225 ================================ 00:21:57.225 Vendor ID: 0000 00:21:57.225 Subsystem Vendor ID: 0000 00:21:57.225 Serial Number: ac61ee0ff2094c545f56 00:21:57.225 Model Number: Linux 00:21:57.225 Firmware Version: 6.7.0-68 00:21:57.225 Recommended Arb Burst: 0 00:21:57.225 IEEE OUI Identifier: 00 00 00 00:21:57.225 Multi-path I/O 00:21:57.225 May have multiple subsystem ports: No 00:21:57.225 May have multiple controllers: No 00:21:57.225 Associated with SR-IOV VF: No 00:21:57.225 Max Data Transfer Size: Unlimited 00:21:57.225 Max Number of Namespaces: 0 00:21:57.225 Max Number of I/O Queues: 1024 00:21:57.225 NVMe Specification Version (VS): 1.3 00:21:57.225 NVMe Specification Version (Identify): 1.3 00:21:57.225 Maximum Queue Entries: 128 00:21:57.225 Contiguous Queues Required: No 00:21:57.225 Arbitration Mechanisms Supported 00:21:57.225 Weighted Round Robin: Not Supported 00:21:57.225 Vendor Specific: Not Supported 00:21:57.225 Reset Timeout: 7500 ms 00:21:57.225 Doorbell Stride: 4 bytes 00:21:57.225 NVM Subsystem Reset: Not Supported 00:21:57.225 Command Sets Supported 00:21:57.225 NVM Command Set: Supported 00:21:57.225 Boot Partition: Not Supported 00:21:57.225 Memory Page Size Minimum: 4096 bytes 00:21:57.225 Memory Page Size Maximum: 4096 bytes 00:21:57.225 Persistent Memory Region: Not Supported 00:21:57.225 Optional Asynchronous Events Supported 00:21:57.225 Namespace Attribute Notices: Not Supported 00:21:57.225 Firmware Activation Notices: Not Supported 00:21:57.225 ANA Change Notices: Not Supported 00:21:57.225 PLE Aggregate 
Log Change Notices: Not Supported 00:21:57.225 LBA Status Info Alert Notices: Not Supported 00:21:57.225 EGE Aggregate Log Change Notices: Not Supported 00:21:57.225 Normal NVM Subsystem Shutdown event: Not Supported 00:21:57.225 Zone Descriptor Change Notices: Not Supported 00:21:57.225 Discovery Log Change Notices: Supported 00:21:57.225 Controller Attributes 00:21:57.225 128-bit Host Identifier: Not Supported 00:21:57.225 Non-Operational Permissive Mode: Not Supported 00:21:57.225 NVM Sets: Not Supported 00:21:57.225 Read Recovery Levels: Not Supported 00:21:57.225 Endurance Groups: Not Supported 00:21:57.225 Predictable Latency Mode: Not Supported 00:21:57.225 Traffic Based Keep ALive: Not Supported 00:21:57.225 Namespace Granularity: Not Supported 00:21:57.225 SQ Associations: Not Supported 00:21:57.225 UUID List: Not Supported 00:21:57.225 Multi-Domain Subsystem: Not Supported 00:21:57.225 Fixed Capacity Management: Not Supported 00:21:57.225 Variable Capacity Management: Not Supported 00:21:57.225 Delete Endurance Group: Not Supported 00:21:57.225 Delete NVM Set: Not Supported 00:21:57.225 Extended LBA Formats Supported: Not Supported 00:21:57.225 Flexible Data Placement Supported: Not Supported 00:21:57.225 00:21:57.225 Controller Memory Buffer Support 00:21:57.225 ================================ 00:21:57.225 Supported: No 00:21:57.225 00:21:57.225 Persistent Memory Region Support 00:21:57.225 ================================ 00:21:57.225 Supported: No 00:21:57.225 00:21:57.225 Admin Command Set Attributes 00:21:57.225 ============================ 00:21:57.225 Security Send/Receive: Not Supported 00:21:57.225 Format NVM: Not Supported 00:21:57.225 Firmware Activate/Download: Not Supported 00:21:57.225 Namespace Management: Not Supported 00:21:57.225 Device Self-Test: Not Supported 00:21:57.225 Directives: Not Supported 00:21:57.225 NVMe-MI: Not Supported 00:21:57.225 Virtualization Management: Not Supported 00:21:57.225 Doorbell Buffer Config: Not Supported 00:21:57.225 Get LBA Status Capability: Not Supported 00:21:57.225 Command & Feature Lockdown Capability: Not Supported 00:21:57.225 Abort Command Limit: 1 00:21:57.225 Async Event Request Limit: 1 00:21:57.225 Number of Firmware Slots: N/A 00:21:57.225 Firmware Slot 1 Read-Only: N/A 00:21:57.225 Firmware Activation Without Reset: N/A 00:21:57.225 Multiple Update Detection Support: N/A 00:21:57.225 Firmware Update Granularity: No Information Provided 00:21:57.225 Per-Namespace SMART Log: No 00:21:57.225 Asymmetric Namespace Access Log Page: Not Supported 00:21:57.225 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:57.225 Command Effects Log Page: Not Supported 00:21:57.225 Get Log Page Extended Data: Supported 00:21:57.225 Telemetry Log Pages: Not Supported 00:21:57.225 Persistent Event Log Pages: Not Supported 00:21:57.225 Supported Log Pages Log Page: May Support 00:21:57.225 Commands Supported & Effects Log Page: Not Supported 00:21:57.225 Feature Identifiers & Effects Log Page:May Support 00:21:57.225 NVMe-MI Commands & Effects Log Page: May Support 00:21:57.225 Data Area 4 for Telemetry Log: Not Supported 00:21:57.225 Error Log Page Entries Supported: 1 00:21:57.225 Keep Alive: Not Supported 00:21:57.225 00:21:57.225 NVM Command Set Attributes 00:21:57.225 ========================== 00:21:57.225 Submission Queue Entry Size 00:21:57.225 Max: 1 00:21:57.225 Min: 1 00:21:57.225 Completion Queue Entry Size 00:21:57.225 Max: 1 00:21:57.225 Min: 1 00:21:57.225 Number of Namespaces: 0 00:21:57.225 Compare Command: Not 
Supported 00:21:57.225 Write Uncorrectable Command: Not Supported 00:21:57.225 Dataset Management Command: Not Supported 00:21:57.225 Write Zeroes Command: Not Supported 00:21:57.225 Set Features Save Field: Not Supported 00:21:57.225 Reservations: Not Supported 00:21:57.225 Timestamp: Not Supported 00:21:57.225 Copy: Not Supported 00:21:57.225 Volatile Write Cache: Not Present 00:21:57.225 Atomic Write Unit (Normal): 1 00:21:57.225 Atomic Write Unit (PFail): 1 00:21:57.225 Atomic Compare & Write Unit: 1 00:21:57.225 Fused Compare & Write: Not Supported 00:21:57.225 Scatter-Gather List 00:21:57.225 SGL Command Set: Supported 00:21:57.225 SGL Keyed: Supported 00:21:57.225 SGL Bit Bucket Descriptor: Not Supported 00:21:57.225 SGL Metadata Pointer: Not Supported 00:21:57.225 Oversized SGL: Not Supported 00:21:57.225 SGL Metadata Address: Not Supported 00:21:57.225 SGL Offset: Supported 00:21:57.225 Transport SGL Data Block: Not Supported 00:21:57.225 Replay Protected Memory Block: Not Supported 00:21:57.225 00:21:57.225 Firmware Slot Information 00:21:57.225 ========================= 00:21:57.225 Active slot: 0 00:21:57.225 00:21:57.225 00:21:57.226 Error Log 00:21:57.226 ========= 00:21:57.226 00:21:57.226 Active Namespaces 00:21:57.226 ================= 00:21:57.226 Discovery Log Page 00:21:57.226 ================== 00:21:57.226 Generation Counter: 2 00:21:57.226 Number of Records: 2 00:21:57.226 Record Format: 0 00:21:57.226 00:21:57.226 Discovery Log Entry 0 00:21:57.226 ---------------------- 00:21:57.226 Transport Type: 1 (RDMA) 00:21:57.226 Address Family: 1 (IPv4) 00:21:57.226 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:57.226 Entry Flags: 00:21:57.226 Duplicate Returned Information: 0 00:21:57.226 Explicit Persistent Connection Support for Discovery: 0 00:21:57.226 Transport Requirements: 00:21:57.226 Secure Channel: Not Specified 00:21:57.226 Port ID: 1 (0x0001) 00:21:57.226 Controller ID: 65535 (0xffff) 00:21:57.226 Admin Max SQ Size: 32 00:21:57.226 Transport Service Identifier: 4420 00:21:57.226 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:57.226 Transport Address: 192.168.100.8 00:21:57.226 Transport Specific Address Subtype - RDMA 00:21:57.226 RDMA QP Service Type: 1 (Reliable Connected) 00:21:57.226 RDMA Provider Type: 1 (No provider specified) 00:21:57.226 RDMA CM Service: 1 (RDMA_CM) 00:21:57.226 Discovery Log Entry 1 00:21:57.226 ---------------------- 00:21:57.226 Transport Type: 1 (RDMA) 00:21:57.226 Address Family: 1 (IPv4) 00:21:57.226 Subsystem Type: 2 (NVM Subsystem) 00:21:57.226 Entry Flags: 00:21:57.226 Duplicate Returned Information: 0 00:21:57.226 Explicit Persistent Connection Support for Discovery: 0 00:21:57.226 Transport Requirements: 00:21:57.226 Secure Channel: Not Specified 00:21:57.226 Port ID: 1 (0x0001) 00:21:57.226 Controller ID: 65535 (0xffff) 00:21:57.226 Admin Max SQ Size: 32 00:21:57.226 Transport Service Identifier: 4420 00:21:57.226 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:21:57.226 Transport Address: 192.168.100.8 00:21:57.226 Transport Specific Address Subtype - RDMA 00:21:57.226 RDMA QP Service Type: 1 (Reliable Connected) 00:21:57.226 RDMA Provider Type: 1 (No provider specified) 00:21:57.226 RDMA CM Service: 1 (RDMA_CM) 00:21:57.226 11:45:27 -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:57.226 EAL: No free 2048 kB 
hugepages reported on node 1 00:21:57.226 get_feature(0x01) failed 00:21:57.226 get_feature(0x02) failed 00:21:57.226 get_feature(0x04) failed 00:21:57.226 ===================================================== 00:21:57.226 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:21:57.226 ===================================================== 00:21:57.226 Controller Capabilities/Features 00:21:57.226 ================================ 00:21:57.226 Vendor ID: 0000 00:21:57.226 Subsystem Vendor ID: 0000 00:21:57.226 Serial Number: 1f981dccadd33b9673b7 00:21:57.226 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:21:57.226 Firmware Version: 6.7.0-68 00:21:57.226 Recommended Arb Burst: 6 00:21:57.226 IEEE OUI Identifier: 00 00 00 00:21:57.226 Multi-path I/O 00:21:57.226 May have multiple subsystem ports: Yes 00:21:57.226 May have multiple controllers: Yes 00:21:57.226 Associated with SR-IOV VF: No 00:21:57.226 Max Data Transfer Size: 1048576 00:21:57.226 Max Number of Namespaces: 1024 00:21:57.226 Max Number of I/O Queues: 128 00:21:57.226 NVMe Specification Version (VS): 1.3 00:21:57.226 NVMe Specification Version (Identify): 1.3 00:21:57.226 Maximum Queue Entries: 128 00:21:57.226 Contiguous Queues Required: No 00:21:57.226 Arbitration Mechanisms Supported 00:21:57.226 Weighted Round Robin: Not Supported 00:21:57.226 Vendor Specific: Not Supported 00:21:57.226 Reset Timeout: 7500 ms 00:21:57.226 Doorbell Stride: 4 bytes 00:21:57.226 NVM Subsystem Reset: Not Supported 00:21:57.226 Command Sets Supported 00:21:57.226 NVM Command Set: Supported 00:21:57.226 Boot Partition: Not Supported 00:21:57.226 Memory Page Size Minimum: 4096 bytes 00:21:57.226 Memory Page Size Maximum: 4096 bytes 00:21:57.226 Persistent Memory Region: Not Supported 00:21:57.226 Optional Asynchronous Events Supported 00:21:57.226 Namespace Attribute Notices: Supported 00:21:57.226 Firmware Activation Notices: Not Supported 00:21:57.226 ANA Change Notices: Supported 00:21:57.226 PLE Aggregate Log Change Notices: Not Supported 00:21:57.226 LBA Status Info Alert Notices: Not Supported 00:21:57.226 EGE Aggregate Log Change Notices: Not Supported 00:21:57.226 Normal NVM Subsystem Shutdown event: Not Supported 00:21:57.226 Zone Descriptor Change Notices: Not Supported 00:21:57.226 Discovery Log Change Notices: Not Supported 00:21:57.226 Controller Attributes 00:21:57.226 128-bit Host Identifier: Supported 00:21:57.226 Non-Operational Permissive Mode: Not Supported 00:21:57.226 NVM Sets: Not Supported 00:21:57.226 Read Recovery Levels: Not Supported 00:21:57.226 Endurance Groups: Not Supported 00:21:57.226 Predictable Latency Mode: Not Supported 00:21:57.226 Traffic Based Keep ALive: Supported 00:21:57.226 Namespace Granularity: Not Supported 00:21:57.226 SQ Associations: Not Supported 00:21:57.226 UUID List: Not Supported 00:21:57.226 Multi-Domain Subsystem: Not Supported 00:21:57.226 Fixed Capacity Management: Not Supported 00:21:57.226 Variable Capacity Management: Not Supported 00:21:57.226 Delete Endurance Group: Not Supported 00:21:57.226 Delete NVM Set: Not Supported 00:21:57.226 Extended LBA Formats Supported: Not Supported 00:21:57.226 Flexible Data Placement Supported: Not Supported 00:21:57.226 00:21:57.226 Controller Memory Buffer Support 00:21:57.226 ================================ 00:21:57.226 Supported: No 00:21:57.226 00:21:57.226 Persistent Memory Region Support 00:21:57.226 ================================ 00:21:57.226 Supported: No 00:21:57.226 00:21:57.226 Admin Command Set Attributes 
00:21:57.226 ============================ 00:21:57.226 Security Send/Receive: Not Supported 00:21:57.226 Format NVM: Not Supported 00:21:57.226 Firmware Activate/Download: Not Supported 00:21:57.226 Namespace Management: Not Supported 00:21:57.226 Device Self-Test: Not Supported 00:21:57.226 Directives: Not Supported 00:21:57.226 NVMe-MI: Not Supported 00:21:57.226 Virtualization Management: Not Supported 00:21:57.226 Doorbell Buffer Config: Not Supported 00:21:57.226 Get LBA Status Capability: Not Supported 00:21:57.226 Command & Feature Lockdown Capability: Not Supported 00:21:57.226 Abort Command Limit: 4 00:21:57.226 Async Event Request Limit: 4 00:21:57.226 Number of Firmware Slots: N/A 00:21:57.226 Firmware Slot 1 Read-Only: N/A 00:21:57.226 Firmware Activation Without Reset: N/A 00:21:57.226 Multiple Update Detection Support: N/A 00:21:57.226 Firmware Update Granularity: No Information Provided 00:21:57.226 Per-Namespace SMART Log: Yes 00:21:57.226 Asymmetric Namespace Access Log Page: Supported 00:21:57.226 ANA Transition Time : 10 sec 00:21:57.226 00:21:57.226 Asymmetric Namespace Access Capabilities 00:21:57.226 ANA Optimized State : Supported 00:21:57.226 ANA Non-Optimized State : Supported 00:21:57.226 ANA Inaccessible State : Supported 00:21:57.226 ANA Persistent Loss State : Supported 00:21:57.226 ANA Change State : Supported 00:21:57.226 ANAGRPID is not changed : No 00:21:57.226 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:21:57.226 00:21:57.226 ANA Group Identifier Maximum : 128 00:21:57.226 Number of ANA Group Identifiers : 128 00:21:57.226 Max Number of Allowed Namespaces : 1024 00:21:57.226 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:21:57.226 Command Effects Log Page: Supported 00:21:57.226 Get Log Page Extended Data: Supported 00:21:57.226 Telemetry Log Pages: Not Supported 00:21:57.226 Persistent Event Log Pages: Not Supported 00:21:57.226 Supported Log Pages Log Page: May Support 00:21:57.226 Commands Supported & Effects Log Page: Not Supported 00:21:57.226 Feature Identifiers & Effects Log Page:May Support 00:21:57.226 NVMe-MI Commands & Effects Log Page: May Support 00:21:57.226 Data Area 4 for Telemetry Log: Not Supported 00:21:57.226 Error Log Page Entries Supported: 128 00:21:57.226 Keep Alive: Supported 00:21:57.226 Keep Alive Granularity: 1000 ms 00:21:57.226 00:21:57.226 NVM Command Set Attributes 00:21:57.226 ========================== 00:21:57.226 Submission Queue Entry Size 00:21:57.226 Max: 64 00:21:57.226 Min: 64 00:21:57.226 Completion Queue Entry Size 00:21:57.226 Max: 16 00:21:57.226 Min: 16 00:21:57.226 Number of Namespaces: 1024 00:21:57.226 Compare Command: Not Supported 00:21:57.226 Write Uncorrectable Command: Not Supported 00:21:57.226 Dataset Management Command: Supported 00:21:57.226 Write Zeroes Command: Supported 00:21:57.226 Set Features Save Field: Not Supported 00:21:57.226 Reservations: Not Supported 00:21:57.226 Timestamp: Not Supported 00:21:57.226 Copy: Not Supported 00:21:57.226 Volatile Write Cache: Present 00:21:57.226 Atomic Write Unit (Normal): 1 00:21:57.226 Atomic Write Unit (PFail): 1 00:21:57.226 Atomic Compare & Write Unit: 1 00:21:57.227 Fused Compare & Write: Not Supported 00:21:57.227 Scatter-Gather List 00:21:57.227 SGL Command Set: Supported 00:21:57.227 SGL Keyed: Supported 00:21:57.227 SGL Bit Bucket Descriptor: Not Supported 00:21:57.227 SGL Metadata Pointer: Not Supported 00:21:57.227 Oversized SGL: Not Supported 00:21:57.227 SGL Metadata Address: Not Supported 00:21:57.227 SGL Offset: Supported 
00:21:57.227 Transport SGL Data Block: Not Supported 00:21:57.227 Replay Protected Memory Block: Not Supported 00:21:57.227 00:21:57.227 Firmware Slot Information 00:21:57.227 ========================= 00:21:57.227 Active slot: 0 00:21:57.227 00:21:57.227 Asymmetric Namespace Access 00:21:57.227 =========================== 00:21:57.227 Change Count : 0 00:21:57.227 Number of ANA Group Descriptors : 1 00:21:57.227 ANA Group Descriptor : 0 00:21:57.227 ANA Group ID : 1 00:21:57.227 Number of NSID Values : 1 00:21:57.227 Change Count : 0 00:21:57.227 ANA State : 1 00:21:57.227 Namespace Identifier : 1 00:21:57.227 00:21:57.227 Commands Supported and Effects 00:21:57.227 ============================== 00:21:57.227 Admin Commands 00:21:57.227 -------------- 00:21:57.227 Get Log Page (02h): Supported 00:21:57.227 Identify (06h): Supported 00:21:57.227 Abort (08h): Supported 00:21:57.227 Set Features (09h): Supported 00:21:57.227 Get Features (0Ah): Supported 00:21:57.227 Asynchronous Event Request (0Ch): Supported 00:21:57.227 Keep Alive (18h): Supported 00:21:57.227 I/O Commands 00:21:57.227 ------------ 00:21:57.227 Flush (00h): Supported 00:21:57.227 Write (01h): Supported LBA-Change 00:21:57.227 Read (02h): Supported 00:21:57.227 Write Zeroes (08h): Supported LBA-Change 00:21:57.227 Dataset Management (09h): Supported 00:21:57.227 00:21:57.227 Error Log 00:21:57.227 ========= 00:21:57.227 Entry: 0 00:21:57.227 Error Count: 0x3 00:21:57.227 Submission Queue Id: 0x0 00:21:57.227 Command Id: 0x5 00:21:57.227 Phase Bit: 0 00:21:57.227 Status Code: 0x2 00:21:57.227 Status Code Type: 0x0 00:21:57.227 Do Not Retry: 1 00:21:57.227 Error Location: 0x28 00:21:57.227 LBA: 0x0 00:21:57.227 Namespace: 0x0 00:21:57.227 Vendor Log Page: 0x0 00:21:57.227 ----------- 00:21:57.227 Entry: 1 00:21:57.227 Error Count: 0x2 00:21:57.227 Submission Queue Id: 0x0 00:21:57.227 Command Id: 0x5 00:21:57.227 Phase Bit: 0 00:21:57.227 Status Code: 0x2 00:21:57.227 Status Code Type: 0x0 00:21:57.227 Do Not Retry: 1 00:21:57.227 Error Location: 0x28 00:21:57.227 LBA: 0x0 00:21:57.227 Namespace: 0x0 00:21:57.227 Vendor Log Page: 0x0 00:21:57.227 ----------- 00:21:57.227 Entry: 2 00:21:57.227 Error Count: 0x1 00:21:57.227 Submission Queue Id: 0x0 00:21:57.227 Command Id: 0x0 00:21:57.227 Phase Bit: 0 00:21:57.227 Status Code: 0x2 00:21:57.227 Status Code Type: 0x0 00:21:57.227 Do Not Retry: 1 00:21:57.227 Error Location: 0x28 00:21:57.227 LBA: 0x0 00:21:57.227 Namespace: 0x0 00:21:57.227 Vendor Log Page: 0x0 00:21:57.227 00:21:57.227 Number of Queues 00:21:57.227 ================ 00:21:57.227 Number of I/O Submission Queues: 128 00:21:57.227 Number of I/O Completion Queues: 128 00:21:57.227 00:21:57.227 ZNS Specific Controller Data 00:21:57.227 ============================ 00:21:57.227 Zone Append Size Limit: 0 00:21:57.227 00:21:57.227 00:21:57.227 Active Namespaces 00:21:57.227 ================= 00:21:57.227 get_feature(0x05) failed 00:21:57.227 Namespace ID:1 00:21:57.227 Command Set Identifier: NVM (00h) 00:21:57.227 Deallocate: Supported 00:21:57.227 Deallocated/Unwritten Error: Not Supported 00:21:57.227 Deallocated Read Value: Unknown 00:21:57.227 Deallocate in Write Zeroes: Not Supported 00:21:57.227 Deallocated Guard Field: 0xFFFF 00:21:57.227 Flush: Supported 00:21:57.227 Reservation: Not Supported 00:21:57.227 Namespace Sharing Capabilities: Multiple Controllers 00:21:57.227 Size (in LBAs): 15628053168 (7452GiB) 00:21:57.227 Capacity (in LBAs): 15628053168 (7452GiB) 00:21:57.227 Utilization (in LBAs): 15628053168 
(7452GiB) 00:21:57.227 UUID: e7e58b06-4bfe-4239-abd6-432d2455ec57 00:21:57.227 Thin Provisioning: Not Supported 00:21:57.227 Per-NS Atomic Units: Yes 00:21:57.227 Atomic Boundary Size (Normal): 0 00:21:57.227 Atomic Boundary Size (PFail): 0 00:21:57.227 Atomic Boundary Offset: 0 00:21:57.227 NGUID/EUI64 Never Reused: No 00:21:57.227 ANA group ID: 1 00:21:57.227 Namespace Write Protected: No 00:21:57.227 Number of LBA Formats: 1 00:21:57.227 Current LBA Format: LBA Format #00 00:21:57.227 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:57.227 00:21:57.227 11:45:27 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:21:57.227 11:45:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:57.227 11:45:27 -- nvmf/common.sh@117 -- # sync 00:21:57.227 11:45:27 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:57.227 11:45:27 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:57.227 11:45:27 -- nvmf/common.sh@120 -- # set +e 00:21:57.227 11:45:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:57.227 11:45:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:57.227 rmmod nvme_rdma 00:21:57.227 rmmod nvme_fabrics 00:21:57.227 11:45:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:57.227 11:45:27 -- nvmf/common.sh@124 -- # set -e 00:21:57.486 11:45:27 -- nvmf/common.sh@125 -- # return 0 00:21:57.486 11:45:27 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:21:57.486 11:45:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:57.487 11:45:27 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:21:57.487 11:45:27 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:21:57.487 11:45:27 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:57.487 11:45:27 -- nvmf/common.sh@675 -- # echo 0 00:21:57.487 11:45:28 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:57.487 11:45:28 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:57.487 11:45:28 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:57.487 11:45:28 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:57.487 11:45:28 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:21:57.487 11:45:28 -- nvmf/common.sh@684 -- # modprobe -r nvmet_rdma nvmet 00:21:57.487 11:45:28 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:22:00.775 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:22:00.775 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:22:00.775 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:22:00.775 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:22:00.775 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:22:00.775 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:22:00.775 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:22:00.775 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:22:00.775 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:22:00.775 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:22:00.775 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:22:00.775 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:22:00.775 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:22:00.775 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:22:00.775 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:22:00.775 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:22:06.113 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:22:06.113 
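The vfio-pci rebinds above are setup.sh reset handing the NVMe drive and the ioatdma channels back to userspace control now that the kernel target is gone. Condensed, what configure_kernel_target built at the start of this test and clean_kernel_target just tore down is a set of nvmet configfs objects: a subsystem with one namespace, an RDMA port, and a port-to-subsystem symlink, removed in reverse order. A sketch under stated assumptions follows; the xtrace shows only the echoed values (common.sh@654-663), not their redirect targets, so the attribute file names below are the standard nvmet configfs ones rather than anything visible in the log:

# Sketch of the kernel-target lifecycle traced above; redirect targets are the
# standard nvmet configfs attribute files, inferred, not shown by the xtrace.
modprobe nvmet_rdma
sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
mkdir -p "$sub/namespaces/1" "$port"
echo 1 > "$sub/attr_allow_any_host"              # accept any host NQN
echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
echo 1 > "$sub/namespaces/1/enable"
echo 192.168.100.8 > "$port/addr_traddr"         # values as in common.sh@660-663
echo rdma > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"                 # expose the subsystem on the port
# clean_kernel_target mirrors this in reverse:
echo 0 > "$sub/namespaces/1/enable"
rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir "$sub/namespaces/1" "$port" "$sub"
modprobe -r nvmet_rdma nvmet

The nvme discover and spdk_nvme_identify runs earlier in the test are then plain initiators pointed at the 192.168.100.8:4420 port created here.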
00:22:06.113 real 0m18.710s 00:22:06.113 user 0m3.965s 00:22:06.113 sys 0m8.979s 00:22:06.113 11:45:36 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:06.113 11:45:36 -- common/autotest_common.sh@10 -- # set +x 00:22:06.113 ************************************ 00:22:06.113 END TEST nvmf_identify_kernel_target 00:22:06.113 ************************************ 00:22:06.113 11:45:36 -- nvmf/nvmf.sh@103 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:22:06.113 11:45:36 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:06.113 11:45:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:06.113 11:45:36 -- common/autotest_common.sh@10 -- # set +x 00:22:06.113 ************************************ 00:22:06.113 START TEST nvmf_auth 00:22:06.113 ************************************ 00:22:06.113 11:45:36 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:22:06.113 * Looking for test storage... 00:22:06.113 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:06.113 11:45:36 -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:06.113 11:45:36 -- nvmf/common.sh@7 -- # uname -s 00:22:06.113 11:45:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:06.113 11:45:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:06.113 11:45:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:06.113 11:45:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:06.113 11:45:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:06.113 11:45:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:06.113 11:45:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:06.113 11:45:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:06.113 11:45:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:06.113 11:45:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:06.113 11:45:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:22:06.113 11:45:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:22:06.113 11:45:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:06.113 11:45:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:06.113 11:45:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:06.113 11:45:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:06.113 11:45:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:06.113 11:45:36 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:06.113 11:45:36 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:06.113 11:45:36 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:06.113 11:45:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
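One detail worth noting from the common.sh sourcing above: the host identity is minted once with nvme gen-hostnqn, and the bare UUID doubles as the host ID in the NVME_HOST argument array. A sketch of that derivation (the parameter expansion is an assumed equivalent; the trace only shows the resulting values):

    hostnqn=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    hostid=${hostnqn##*uuid:}        # assumed: strip the NQN prefix, keep the bare UUID
    nvme_host=(--hostnqn="$hostnqn" --hostid="$hostid")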
00:22:06.114 11:45:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.114 11:45:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.114 11:45:36 -- paths/export.sh@5 -- # export PATH 00:22:06.114 11:45:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.114 11:45:36 -- nvmf/common.sh@47 -- # : 0 00:22:06.114 11:45:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:06.114 11:45:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:06.114 11:45:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:06.114 11:45:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:06.114 11:45:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:06.114 11:45:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:06.114 11:45:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:06.114 11:45:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:06.114 11:45:36 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:22:06.114 11:45:36 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:06.114 11:45:36 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:22:06.114 11:45:36 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:22:06.114 11:45:36 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:06.114 11:45:36 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:06.114 11:45:36 -- host/auth.sh@21 -- # keys=() 00:22:06.114 11:45:36 -- host/auth.sh@21 -- # ckeys=() 00:22:06.114 11:45:36 -- host/auth.sh@81 -- # nvmftestinit 00:22:06.114 11:45:36 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:22:06.114 11:45:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:06.114 11:45:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:06.114 11:45:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:06.114 11:45:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:06.114 
11:45:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.114 11:45:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:06.114 11:45:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.114 11:45:36 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:06.114 11:45:36 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:06.114 11:45:36 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:06.114 11:45:36 -- common/autotest_common.sh@10 -- # set +x 00:22:12.684 11:45:42 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:12.684 11:45:42 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:12.684 11:45:42 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:12.684 11:45:42 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:12.684 11:45:42 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:12.684 11:45:42 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:12.684 11:45:42 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:12.684 11:45:42 -- nvmf/common.sh@295 -- # net_devs=() 00:22:12.684 11:45:42 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:12.684 11:45:42 -- nvmf/common.sh@296 -- # e810=() 00:22:12.684 11:45:42 -- nvmf/common.sh@296 -- # local -ga e810 00:22:12.684 11:45:42 -- nvmf/common.sh@297 -- # x722=() 00:22:12.684 11:45:42 -- nvmf/common.sh@297 -- # local -ga x722 00:22:12.684 11:45:42 -- nvmf/common.sh@298 -- # mlx=() 00:22:12.684 11:45:42 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:12.684 11:45:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:12.684 11:45:42 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:12.684 11:45:42 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:12.684 11:45:42 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:12.684 11:45:42 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:12.684 11:45:42 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:12.684 11:45:42 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:12.684 11:45:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:12.684 11:45:42 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:12.684 11:45:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:12.684 11:45:42 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:12.684 11:45:42 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:12.684 11:45:42 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:12.684 11:45:42 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:12.684 11:45:42 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:12.684 11:45:42 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:12.684 11:45:42 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:12.684 11:45:42 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:12.684 11:45:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:12.684 11:45:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:22:12.684 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:22:12.684 11:45:42 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:12.684 11:45:42 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:12.684 11:45:42 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:12.684 11:45:42 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:12.684 
11:45:42 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:12.684 11:45:42 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:12.684 11:45:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:12.684 11:45:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:22:12.684 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:22:12.684 11:45:42 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:12.684 11:45:42 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:12.684 11:45:42 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:12.684 11:45:42 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:12.684 11:45:42 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:12.684 11:45:42 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:12.684 11:45:42 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:12.684 11:45:42 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:12.684 11:45:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:12.684 11:45:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.684 11:45:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:12.684 11:45:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.684 11:45:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:22:12.684 Found net devices under 0000:18:00.0: mlx_0_0 00:22:12.684 11:45:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.684 11:45:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:12.684 11:45:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.684 11:45:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:12.684 11:45:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.684 11:45:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:22:12.684 Found net devices under 0000:18:00.1: mlx_0_1 00:22:12.684 11:45:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.684 11:45:42 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:12.684 11:45:42 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:12.684 11:45:42 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:12.684 11:45:42 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:22:12.684 11:45:42 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:22:12.684 11:45:42 -- nvmf/common.sh@409 -- # rdma_device_init 00:22:12.684 11:45:42 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:22:12.684 11:45:42 -- nvmf/common.sh@58 -- # uname 00:22:12.684 11:45:42 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:12.684 11:45:42 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:12.684 11:45:42 -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:12.684 11:45:42 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:12.684 11:45:42 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:12.684 11:45:42 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:12.684 11:45:42 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:12.684 11:45:42 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:12.684 11:45:43 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:22:12.684 11:45:43 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:12.684 11:45:43 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:12.684 11:45:43 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:12.684 11:45:43 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:12.684 11:45:43 -- nvmf/common.sh@94 -- # rxe_cfg 
rxe-net 00:22:12.684 11:45:43 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:12.684 11:45:43 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:12.684 11:45:43 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:12.684 11:45:43 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:12.684 11:45:43 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:12.684 11:45:43 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:12.684 11:45:43 -- nvmf/common.sh@105 -- # continue 2 00:22:12.684 11:45:43 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:12.684 11:45:43 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:12.684 11:45:43 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:12.684 11:45:43 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:12.684 11:45:43 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:12.684 11:45:43 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:12.685 11:45:43 -- nvmf/common.sh@105 -- # continue 2 00:22:12.685 11:45:43 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:12.685 11:45:43 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:12.685 11:45:43 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:12.685 11:45:43 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:12.685 11:45:43 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:12.685 11:45:43 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:12.685 11:45:43 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:12.685 11:45:43 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:12.685 11:45:43 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:12.685 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:12.685 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:22:12.685 altname enp24s0f0np0 00:22:12.685 altname ens785f0np0 00:22:12.685 inet 192.168.100.8/24 scope global mlx_0_0 00:22:12.685 valid_lft forever preferred_lft forever 00:22:12.685 11:45:43 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:12.685 11:45:43 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:12.685 11:45:43 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:12.685 11:45:43 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:12.685 11:45:43 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:12.685 11:45:43 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:12.685 11:45:43 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:12.685 11:45:43 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:12.685 11:45:43 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:12.685 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:12.685 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:22:12.685 altname enp24s0f1np1 00:22:12.685 altname ens785f1np1 00:22:12.685 inet 192.168.100.9/24 scope global mlx_0_1 00:22:12.685 valid_lft forever preferred_lft forever 00:22:12.685 11:45:43 -- nvmf/common.sh@411 -- # return 0 00:22:12.685 11:45:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:12.685 11:45:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:12.685 11:45:43 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:22:12.685 11:45:43 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:22:12.685 11:45:43 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:12.685 11:45:43 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:12.685 11:45:43 -- nvmf/common.sh@94 
-- # mapfile -t rxe_net_devs 00:22:12.685 11:45:43 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:12.685 11:45:43 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:12.685 11:45:43 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:12.685 11:45:43 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:12.685 11:45:43 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:12.685 11:45:43 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:12.685 11:45:43 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:12.685 11:45:43 -- nvmf/common.sh@105 -- # continue 2 00:22:12.685 11:45:43 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:12.685 11:45:43 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:12.685 11:45:43 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:12.685 11:45:43 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:12.685 11:45:43 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:12.685 11:45:43 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:12.685 11:45:43 -- nvmf/common.sh@105 -- # continue 2 00:22:12.685 11:45:43 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:12.685 11:45:43 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:12.685 11:45:43 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:12.685 11:45:43 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:12.685 11:45:43 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:12.685 11:45:43 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:12.685 11:45:43 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:12.685 11:45:43 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:12.685 11:45:43 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:12.685 11:45:43 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:12.685 11:45:43 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:12.685 11:45:43 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:12.685 11:45:43 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:22:12.685 192.168.100.9' 00:22:12.685 11:45:43 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:22:12.685 192.168.100.9' 00:22:12.685 11:45:43 -- nvmf/common.sh@446 -- # head -n 1 00:22:12.685 11:45:43 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:12.685 11:45:43 -- nvmf/common.sh@447 -- # tail -n +2 00:22:12.685 11:45:43 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:22:12.685 192.168.100.9' 00:22:12.685 11:45:43 -- nvmf/common.sh@447 -- # head -n 1 00:22:12.685 11:45:43 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:12.685 11:45:43 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:22:12.685 11:45:43 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:12.685 11:45:43 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:22:12.685 11:45:43 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:22:12.685 11:45:43 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:22:12.685 11:45:43 -- host/auth.sh@82 -- # nvmfappstart -L nvme_auth 00:22:12.685 11:45:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:12.685 11:45:43 -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:12.685 11:45:43 -- common/autotest_common.sh@10 -- # set +x 00:22:12.685 11:45:43 -- nvmf/common.sh@470 -- # nvmfpid=3110314 00:22:12.685 11:45:43 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 
-e 0xFFFF -L nvme_auth 00:22:12.685 11:45:43 -- nvmf/common.sh@471 -- # waitforlisten 3110314 00:22:12.685 11:45:43 -- common/autotest_common.sh@827 -- # '[' -z 3110314 ']' 00:22:12.685 11:45:43 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:12.685 11:45:43 -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:12.685 11:45:43 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:12.685 11:45:43 -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:12.685 11:45:43 -- common/autotest_common.sh@10 -- # set +x 00:22:13.622 11:45:44 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:13.622 11:45:44 -- common/autotest_common.sh@860 -- # return 0 00:22:13.622 11:45:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:13.622 11:45:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:13.622 11:45:44 -- common/autotest_common.sh@10 -- # set +x 00:22:13.622 11:45:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:13.622 11:45:44 -- host/auth.sh@83 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:22:13.622 11:45:44 -- host/auth.sh@86 -- # gen_key null 32 00:22:13.622 11:45:44 -- host/auth.sh@55 -- # local digest len file key 00:22:13.622 11:45:44 -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:13.622 11:45:44 -- host/auth.sh@56 -- # local -A digests 00:22:13.622 11:45:44 -- host/auth.sh@58 -- # digest=null 00:22:13.622 11:45:44 -- host/auth.sh@58 -- # len=32 00:22:13.622 11:45:44 -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:13.622 11:45:44 -- host/auth.sh@59 -- # key=376402a453f3fdd4f71e7a3953bbdbbe 00:22:13.622 11:45:44 -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:22:13.622 11:45:44 -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.x0v 00:22:13.622 11:45:44 -- host/auth.sh@61 -- # format_dhchap_key 376402a453f3fdd4f71e7a3953bbdbbe 0 00:22:13.622 11:45:44 -- nvmf/common.sh@708 -- # format_key DHHC-1 376402a453f3fdd4f71e7a3953bbdbbe 0 00:22:13.622 11:45:44 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:13.622 11:45:44 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:22:13.622 11:45:44 -- nvmf/common.sh@693 -- # key=376402a453f3fdd4f71e7a3953bbdbbe 00:22:13.622 11:45:44 -- nvmf/common.sh@693 -- # digest=0 00:22:13.622 11:45:44 -- nvmf/common.sh@694 -- # python - 00:22:13.622 11:45:44 -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.x0v 00:22:13.622 11:45:44 -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.x0v 00:22:13.622 11:45:44 -- host/auth.sh@86 -- # keys[0]=/tmp/spdk.key-null.x0v 00:22:13.622 11:45:44 -- host/auth.sh@86 -- # gen_key sha512 64 00:22:13.622 11:45:44 -- host/auth.sh@55 -- # local digest len file key 00:22:13.622 11:45:44 -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:13.622 11:45:44 -- host/auth.sh@56 -- # local -A digests 00:22:13.622 11:45:44 -- host/auth.sh@58 -- # digest=sha512 00:22:13.622 11:45:44 -- host/auth.sh@58 -- # len=64 00:22:13.622 11:45:44 -- host/auth.sh@59 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:13.622 11:45:44 -- host/auth.sh@59 -- # key=84ac284e989fff31a7d606e61d53e294a0833c74dcc3be0d8052db5f6544b82a 00:22:13.622 11:45:44 -- host/auth.sh@60 -- # mktemp -t spdk.key-sha512.XXX 00:22:13.622 11:45:44 -- host/auth.sh@60 -- # 
file=/tmp/spdk.key-sha512.iWG 00:22:13.622 11:45:44 -- host/auth.sh@61 -- # format_dhchap_key 84ac284e989fff31a7d606e61d53e294a0833c74dcc3be0d8052db5f6544b82a 3 00:22:13.622 11:45:44 -- nvmf/common.sh@708 -- # format_key DHHC-1 84ac284e989fff31a7d606e61d53e294a0833c74dcc3be0d8052db5f6544b82a 3 00:22:13.622 11:45:44 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:13.622 11:45:44 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:22:13.622 11:45:44 -- nvmf/common.sh@693 -- # key=84ac284e989fff31a7d606e61d53e294a0833c74dcc3be0d8052db5f6544b82a 00:22:13.622 11:45:44 -- nvmf/common.sh@693 -- # digest=3 00:22:13.622 11:45:44 -- nvmf/common.sh@694 -- # python - 00:22:13.622 11:45:44 -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha512.iWG 00:22:13.622 11:45:44 -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha512.iWG 00:22:13.622 11:45:44 -- host/auth.sh@86 -- # ckeys[0]=/tmp/spdk.key-sha512.iWG 00:22:13.622 11:45:44 -- host/auth.sh@87 -- # gen_key null 48 00:22:13.622 11:45:44 -- host/auth.sh@55 -- # local digest len file key 00:22:13.622 11:45:44 -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:13.622 11:45:44 -- host/auth.sh@56 -- # local -A digests 00:22:13.622 11:45:44 -- host/auth.sh@58 -- # digest=null 00:22:13.622 11:45:44 -- host/auth.sh@58 -- # len=48 00:22:13.622 11:45:44 -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:13.622 11:45:44 -- host/auth.sh@59 -- # key=9279a31c1c2b0d8dde0741f7a215f08f73248cd92e6b011c 00:22:13.622 11:45:44 -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:22:13.622 11:45:44 -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.Ydb 00:22:13.622 11:45:44 -- host/auth.sh@61 -- # format_dhchap_key 9279a31c1c2b0d8dde0741f7a215f08f73248cd92e6b011c 0 00:22:13.622 11:45:44 -- nvmf/common.sh@708 -- # format_key DHHC-1 9279a31c1c2b0d8dde0741f7a215f08f73248cd92e6b011c 0 00:22:13.622 11:45:44 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:13.622 11:45:44 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:22:13.622 11:45:44 -- nvmf/common.sh@693 -- # key=9279a31c1c2b0d8dde0741f7a215f08f73248cd92e6b011c 00:22:13.622 11:45:44 -- nvmf/common.sh@693 -- # digest=0 00:22:13.622 11:45:44 -- nvmf/common.sh@694 -- # python - 00:22:13.622 11:45:44 -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.Ydb 00:22:13.622 11:45:44 -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.Ydb 00:22:13.622 11:45:44 -- host/auth.sh@87 -- # keys[1]=/tmp/spdk.key-null.Ydb 00:22:13.622 11:45:44 -- host/auth.sh@87 -- # gen_key sha384 48 00:22:13.622 11:45:44 -- host/auth.sh@55 -- # local digest len file key 00:22:13.622 11:45:44 -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:13.622 11:45:44 -- host/auth.sh@56 -- # local -A digests 00:22:13.622 11:45:44 -- host/auth.sh@58 -- # digest=sha384 00:22:13.622 11:45:44 -- host/auth.sh@58 -- # len=48 00:22:13.622 11:45:44 -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:13.622 11:45:44 -- host/auth.sh@59 -- # key=9a90fc3fd867702e9bd60e268f5a806cd89d61ffe1ce4709 00:22:13.622 11:45:44 -- host/auth.sh@60 -- # mktemp -t spdk.key-sha384.XXX 00:22:13.622 11:45:44 -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha384.n18 00:22:13.622 11:45:44 -- host/auth.sh@61 -- # format_dhchap_key 9a90fc3fd867702e9bd60e268f5a806cd89d61ffe1ce4709 2 00:22:13.622 11:45:44 -- nvmf/common.sh@708 -- # format_key DHHC-1 9a90fc3fd867702e9bd60e268f5a806cd89d61ffe1ce4709 2 00:22:13.622 11:45:44 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:13.622 11:45:44 -- 
nvmf/common.sh@693 -- # prefix=DHHC-1 00:22:13.622 11:45:44 -- nvmf/common.sh@693 -- # key=9a90fc3fd867702e9bd60e268f5a806cd89d61ffe1ce4709 00:22:13.622 11:45:44 -- nvmf/common.sh@693 -- # digest=2 00:22:13.622 11:45:44 -- nvmf/common.sh@694 -- # python - 00:22:13.622 11:45:44 -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha384.n18 00:22:13.622 11:45:44 -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha384.n18 00:22:13.622 11:45:44 -- host/auth.sh@87 -- # ckeys[1]=/tmp/spdk.key-sha384.n18 00:22:13.622 11:45:44 -- host/auth.sh@88 -- # gen_key sha256 32 00:22:13.622 11:45:44 -- host/auth.sh@55 -- # local digest len file key 00:22:13.622 11:45:44 -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:13.622 11:45:44 -- host/auth.sh@56 -- # local -A digests 00:22:13.622 11:45:44 -- host/auth.sh@58 -- # digest=sha256 00:22:13.622 11:45:44 -- host/auth.sh@58 -- # len=32 00:22:13.622 11:45:44 -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:13.881 11:45:44 -- host/auth.sh@59 -- # key=e55151d2bc313401a9734464e03749fb 00:22:13.881 11:45:44 -- host/auth.sh@60 -- # mktemp -t spdk.key-sha256.XXX 00:22:13.881 11:45:44 -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha256.XNG 00:22:13.881 11:45:44 -- host/auth.sh@61 -- # format_dhchap_key e55151d2bc313401a9734464e03749fb 1 00:22:13.881 11:45:44 -- nvmf/common.sh@708 -- # format_key DHHC-1 e55151d2bc313401a9734464e03749fb 1 00:22:13.881 11:45:44 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:13.881 11:45:44 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:22:13.881 11:45:44 -- nvmf/common.sh@693 -- # key=e55151d2bc313401a9734464e03749fb 00:22:13.881 11:45:44 -- nvmf/common.sh@693 -- # digest=1 00:22:13.881 11:45:44 -- nvmf/common.sh@694 -- # python - 00:22:13.881 11:45:44 -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha256.XNG 00:22:13.881 11:45:44 -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha256.XNG 00:22:13.881 11:45:44 -- host/auth.sh@88 -- # keys[2]=/tmp/spdk.key-sha256.XNG 00:22:13.881 11:45:44 -- host/auth.sh@88 -- # gen_key sha256 32 00:22:13.881 11:45:44 -- host/auth.sh@55 -- # local digest len file key 00:22:13.881 11:45:44 -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:13.881 11:45:44 -- host/auth.sh@56 -- # local -A digests 00:22:13.881 11:45:44 -- host/auth.sh@58 -- # digest=sha256 00:22:13.881 11:45:44 -- host/auth.sh@58 -- # len=32 00:22:13.881 11:45:44 -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:13.881 11:45:44 -- host/auth.sh@59 -- # key=1f0a27884f58d1cb01476cf0f1af4f55 00:22:13.881 11:45:44 -- host/auth.sh@60 -- # mktemp -t spdk.key-sha256.XXX 00:22:13.881 11:45:44 -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha256.yg4 00:22:13.881 11:45:44 -- host/auth.sh@61 -- # format_dhchap_key 1f0a27884f58d1cb01476cf0f1af4f55 1 00:22:13.881 11:45:44 -- nvmf/common.sh@708 -- # format_key DHHC-1 1f0a27884f58d1cb01476cf0f1af4f55 1 00:22:13.881 11:45:44 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:13.881 11:45:44 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:22:13.881 11:45:44 -- nvmf/common.sh@693 -- # key=1f0a27884f58d1cb01476cf0f1af4f55 00:22:13.881 11:45:44 -- nvmf/common.sh@693 -- # digest=1 00:22:13.881 11:45:44 -- nvmf/common.sh@694 -- # python - 00:22:13.881 11:45:44 -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha256.yg4 00:22:13.881 11:45:44 -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha256.yg4 00:22:13.881 11:45:44 -- host/auth.sh@88 -- # ckeys[2]=/tmp/spdk.key-sha256.yg4 00:22:13.881 11:45:44 -- 
host/auth.sh@89 -- # gen_key sha384 48 00:22:13.881 11:45:44 -- host/auth.sh@55 -- # local digest len file key 00:22:13.881 11:45:44 -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:13.881 11:45:44 -- host/auth.sh@56 -- # local -A digests 00:22:13.881 11:45:44 -- host/auth.sh@58 -- # digest=sha384 00:22:13.881 11:45:44 -- host/auth.sh@58 -- # len=48 00:22:13.881 11:45:44 -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:13.881 11:45:44 -- host/auth.sh@59 -- # key=3e1ab9f8bd822adc090dc48710c40978286fd6a32d45c467 00:22:13.881 11:45:44 -- host/auth.sh@60 -- # mktemp -t spdk.key-sha384.XXX 00:22:13.881 11:45:44 -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha384.oMz 00:22:13.881 11:45:44 -- host/auth.sh@61 -- # format_dhchap_key 3e1ab9f8bd822adc090dc48710c40978286fd6a32d45c467 2 00:22:13.881 11:45:44 -- nvmf/common.sh@708 -- # format_key DHHC-1 3e1ab9f8bd822adc090dc48710c40978286fd6a32d45c467 2 00:22:13.881 11:45:44 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:13.881 11:45:44 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:22:13.881 11:45:44 -- nvmf/common.sh@693 -- # key=3e1ab9f8bd822adc090dc48710c40978286fd6a32d45c467 00:22:13.881 11:45:44 -- nvmf/common.sh@693 -- # digest=2 00:22:13.881 11:45:44 -- nvmf/common.sh@694 -- # python - 00:22:13.881 11:45:44 -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha384.oMz 00:22:13.881 11:45:44 -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha384.oMz 00:22:13.881 11:45:44 -- host/auth.sh@89 -- # keys[3]=/tmp/spdk.key-sha384.oMz 00:22:13.881 11:45:44 -- host/auth.sh@89 -- # gen_key null 32 00:22:13.881 11:45:44 -- host/auth.sh@55 -- # local digest len file key 00:22:13.881 11:45:44 -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:13.881 11:45:44 -- host/auth.sh@56 -- # local -A digests 00:22:13.881 11:45:44 -- host/auth.sh@58 -- # digest=null 00:22:13.881 11:45:44 -- host/auth.sh@58 -- # len=32 00:22:13.881 11:45:44 -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:13.881 11:45:44 -- host/auth.sh@59 -- # key=dd3ccd40ee1d93a10c197ec21934e686 00:22:13.881 11:45:44 -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:22:13.881 11:45:44 -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.iP8 00:22:13.881 11:45:44 -- host/auth.sh@61 -- # format_dhchap_key dd3ccd40ee1d93a10c197ec21934e686 0 00:22:13.881 11:45:44 -- nvmf/common.sh@708 -- # format_key DHHC-1 dd3ccd40ee1d93a10c197ec21934e686 0 00:22:13.881 11:45:44 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:13.881 11:45:44 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:22:13.881 11:45:44 -- nvmf/common.sh@693 -- # key=dd3ccd40ee1d93a10c197ec21934e686 00:22:13.881 11:45:44 -- nvmf/common.sh@693 -- # digest=0 00:22:13.881 11:45:44 -- nvmf/common.sh@694 -- # python - 00:22:13.881 11:45:44 -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.iP8 00:22:13.881 11:45:44 -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.iP8 00:22:14.139 11:45:44 -- host/auth.sh@89 -- # ckeys[3]=/tmp/spdk.key-null.iP8 00:22:14.139 11:45:44 -- host/auth.sh@90 -- # gen_key sha512 64 00:22:14.139 11:45:44 -- host/auth.sh@55 -- # local digest len file key 00:22:14.139 11:45:44 -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:14.139 11:45:44 -- host/auth.sh@56 -- # local -A digests 00:22:14.139 11:45:44 -- host/auth.sh@58 -- # digest=sha512 00:22:14.139 11:45:44 -- host/auth.sh@58 -- # len=64 00:22:14.139 11:45:44 -- host/auth.sh@59 -- # xxd -p -c0 -l 32 /dev/urandom 
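Every gen_key call in this block follows the same pattern: draw len/2 random bytes as a hex string, stage it in a private temp file, and wrap it in the DHHC-1 envelope. A condensed sketch for one null-digest key (the envelope itself is produced by the inline "python -" helper above, whose body the xtrace does not show):

    len=32
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters
    file=$(mktemp -t spdk.key-null.XXX)
    # format_dhchap_key writes DHHC-1:<digest>:<encoded key>: into $file,
    # where digest 0/1/2/3 selects null/sha256/sha384/sha512
    chmod 0600 "$file"                               # keep the key private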
00:22:14.139 11:45:44 -- host/auth.sh@59 -- # key=a091d52c0b7574a56f3e2db96d0cd7de2fea29992383308b84b29b7d41424811 00:22:14.139 11:45:44 -- host/auth.sh@60 -- # mktemp -t spdk.key-sha512.XXX 00:22:14.139 11:45:44 -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha512.Rc5 00:22:14.139 11:45:44 -- host/auth.sh@61 -- # format_dhchap_key a091d52c0b7574a56f3e2db96d0cd7de2fea29992383308b84b29b7d41424811 3 00:22:14.139 11:45:44 -- nvmf/common.sh@708 -- # format_key DHHC-1 a091d52c0b7574a56f3e2db96d0cd7de2fea29992383308b84b29b7d41424811 3 00:22:14.139 11:45:44 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:14.139 11:45:44 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:22:14.139 11:45:44 -- nvmf/common.sh@693 -- # key=a091d52c0b7574a56f3e2db96d0cd7de2fea29992383308b84b29b7d41424811 00:22:14.139 11:45:44 -- nvmf/common.sh@693 -- # digest=3 00:22:14.139 11:45:44 -- nvmf/common.sh@694 -- # python - 00:22:14.139 11:45:44 -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha512.Rc5 00:22:14.139 11:45:44 -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha512.Rc5 00:22:14.139 11:45:44 -- host/auth.sh@90 -- # keys[4]=/tmp/spdk.key-sha512.Rc5 00:22:14.139 11:45:44 -- host/auth.sh@90 -- # ckeys[4]= 00:22:14.139 11:45:44 -- host/auth.sh@92 -- # waitforlisten 3110314 00:22:14.139 11:45:44 -- common/autotest_common.sh@827 -- # '[' -z 3110314 ']' 00:22:14.139 11:45:44 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:14.139 11:45:44 -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:14.139 11:45:44 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:14.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:14.139 11:45:44 -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:14.139 11:45:44 -- common/autotest_common.sh@10 -- # set +x 00:22:14.139 11:45:44 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:14.139 11:45:44 -- common/autotest_common.sh@860 -- # return 0 00:22:14.139 11:45:44 -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:22:14.139 11:45:44 -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.x0v 00:22:14.139 11:45:44 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.139 11:45:44 -- common/autotest_common.sh@10 -- # set +x 00:22:14.139 11:45:44 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.139 11:45:44 -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha512.iWG ]] 00:22:14.139 11:45:44 -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.iWG 00:22:14.139 11:45:44 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.139 11:45:44 -- common/autotest_common.sh@10 -- # set +x 00:22:14.398 11:45:44 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.398 11:45:44 -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:22:14.398 11:45:44 -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Ydb 00:22:14.398 11:45:44 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.398 11:45:44 -- common/autotest_common.sh@10 -- # set +x 00:22:14.398 11:45:44 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.398 11:45:44 -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha384.n18 ]] 00:22:14.398 11:45:44 -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.n18 00:22:14.398 11:45:44 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.398 11:45:44 -- common/autotest_common.sh@10 -- # set +x 
00:22:14.398 11:45:44 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.398 11:45:44 -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:22:14.398 11:45:44 -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.XNG 00:22:14.398 11:45:44 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.398 11:45:44 -- common/autotest_common.sh@10 -- # set +x 00:22:14.398 11:45:44 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.398 11:45:44 -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha256.yg4 ]] 00:22:14.398 11:45:44 -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.yg4 00:22:14.398 11:45:44 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.398 11:45:44 -- common/autotest_common.sh@10 -- # set +x 00:22:14.398 11:45:44 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.398 11:45:44 -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:22:14.398 11:45:44 -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.oMz 00:22:14.398 11:45:44 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.398 11:45:44 -- common/autotest_common.sh@10 -- # set +x 00:22:14.398 11:45:44 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.398 11:45:44 -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-null.iP8 ]] 00:22:14.398 11:45:44 -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.iP8 00:22:14.398 11:45:44 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.398 11:45:44 -- common/autotest_common.sh@10 -- # set +x 00:22:14.398 11:45:44 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.398 11:45:44 -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:22:14.398 11:45:44 -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Rc5 00:22:14.398 11:45:44 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.398 11:45:44 -- common/autotest_common.sh@10 -- # set +x 00:22:14.398 11:45:44 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.398 11:45:44 -- host/auth.sh@95 -- # [[ -n '' ]] 00:22:14.398 11:45:44 -- host/auth.sh@98 -- # nvmet_auth_init 00:22:14.398 11:45:44 -- host/auth.sh@35 -- # get_main_ns_ip 00:22:14.398 11:45:44 -- nvmf/common.sh@717 -- # local ip 00:22:14.398 11:45:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:14.398 11:45:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:14.398 11:45:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:14.398 11:45:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:14.398 11:45:44 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:14.398 11:45:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:14.398 11:45:44 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:14.398 11:45:44 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:14.398 11:45:44 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:14.398 11:45:44 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:22:14.398 11:45:44 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:22:14.398 11:45:44 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:22:14.398 11:45:44 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:14.398 11:45:44 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:14.398 11:45:44 
-- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:14.398 11:45:44 -- nvmf/common.sh@628 -- # local block nvme 00:22:14.398 11:45:44 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:22:14.398 11:45:44 -- nvmf/common.sh@631 -- # modprobe nvmet 00:22:14.398 11:45:44 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:14.398 11:45:44 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:22:17.689 Waiting for block devices as requested 00:22:17.689 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:22:17.689 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:22:17.689 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:22:17.689 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:22:17.689 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:22:17.948 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:22:17.948 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:22:17.948 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:22:18.207 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:22:18.207 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:22:18.207 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:22:18.465 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:22:18.465 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:22:18.465 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:22:18.724 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:22:18.724 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:22:18.724 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:22:19.659 11:45:50 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:22:19.659 11:45:50 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:19.659 11:45:50 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:22:19.659 11:45:50 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:22:19.659 11:45:50 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:19.659 11:45:50 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:22:19.659 11:45:50 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:22:19.659 11:45:50 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:19.659 11:45:50 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:19.659 No valid GPT data, bailing 00:22:19.659 11:45:50 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:19.659 11:45:50 -- scripts/common.sh@391 -- # pt= 00:22:19.659 11:45:50 -- scripts/common.sh@392 -- # return 1 00:22:19.659 11:45:50 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:22:19.659 11:45:50 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:22:19.659 11:45:50 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:19.659 11:45:50 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:19.659 11:45:50 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:19.659 11:45:50 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:22:19.659 11:45:50 -- nvmf/common.sh@656 -- # echo 1 00:22:19.659 11:45:50 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:22:19.659 11:45:50 -- nvmf/common.sh@658 -- # echo 1 00:22:19.659 11:45:50 -- nvmf/common.sh@660 -- # echo 192.168.100.8 00:22:19.659 11:45:50 -- nvmf/common.sh@661 -- # echo rdma 00:22:19.659 11:45:50 -- nvmf/common.sh@662 -- # echo 4420 00:22:19.659 11:45:50 -- nvmf/common.sh@663 -- # echo ipv4 00:22:19.659 11:45:50 -- 
nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:19.659 11:45:50 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:22:19.659 00:22:19.659 Discovery Log Number of Records 2, Generation counter 2 00:22:19.659 =====Discovery Log Entry 0====== 00:22:19.659 trtype: rdma 00:22:19.659 adrfam: ipv4 00:22:19.659 subtype: current discovery subsystem 00:22:19.659 treq: not specified, sq flow control disable supported 00:22:19.659 portid: 1 00:22:19.659 trsvcid: 4420 00:22:19.659 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:19.659 traddr: 192.168.100.8 00:22:19.659 eflags: none 00:22:19.659 rdma_prtype: not specified 00:22:19.659 rdma_qptype: connected 00:22:19.659 rdma_cms: rdma-cm 00:22:19.659 rdma_pkey: 0x0000 00:22:19.659 =====Discovery Log Entry 1====== 00:22:19.659 trtype: rdma 00:22:19.659 adrfam: ipv4 00:22:19.659 subtype: nvme subsystem 00:22:19.659 treq: not specified, sq flow control disable supported 00:22:19.659 portid: 1 00:22:19.659 trsvcid: 4420 00:22:19.659 subnqn: nqn.2024-02.io.spdk:cnode0 00:22:19.659 traddr: 192.168.100.8 00:22:19.659 eflags: none 00:22:19.659 rdma_prtype: not specified 00:22:19.659 rdma_qptype: connected 00:22:19.659 rdma_cms: rdma-cm 00:22:19.659 rdma_pkey: 0x0000 00:22:19.659 11:45:50 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:19.659 11:45:50 -- host/auth.sh@37 -- # echo 0 00:22:19.659 11:45:50 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:22:19.919 11:45:50 -- host/auth.sh@101 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:19.919 11:45:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:19.919 11:45:50 -- host/auth.sh@44 -- # digest=sha256 00:22:19.919 11:45:50 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:19.919 11:45:50 -- host/auth.sh@44 -- # keyid=1 00:22:19.919 11:45:50 -- host/auth.sh@45 -- # key=DHHC-1:00:OTI3OWEzMWMxYzJiMGQ4ZGRlMDc0MWY3YTIxNWYwOGY3MzI0OGNkOTJlNmIwMTFjb3wG0Q==: 00:22:19.919 11:45:50 -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: 00:22:19.919 11:45:50 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:19.919 11:45:50 -- host/auth.sh@49 -- # echo ffdhe2048 00:22:19.919 11:45:50 -- host/auth.sh@50 -- # echo DHHC-1:00:OTI3OWEzMWMxYzJiMGQ4ZGRlMDc0MWY3YTIxNWYwOGY3MzI0OGNkOTJlNmIwMTFjb3wG0Q==: 00:22:19.919 11:45:50 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: ]] 00:22:19.919 11:45:50 -- host/auth.sh@51 -- # echo DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: 00:22:19.919 11:45:50 -- host/auth.sh@106 -- # IFS=, 00:22:19.919 11:45:50 -- host/auth.sh@107 -- # printf %s sha256,sha384,sha512 00:22:19.919 11:45:50 -- host/auth.sh@106 -- # IFS=, 00:22:19.919 11:45:50 -- host/auth.sh@107 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:19.919 11:45:50 -- host/auth.sh@106 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:22:19.919 11:45:50 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:19.919 11:45:50 -- host/auth.sh@70 -- # 
digest=sha256,sha384,sha512 00:22:19.919 11:45:50 -- host/auth.sh@70 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:19.919 11:45:50 -- host/auth.sh@70 -- # keyid=1 00:22:19.919 11:45:50 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:19.919 11:45:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:19.919 11:45:50 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.919 11:45:50 -- common/autotest_common.sh@10 -- # set +x 00:22:19.919 11:45:50 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.919 11:45:50 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:19.919 11:45:50 -- nvmf/common.sh@717 -- # local ip 00:22:19.919 11:45:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:19.919 11:45:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:19.919 11:45:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:19.919 11:45:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:19.919 11:45:50 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:19.919 11:45:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:19.919 11:45:50 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:19.919 11:45:50 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:19.919 11:45:50 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:19.919 11:45:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:19.919 11:45:50 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.919 11:45:50 -- common/autotest_common.sh@10 -- # set +x 00:22:19.919 nvme0n1 00:22:19.919 11:45:50 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.919 11:45:50 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:19.919 11:45:50 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.919 11:45:50 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:19.919 11:45:50 -- common/autotest_common.sh@10 -- # set +x 00:22:19.919 11:45:50 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.179 11:45:50 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.179 11:45:50 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:20.179 11:45:50 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.179 11:45:50 -- common/autotest_common.sh@10 -- # set +x 00:22:20.179 11:45:50 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.179 11:45:50 -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:22:20.179 11:45:50 -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:22:20.179 11:45:50 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:20.179 11:45:50 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:22:20.179 11:45:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:20.179 11:45:50 -- host/auth.sh@44 -- # digest=sha256 00:22:20.179 11:45:50 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:20.179 11:45:50 -- host/auth.sh@44 -- # keyid=0 00:22:20.179 11:45:50 -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc2NDAyYTQ1M2YzZmRkNGY3MWU3YTM5NTNiYmRiYmWu2X4P: 00:22:20.179 11:45:50 -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: 00:22:20.179 11:45:50 -- 
host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:20.179 11:45:50 -- host/auth.sh@49 -- # echo ffdhe2048 00:22:20.179 11:45:50 -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc2NDAyYTQ1M2YzZmRkNGY3MWU3YTM5NTNiYmRiYmWu2X4P: 00:22:20.179 11:45:50 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: ]] 00:22:20.179 11:45:50 -- host/auth.sh@51 -- # echo DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: 00:22:20.179 11:45:50 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 0 00:22:20.179 11:45:50 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:20.179 11:45:50 -- host/auth.sh@70 -- # digest=sha256 00:22:20.179 11:45:50 -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:22:20.179 11:45:50 -- host/auth.sh@70 -- # keyid=0 00:22:20.179 11:45:50 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:20.179 11:45:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:20.179 11:45:50 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.179 11:45:50 -- common/autotest_common.sh@10 -- # set +x 00:22:20.179 11:45:50 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.179 11:45:50 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:20.179 11:45:50 -- nvmf/common.sh@717 -- # local ip 00:22:20.179 11:45:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:20.179 11:45:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:20.179 11:45:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:20.179 11:45:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:20.179 11:45:50 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:20.179 11:45:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:20.179 11:45:50 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:20.179 11:45:50 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:20.179 11:45:50 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:20.179 11:45:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:20.179 11:45:50 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.179 11:45:50 -- common/autotest_common.sh@10 -- # set +x 00:22:20.179 nvme0n1 00:22:20.179 11:45:50 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.179 11:45:50 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:20.179 11:45:50 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:20.179 11:45:50 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.179 11:45:50 -- common/autotest_common.sh@10 -- # set +x 00:22:20.179 11:45:50 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.438 11:45:50 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.438 11:45:50 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:20.438 11:45:50 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.438 11:45:50 -- common/autotest_common.sh@10 -- # set +x 00:22:20.438 11:45:50 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.439 11:45:50 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:20.439 11:45:50 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:20.439 11:45:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
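On the target side, each nvmet_auth_set_key call pushes the digest, DH group, and key pair into the kernel host entry created by nvmet_auth_init. The echoes in the trace are redirected into that host's configfs attributes; a sketch assuming the kernel nvmet DH-HMAC-CHAP attribute names, since the redirections themselves are hidden by the xtrace:

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"      # assumed attrs: dhchap_hash,
    echo ffdhe2048      > "$host/dhchap_dhgroup"   # dhchap_dhgroup, dhchap_key,
    echo "$key"         > "$host/dhchap_key"       # and dhchap_ctrl_key
    [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"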
00:22:20.439 11:45:50 -- host/auth.sh@44 -- # digest=sha256 00:22:20.439 11:45:50 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:20.439 11:45:50 -- host/auth.sh@44 -- # keyid=1 00:22:20.439 11:45:50 -- host/auth.sh@45 -- # key=DHHC-1:00:OTI3OWEzMWMxYzJiMGQ4ZGRlMDc0MWY3YTIxNWYwOGY3MzI0OGNkOTJlNmIwMTFjb3wG0Q==: 00:22:20.439 11:45:50 -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: 00:22:20.439 11:45:50 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:20.439 11:45:50 -- host/auth.sh@49 -- # echo ffdhe2048 00:22:20.439 11:45:50 -- host/auth.sh@50 -- # echo DHHC-1:00:OTI3OWEzMWMxYzJiMGQ4ZGRlMDc0MWY3YTIxNWYwOGY3MzI0OGNkOTJlNmIwMTFjb3wG0Q==: 00:22:20.439 11:45:50 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: ]] 00:22:20.439 11:45:50 -- host/auth.sh@51 -- # echo DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: 00:22:20.439 11:45:50 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 1 00:22:20.439 11:45:50 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:20.439 11:45:50 -- host/auth.sh@70 -- # digest=sha256 00:22:20.439 11:45:50 -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:22:20.439 11:45:50 -- host/auth.sh@70 -- # keyid=1 00:22:20.439 11:45:50 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:20.439 11:45:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:20.439 11:45:50 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.439 11:45:50 -- common/autotest_common.sh@10 -- # set +x 00:22:20.439 11:45:50 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.439 11:45:50 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:20.439 11:45:50 -- nvmf/common.sh@717 -- # local ip 00:22:20.439 11:45:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:20.439 11:45:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:20.439 11:45:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:20.439 11:45:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:20.439 11:45:50 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:20.439 11:45:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:20.439 11:45:50 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:20.439 11:45:50 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:20.439 11:45:50 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:20.439 11:45:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.439 11:45:50 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.439 11:45:50 -- common/autotest_common.sh@10 -- # set +x 00:22:20.439 nvme0n1 00:22:20.439 11:45:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.439 11:45:51 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:20.439 11:45:51 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:20.439 11:45:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.439 11:45:51 -- common/autotest_common.sh@10 -- # set +x 00:22:20.439 11:45:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.698 11:45:51 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.698 11:45:51 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
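Each connect_authenticate round that follows is the same four-step check: pin the initiator's DH-HMAC-CHAP digest and DH group, attach with the key pair under test, confirm the controller came up, and detach. Condensed from the trace, using the addresses, NQNs, and the key2/ckey2 pair exercised in the next round:

    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key key2 --dhchap-ctrlr-key ckey2
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # expect nvme0
    rpc_cmd bdev_nvme_detach_controller nvme0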
00:22:20.698 11:45:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.698 11:45:51 -- common/autotest_common.sh@10 -- # set +x 00:22:20.698 11:45:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.698 11:45:51 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:20.698 11:45:51 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:22:20.698 11:45:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:20.698 11:45:51 -- host/auth.sh@44 -- # digest=sha256 00:22:20.698 11:45:51 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:20.698 11:45:51 -- host/auth.sh@44 -- # keyid=2 00:22:20.698 11:45:51 -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU1MTUxZDJiYzMxMzQwMWE5NzM0NDY0ZTAzNzQ5ZmLrpa4Q: 00:22:20.698 11:45:51 -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: 00:22:20.698 11:45:51 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:20.698 11:45:51 -- host/auth.sh@49 -- # echo ffdhe2048 00:22:20.698 11:45:51 -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU1MTUxZDJiYzMxMzQwMWE5NzM0NDY0ZTAzNzQ5ZmLrpa4Q: 00:22:20.698 11:45:51 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: ]] 00:22:20.698 11:45:51 -- host/auth.sh@51 -- # echo DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: 00:22:20.698 11:45:51 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 2 00:22:20.698 11:45:51 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:20.698 11:45:51 -- host/auth.sh@70 -- # digest=sha256 00:22:20.698 11:45:51 -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:22:20.698 11:45:51 -- host/auth.sh@70 -- # keyid=2 00:22:20.698 11:45:51 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:20.698 11:45:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:20.698 11:45:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.698 11:45:51 -- common/autotest_common.sh@10 -- # set +x 00:22:20.698 11:45:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.698 11:45:51 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:20.698 11:45:51 -- nvmf/common.sh@717 -- # local ip 00:22:20.698 11:45:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:20.698 11:45:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:20.698 11:45:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:20.698 11:45:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:20.698 11:45:51 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:20.698 11:45:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:20.698 11:45:51 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:20.698 11:45:51 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:20.698 11:45:51 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:20.698 11:45:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:20.698 11:45:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.698 11:45:51 -- common/autotest_common.sh@10 -- # set +x 00:22:20.698 nvme0n1 00:22:20.698 11:45:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.698 11:45:51 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:20.698 11:45:51 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:20.698 11:45:51 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.698 11:45:51 -- common/autotest_common.sh@10 -- # set +x 00:22:20.698 11:45:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.698 11:45:51 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.698 11:45:51 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:20.698 11:45:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.698 11:45:51 -- common/autotest_common.sh@10 -- # set +x 00:22:20.962 11:45:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.962 11:45:51 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:20.962 11:45:51 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:22:20.962 11:45:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:20.962 11:45:51 -- host/auth.sh@44 -- # digest=sha256 00:22:20.962 11:45:51 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:20.962 11:45:51 -- host/auth.sh@44 -- # keyid=3 00:22:20.962 11:45:51 -- host/auth.sh@45 -- # key=DHHC-1:02:M2UxYWI5ZjhiZDgyMmFkYzA5MGRjNDg3MTBjNDA5NzgyODZmZDZhMzJkNDVjNDY3kKIcSw==: 00:22:20.962 11:45:51 -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: 00:22:20.962 11:45:51 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:20.962 11:45:51 -- host/auth.sh@49 -- # echo ffdhe2048 00:22:20.962 11:45:51 -- host/auth.sh@50 -- # echo DHHC-1:02:M2UxYWI5ZjhiZDgyMmFkYzA5MGRjNDg3MTBjNDA5NzgyODZmZDZhMzJkNDVjNDY3kKIcSw==: 00:22:20.962 11:45:51 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: ]] 00:22:20.962 11:45:51 -- host/auth.sh@51 -- # echo DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: 00:22:20.962 11:45:51 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 3 00:22:20.962 11:45:51 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:20.962 11:45:51 -- host/auth.sh@70 -- # digest=sha256 00:22:20.962 11:45:51 -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:22:20.962 11:45:51 -- host/auth.sh@70 -- # keyid=3 00:22:20.962 11:45:51 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:20.962 11:45:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:20.962 11:45:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.962 11:45:51 -- common/autotest_common.sh@10 -- # set +x 00:22:20.962 11:45:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.962 11:45:51 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:20.962 11:45:51 -- nvmf/common.sh@717 -- # local ip 00:22:20.962 11:45:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:20.962 11:45:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:20.962 11:45:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:20.962 11:45:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:20.962 11:45:51 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:20.962 11:45:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:20.962 11:45:51 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:20.963 11:45:51 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:20.963 11:45:51 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:20.963 11:45:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:20.963 11:45:51 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.963 11:45:51 -- common/autotest_common.sh@10 -- # set +x 00:22:20.963 nvme0n1 00:22:20.963 11:45:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.963 11:45:51 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:20.963 11:45:51 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:20.963 11:45:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.963 11:45:51 -- common/autotest_common.sh@10 -- # set +x 00:22:20.963 11:45:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.963 11:45:51 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.963 11:45:51 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:20.963 11:45:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.963 11:45:51 -- common/autotest_common.sh@10 -- # set +x 00:22:21.221 11:45:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.221 11:45:51 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:21.221 11:45:51 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:22:21.221 11:45:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:21.221 11:45:51 -- host/auth.sh@44 -- # digest=sha256 00:22:21.221 11:45:51 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:21.221 11:45:51 -- host/auth.sh@44 -- # keyid=4 00:22:21.221 11:45:51 -- host/auth.sh@45 -- # key=DHHC-1:03:YTA5MWQ1MmMwYjc1NzRhNTZmM2UyZGI5NmQwY2Q3ZGUyZmVhMjk5OTIzODMzMDhiODRiMjliN2Q0MTQyNDgxMegGgZI=: 00:22:21.221 11:45:51 -- host/auth.sh@46 -- # ckey= 00:22:21.221 11:45:51 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:21.221 11:45:51 -- host/auth.sh@49 -- # echo ffdhe2048 00:22:21.221 11:45:51 -- host/auth.sh@50 -- # echo DHHC-1:03:YTA5MWQ1MmMwYjc1NzRhNTZmM2UyZGI5NmQwY2Q3ZGUyZmVhMjk5OTIzODMzMDhiODRiMjliN2Q0MTQyNDgxMegGgZI=: 00:22:21.221 11:45:51 -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:21.221 11:45:51 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 4 00:22:21.221 11:45:51 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:21.221 11:45:51 -- host/auth.sh@70 -- # digest=sha256 00:22:21.221 11:45:51 -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:22:21.221 11:45:51 -- host/auth.sh@70 -- # keyid=4 00:22:21.221 11:45:51 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:21.221 11:45:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:21.221 11:45:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.221 11:45:51 -- common/autotest_common.sh@10 -- # set +x 00:22:21.221 11:45:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.221 11:45:51 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:21.221 11:45:51 -- nvmf/common.sh@717 -- # local ip 00:22:21.221 11:45:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:21.221 11:45:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:21.221 11:45:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:21.221 11:45:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:21.221 11:45:51 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:21.221 11:45:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:21.221 11:45:51 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:21.221 11:45:51 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:21.221 11:45:51 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:21.221 11:45:51 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:21.221 11:45:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.221 11:45:51 -- common/autotest_common.sh@10 -- # set +x 00:22:21.222 nvme0n1 00:22:21.222 11:45:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.222 11:45:51 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:21.222 11:45:51 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:21.222 11:45:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.222 11:45:51 -- common/autotest_common.sh@10 -- # set +x 00:22:21.222 11:45:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.222 11:45:51 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.222 11:45:51 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:21.222 11:45:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.222 11:45:51 -- common/autotest_common.sh@10 -- # set +x 00:22:21.481 11:45:51 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.481 11:45:51 -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:22:21.481 11:45:51 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:21.481 11:45:51 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:22:21.481 11:45:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:21.481 11:45:51 -- host/auth.sh@44 -- # digest=sha256 00:22:21.481 11:45:51 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:21.481 11:45:51 -- host/auth.sh@44 -- # keyid=0 00:22:21.481 11:45:51 -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc2NDAyYTQ1M2YzZmRkNGY3MWU3YTM5NTNiYmRiYmWu2X4P: 00:22:21.481 11:45:51 -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: 00:22:21.481 11:45:51 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:21.481 11:45:51 -- host/auth.sh@49 -- # echo ffdhe3072 00:22:21.481 11:45:51 -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc2NDAyYTQ1M2YzZmRkNGY3MWU3YTM5NTNiYmRiYmWu2X4P: 00:22:21.481 11:45:51 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: ]] 00:22:21.481 11:45:51 -- host/auth.sh@51 -- # echo DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: 00:22:21.481 11:45:51 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 0 00:22:21.481 11:45:51 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:21.481 11:45:51 -- host/auth.sh@70 -- # digest=sha256 00:22:21.481 11:45:51 -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:22:21.481 11:45:51 -- host/auth.sh@70 -- # keyid=0 00:22:21.481 11:45:51 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:21.481 11:45:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:21.481 11:45:51 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.481 11:45:51 -- common/autotest_common.sh@10 -- # set +x 00:22:21.481 11:45:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.481 11:45:52 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:21.481 11:45:52 -- nvmf/common.sh@717 -- # local ip 00:22:21.481 11:45:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:21.481 11:45:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:21.481 11:45:52 -- nvmf/common.sh@720 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:21.481 11:45:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:21.481 11:45:52 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:21.481 11:45:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:21.481 11:45:52 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:21.481 11:45:52 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:21.481 11:45:52 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:21.481 11:45:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.481 11:45:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.481 11:45:52 -- common/autotest_common.sh@10 -- # set +x 00:22:21.481 nvme0n1 00:22:21.481 11:45:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.481 11:45:52 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:21.481 11:45:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.481 11:45:52 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:21.481 11:45:52 -- common/autotest_common.sh@10 -- # set +x 00:22:21.481 11:45:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.740 11:45:52 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.740 11:45:52 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:21.740 11:45:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.740 11:45:52 -- common/autotest_common.sh@10 -- # set +x 00:22:21.740 11:45:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.740 11:45:52 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:21.740 11:45:52 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:22:21.740 11:45:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:21.740 11:45:52 -- host/auth.sh@44 -- # digest=sha256 00:22:21.740 11:45:52 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:21.740 11:45:52 -- host/auth.sh@44 -- # keyid=1 00:22:21.740 11:45:52 -- host/auth.sh@45 -- # key=DHHC-1:00:OTI3OWEzMWMxYzJiMGQ4ZGRlMDc0MWY3YTIxNWYwOGY3MzI0OGNkOTJlNmIwMTFjb3wG0Q==: 00:22:21.740 11:45:52 -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: 00:22:21.740 11:45:52 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:21.740 11:45:52 -- host/auth.sh@49 -- # echo ffdhe3072 00:22:21.740 11:45:52 -- host/auth.sh@50 -- # echo DHHC-1:00:OTI3OWEzMWMxYzJiMGQ4ZGRlMDc0MWY3YTIxNWYwOGY3MzI0OGNkOTJlNmIwMTFjb3wG0Q==: 00:22:21.740 11:45:52 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: ]] 00:22:21.740 11:45:52 -- host/auth.sh@51 -- # echo DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: 00:22:21.740 11:45:52 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 1 00:22:21.740 11:45:52 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:21.740 11:45:52 -- host/auth.sh@70 -- # digest=sha256 00:22:21.740 11:45:52 -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:22:21.740 11:45:52 -- host/auth.sh@70 -- # keyid=1 00:22:21.740 11:45:52 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:21.740 11:45:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:21.740 11:45:52 -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:21.740 11:45:52 -- common/autotest_common.sh@10 -- # set +x 00:22:21.740 11:45:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.740 11:45:52 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:21.740 11:45:52 -- nvmf/common.sh@717 -- # local ip 00:22:21.740 11:45:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:21.740 11:45:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:21.740 11:45:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:21.740 11:45:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:21.740 11:45:52 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:21.740 11:45:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:21.740 11:45:52 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:21.740 11:45:52 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:21.740 11:45:52 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:21.740 11:45:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.740 11:45:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.740 11:45:52 -- common/autotest_common.sh@10 -- # set +x 00:22:21.740 nvme0n1 00:22:21.740 11:45:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.740 11:45:52 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:21.740 11:45:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.740 11:45:52 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:21.740 11:45:52 -- common/autotest_common.sh@10 -- # set +x 00:22:22.010 11:45:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.010 11:45:52 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.010 11:45:52 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:22.010 11:45:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.010 11:45:52 -- common/autotest_common.sh@10 -- # set +x 00:22:22.010 11:45:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.010 11:45:52 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:22.010 11:45:52 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:22:22.010 11:45:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:22.010 11:45:52 -- host/auth.sh@44 -- # digest=sha256 00:22:22.010 11:45:52 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:22.010 11:45:52 -- host/auth.sh@44 -- # keyid=2 00:22:22.010 11:45:52 -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU1MTUxZDJiYzMxMzQwMWE5NzM0NDY0ZTAzNzQ5ZmLrpa4Q: 00:22:22.010 11:45:52 -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: 00:22:22.010 11:45:52 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:22.010 11:45:52 -- host/auth.sh@49 -- # echo ffdhe3072 00:22:22.010 11:45:52 -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU1MTUxZDJiYzMxMzQwMWE5NzM0NDY0ZTAzNzQ5ZmLrpa4Q: 00:22:22.010 11:45:52 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: ]] 00:22:22.010 11:45:52 -- host/auth.sh@51 -- # echo DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: 00:22:22.010 11:45:52 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 2 00:22:22.010 11:45:52 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:22.010 11:45:52 -- host/auth.sh@70 -- # digest=sha256 00:22:22.010 11:45:52 -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:22:22.010 
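Every rpc_cmd call above is bracketed by common/autotest_common.sh@559 (xtrace_disable), @10 (set +x) and a closing @587 '[[ 0 == 0 ]]' status assertion. One plausible shape of that wrapper, assuming helper names that match their trace labels; the real SPDK helper differs in detail (it drives a long-lived rpc.py session rather than one process per call), so treat this purely as a reading aid for the log:

rpc_cmd() {
	xtrace_disable                     # @559: keep RPC plumbing out of the trace
	"$rootdir/scripts/rpc.py" "$@"     # simplified; SPDK reuses one rpc.py session
	local status=$?                    # capture before anything else runs
	xtrace_restore                     # turns 'set -x' back on
	[[ $status == 0 ]]                 # surfaces in the log as '@587 [[ 0 == 0 ]]'
}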
11:45:52 -- host/auth.sh@70 -- # keyid=2 00:22:22.010 11:45:52 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:22.010 11:45:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:22.010 11:45:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.010 11:45:52 -- common/autotest_common.sh@10 -- # set +x 00:22:22.010 11:45:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.010 11:45:52 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:22.010 11:45:52 -- nvmf/common.sh@717 -- # local ip 00:22:22.010 11:45:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:22.010 11:45:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:22.010 11:45:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:22.010 11:45:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:22.010 11:45:52 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:22.010 11:45:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:22.010 11:45:52 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:22.010 11:45:52 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:22.010 11:45:52 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:22.010 11:45:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:22.010 11:45:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.010 11:45:52 -- common/autotest_common.sh@10 -- # set +x 00:22:22.010 nvme0n1 00:22:22.010 11:45:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.269 11:45:52 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:22.269 11:45:52 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:22.269 11:45:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.269 11:45:52 -- common/autotest_common.sh@10 -- # set +x 00:22:22.269 11:45:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.269 11:45:52 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.269 11:45:52 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:22.269 11:45:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.269 11:45:52 -- common/autotest_common.sh@10 -- # set +x 00:22:22.269 11:45:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.269 11:45:52 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:22.269 11:45:52 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:22:22.269 11:45:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:22.269 11:45:52 -- host/auth.sh@44 -- # digest=sha256 00:22:22.269 11:45:52 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:22.269 11:45:52 -- host/auth.sh@44 -- # keyid=3 00:22:22.269 11:45:52 -- host/auth.sh@45 -- # key=DHHC-1:02:M2UxYWI5ZjhiZDgyMmFkYzA5MGRjNDg3MTBjNDA5NzgyODZmZDZhMzJkNDVjNDY3kKIcSw==: 00:22:22.269 11:45:52 -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: 00:22:22.269 11:45:52 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:22.269 11:45:52 -- host/auth.sh@49 -- # echo ffdhe3072 00:22:22.269 11:45:52 -- host/auth.sh@50 -- # echo DHHC-1:02:M2UxYWI5ZjhiZDgyMmFkYzA5MGRjNDg3MTBjNDA5NzgyODZmZDZhMzJkNDVjNDY3kKIcSw==: 00:22:22.269 11:45:52 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: ]] 00:22:22.269 11:45:52 -- 
host/auth.sh@51 -- # echo DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: 00:22:22.269 11:45:52 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 3 00:22:22.269 11:45:52 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:22.269 11:45:52 -- host/auth.sh@70 -- # digest=sha256 00:22:22.269 11:45:52 -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:22:22.269 11:45:52 -- host/auth.sh@70 -- # keyid=3 00:22:22.269 11:45:52 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:22.269 11:45:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:22.269 11:45:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.269 11:45:52 -- common/autotest_common.sh@10 -- # set +x 00:22:22.269 11:45:52 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.269 11:45:52 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:22.269 11:45:52 -- nvmf/common.sh@717 -- # local ip 00:22:22.269 11:45:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:22.269 11:45:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:22.269 11:45:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:22.269 11:45:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:22.269 11:45:52 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:22.269 11:45:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:22.269 11:45:52 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:22.269 11:45:52 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:22.269 11:45:52 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:22.269 11:45:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:22.269 11:45:52 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.269 11:45:52 -- common/autotest_common.sh@10 -- # set +x 00:22:22.527 nvme0n1 00:22:22.527 11:45:53 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.527 11:45:53 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:22.527 11:45:53 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:22.527 11:45:53 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.527 11:45:53 -- common/autotest_common.sh@10 -- # set +x 00:22:22.527 11:45:53 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.527 11:45:53 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.527 11:45:53 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:22.527 11:45:53 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.527 11:45:53 -- common/autotest_common.sh@10 -- # set +x 00:22:22.527 11:45:53 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.527 11:45:53 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:22.527 11:45:53 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:22:22.527 11:45:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:22.527 11:45:53 -- host/auth.sh@44 -- # digest=sha256 00:22:22.527 11:45:53 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:22.527 11:45:53 -- host/auth.sh@44 -- # keyid=4 00:22:22.527 11:45:53 -- host/auth.sh@45 -- # key=DHHC-1:03:YTA5MWQ1MmMwYjc1NzRhNTZmM2UyZGI5NmQwY2Q3ZGUyZmVhMjk5OTIzODMzMDhiODRiMjliN2Q0MTQyNDgxMegGgZI=: 00:22:22.527 11:45:53 -- host/auth.sh@46 -- # ckey= 00:22:22.527 11:45:53 -- host/auth.sh@48 -- # echo 
'hmac(sha256)' 00:22:22.527 11:45:53 -- host/auth.sh@49 -- # echo ffdhe3072 00:22:22.528 11:45:53 -- host/auth.sh@50 -- # echo DHHC-1:03:YTA5MWQ1MmMwYjc1NzRhNTZmM2UyZGI5NmQwY2Q3ZGUyZmVhMjk5OTIzODMzMDhiODRiMjliN2Q0MTQyNDgxMegGgZI=: 00:22:22.528 11:45:53 -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:22.528 11:45:53 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 4 00:22:22.528 11:45:53 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:22.528 11:45:53 -- host/auth.sh@70 -- # digest=sha256 00:22:22.528 11:45:53 -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:22:22.528 11:45:53 -- host/auth.sh@70 -- # keyid=4 00:22:22.528 11:45:53 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:22.528 11:45:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:22.528 11:45:53 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.528 11:45:53 -- common/autotest_common.sh@10 -- # set +x 00:22:22.528 11:45:53 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.528 11:45:53 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:22.528 11:45:53 -- nvmf/common.sh@717 -- # local ip 00:22:22.528 11:45:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:22.528 11:45:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:22.528 11:45:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:22.528 11:45:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:22.528 11:45:53 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:22.528 11:45:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:22.528 11:45:53 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:22.528 11:45:53 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:22.528 11:45:53 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:22.528 11:45:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:22.528 11:45:53 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.528 11:45:53 -- common/autotest_common.sh@10 -- # set +x 00:22:22.787 nvme0n1 00:22:22.787 11:45:53 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.787 11:45:53 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:22.787 11:45:53 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.787 11:45:53 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:22.787 11:45:53 -- common/autotest_common.sh@10 -- # set +x 00:22:22.787 11:45:53 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.787 11:45:53 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.787 11:45:53 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:22.787 11:45:53 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.787 11:45:53 -- common/autotest_common.sh@10 -- # set +x 00:22:22.787 11:45:53 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.787 11:45:53 -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:22:22.787 11:45:53 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:22.787 11:45:53 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:22:22.787 11:45:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:22.787 11:45:53 -- host/auth.sh@44 -- # digest=sha256 00:22:22.787 11:45:53 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:22.787 11:45:53 -- host/auth.sh@44 -- # keyid=0 
00:22:22.787 11:45:53 -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc2NDAyYTQ1M2YzZmRkNGY3MWU3YTM5NTNiYmRiYmWu2X4P: 00:22:22.787 11:45:53 -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: 00:22:22.787 11:45:53 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:22.787 11:45:53 -- host/auth.sh@49 -- # echo ffdhe4096 00:22:22.787 11:45:53 -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc2NDAyYTQ1M2YzZmRkNGY3MWU3YTM5NTNiYmRiYmWu2X4P: 00:22:22.787 11:45:53 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: ]] 00:22:22.787 11:45:53 -- host/auth.sh@51 -- # echo DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: 00:22:22.787 11:45:53 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 0 00:22:22.787 11:45:53 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:22.787 11:45:53 -- host/auth.sh@70 -- # digest=sha256 00:22:22.787 11:45:53 -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:22:22.787 11:45:53 -- host/auth.sh@70 -- # keyid=0 00:22:22.787 11:45:53 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:22.787 11:45:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:22.787 11:45:53 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.787 11:45:53 -- common/autotest_common.sh@10 -- # set +x 00:22:22.787 11:45:53 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.787 11:45:53 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:22.787 11:45:53 -- nvmf/common.sh@717 -- # local ip 00:22:22.787 11:45:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:22.787 11:45:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:22.787 11:45:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:22.787 11:45:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:22.787 11:45:53 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:22.787 11:45:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:22.787 11:45:53 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:22.787 11:45:53 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:22.787 11:45:53 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:22.787 11:45:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.787 11:45:53 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.787 11:45:53 -- common/autotest_common.sh@10 -- # set +x 00:22:23.046 nvme0n1 00:22:23.046 11:45:53 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.046 11:45:53 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:23.046 11:45:53 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:23.046 11:45:53 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.046 11:45:53 -- common/autotest_common.sh@10 -- # set +x 00:22:23.046 11:45:53 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.046 11:45:53 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.046 11:45:53 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:23.046 11:45:53 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.046 11:45:53 -- common/autotest_common.sh@10 -- # set +x 00:22:23.046 11:45:53 -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.046 11:45:53 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:23.046 11:45:53 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:22:23.046 11:45:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:23.046 11:45:53 -- host/auth.sh@44 -- # digest=sha256 00:22:23.046 11:45:53 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:23.046 11:45:53 -- host/auth.sh@44 -- # keyid=1 00:22:23.046 11:45:53 -- host/auth.sh@45 -- # key=DHHC-1:00:OTI3OWEzMWMxYzJiMGQ4ZGRlMDc0MWY3YTIxNWYwOGY3MzI0OGNkOTJlNmIwMTFjb3wG0Q==: 00:22:23.046 11:45:53 -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: 00:22:23.046 11:45:53 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:23.046 11:45:53 -- host/auth.sh@49 -- # echo ffdhe4096 00:22:23.046 11:45:53 -- host/auth.sh@50 -- # echo DHHC-1:00:OTI3OWEzMWMxYzJiMGQ4ZGRlMDc0MWY3YTIxNWYwOGY3MzI0OGNkOTJlNmIwMTFjb3wG0Q==: 00:22:23.046 11:45:53 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: ]] 00:22:23.046 11:45:53 -- host/auth.sh@51 -- # echo DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: 00:22:23.046 11:45:53 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 1 00:22:23.047 11:45:53 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:23.047 11:45:53 -- host/auth.sh@70 -- # digest=sha256 00:22:23.047 11:45:53 -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:22:23.047 11:45:53 -- host/auth.sh@70 -- # keyid=1 00:22:23.047 11:45:53 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:23.047 11:45:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:23.047 11:45:53 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.047 11:45:53 -- common/autotest_common.sh@10 -- # set +x 00:22:23.047 11:45:53 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.047 11:45:53 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:23.047 11:45:53 -- nvmf/common.sh@717 -- # local ip 00:22:23.047 11:45:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:23.047 11:45:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:23.047 11:45:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:23.047 11:45:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:23.047 11:45:53 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:23.047 11:45:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:23.047 11:45:53 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:23.047 11:45:53 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:23.047 11:45:53 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:23.047 11:45:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.047 11:45:53 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.047 11:45:53 -- common/autotest_common.sh@10 -- # set +x 00:22:23.305 nvme0n1 00:22:23.306 11:45:53 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.306 11:45:53 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:23.306 11:45:53 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:23.306 11:45:53 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.306 
11:45:53 -- common/autotest_common.sh@10 -- # set +x 00:22:23.306 11:45:53 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.306 11:45:54 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.306 11:45:54 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:23.306 11:45:54 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.306 11:45:54 -- common/autotest_common.sh@10 -- # set +x 00:22:23.306 11:45:54 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.306 11:45:54 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:23.306 11:45:54 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:22:23.306 11:45:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:23.306 11:45:54 -- host/auth.sh@44 -- # digest=sha256 00:22:23.306 11:45:54 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:23.306 11:45:54 -- host/auth.sh@44 -- # keyid=2 00:22:23.306 11:45:54 -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU1MTUxZDJiYzMxMzQwMWE5NzM0NDY0ZTAzNzQ5ZmLrpa4Q: 00:22:23.306 11:45:54 -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: 00:22:23.306 11:45:54 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:23.306 11:45:54 -- host/auth.sh@49 -- # echo ffdhe4096 00:22:23.306 11:45:54 -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU1MTUxZDJiYzMxMzQwMWE5NzM0NDY0ZTAzNzQ5ZmLrpa4Q: 00:22:23.306 11:45:54 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: ]] 00:22:23.306 11:45:54 -- host/auth.sh@51 -- # echo DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: 00:22:23.306 11:45:54 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 2 00:22:23.306 11:45:54 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:23.306 11:45:54 -- host/auth.sh@70 -- # digest=sha256 00:22:23.306 11:45:54 -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:22:23.306 11:45:54 -- host/auth.sh@70 -- # keyid=2 00:22:23.306 11:45:54 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:23.306 11:45:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:23.306 11:45:54 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.306 11:45:54 -- common/autotest_common.sh@10 -- # set +x 00:22:23.306 11:45:54 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.306 11:45:54 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:23.306 11:45:54 -- nvmf/common.sh@717 -- # local ip 00:22:23.306 11:45:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:23.306 11:45:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:23.306 11:45:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:23.306 11:45:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:23.306 11:45:54 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:23.306 11:45:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:23.306 11:45:54 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:23.306 11:45:54 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:23.306 11:45:54 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:23.306 11:45:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:23.306 11:45:54 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.306 11:45:54 -- common/autotest_common.sh@10 -- # 
set +x 00:22:23.564 nvme0n1 00:22:23.564 11:45:54 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.564 11:45:54 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:23.564 11:45:54 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:23.564 11:45:54 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.564 11:45:54 -- common/autotest_common.sh@10 -- # set +x 00:22:23.564 11:45:54 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.823 11:45:54 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.823 11:45:54 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:23.823 11:45:54 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.823 11:45:54 -- common/autotest_common.sh@10 -- # set +x 00:22:23.823 11:45:54 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.823 11:45:54 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:23.823 11:45:54 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:22:23.823 11:45:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:23.823 11:45:54 -- host/auth.sh@44 -- # digest=sha256 00:22:23.823 11:45:54 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:23.823 11:45:54 -- host/auth.sh@44 -- # keyid=3 00:22:23.823 11:45:54 -- host/auth.sh@45 -- # key=DHHC-1:02:M2UxYWI5ZjhiZDgyMmFkYzA5MGRjNDg3MTBjNDA5NzgyODZmZDZhMzJkNDVjNDY3kKIcSw==: 00:22:23.823 11:45:54 -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: 00:22:23.823 11:45:54 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:23.823 11:45:54 -- host/auth.sh@49 -- # echo ffdhe4096 00:22:23.823 11:45:54 -- host/auth.sh@50 -- # echo DHHC-1:02:M2UxYWI5ZjhiZDgyMmFkYzA5MGRjNDg3MTBjNDA5NzgyODZmZDZhMzJkNDVjNDY3kKIcSw==: 00:22:23.823 11:45:54 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: ]] 00:22:23.823 11:45:54 -- host/auth.sh@51 -- # echo DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: 00:22:23.823 11:45:54 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 3 00:22:23.823 11:45:54 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:23.823 11:45:54 -- host/auth.sh@70 -- # digest=sha256 00:22:23.823 11:45:54 -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:22:23.823 11:45:54 -- host/auth.sh@70 -- # keyid=3 00:22:23.823 11:45:54 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:23.824 11:45:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:23.824 11:45:54 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.824 11:45:54 -- common/autotest_common.sh@10 -- # set +x 00:22:23.824 11:45:54 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.824 11:45:54 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:23.824 11:45:54 -- nvmf/common.sh@717 -- # local ip 00:22:23.824 11:45:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:23.824 11:45:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:23.824 11:45:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:23.824 11:45:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:23.824 11:45:54 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:23.824 11:45:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:23.824 11:45:54 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:23.824 11:45:54 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:23.824 11:45:54 -- nvmf/common.sh@731 -- # echo 
192.168.100.8 00:22:23.824 11:45:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:23.824 11:45:54 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.824 11:45:54 -- common/autotest_common.sh@10 -- # set +x 00:22:24.083 nvme0n1 00:22:24.083 11:45:54 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.083 11:45:54 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:24.083 11:45:54 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:24.083 11:45:54 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.083 11:45:54 -- common/autotest_common.sh@10 -- # set +x 00:22:24.083 11:45:54 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.083 11:45:54 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.083 11:45:54 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:24.083 11:45:54 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.083 11:45:54 -- common/autotest_common.sh@10 -- # set +x 00:22:24.083 11:45:54 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.083 11:45:54 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:24.083 11:45:54 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:22:24.083 11:45:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:24.083 11:45:54 -- host/auth.sh@44 -- # digest=sha256 00:22:24.083 11:45:54 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:24.083 11:45:54 -- host/auth.sh@44 -- # keyid=4 00:22:24.083 11:45:54 -- host/auth.sh@45 -- # key=DHHC-1:03:YTA5MWQ1MmMwYjc1NzRhNTZmM2UyZGI5NmQwY2Q3ZGUyZmVhMjk5OTIzODMzMDhiODRiMjliN2Q0MTQyNDgxMegGgZI=: 00:22:24.083 11:45:54 -- host/auth.sh@46 -- # ckey= 00:22:24.083 11:45:54 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:24.083 11:45:54 -- host/auth.sh@49 -- # echo ffdhe4096 00:22:24.083 11:45:54 -- host/auth.sh@50 -- # echo DHHC-1:03:YTA5MWQ1MmMwYjc1NzRhNTZmM2UyZGI5NmQwY2Q3ZGUyZmVhMjk5OTIzODMzMDhiODRiMjliN2Q0MTQyNDgxMegGgZI=: 00:22:24.083 11:45:54 -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:24.083 11:45:54 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 4 00:22:24.083 11:45:54 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:24.083 11:45:54 -- host/auth.sh@70 -- # digest=sha256 00:22:24.083 11:45:54 -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:22:24.083 11:45:54 -- host/auth.sh@70 -- # keyid=4 00:22:24.083 11:45:54 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:24.083 11:45:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:24.083 11:45:54 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.083 11:45:54 -- common/autotest_common.sh@10 -- # set +x 00:22:24.083 11:45:54 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.083 11:45:54 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:24.083 11:45:54 -- nvmf/common.sh@717 -- # local ip 00:22:24.083 11:45:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:24.083 11:45:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:24.083 11:45:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:24.083 11:45:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:24.083 11:45:54 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:24.083 11:45:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 
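get_main_ns_ip, traced repeatedly here as nvmf/common.sh@717-@731, resolves which address variable applies to the transport under test and prints its value. A reconstruction from those trace lines; the name of the variable that carries "rdma" (TEST_TRANSPORT below) is an assumption, everything else mirrors the trace:

get_main_ns_ip() {
	local ip
	local -A ip_candidates=(
		["rdma"]=NVMF_FIRST_TARGET_IP   # -> 192.168.100.8 on this rig
		["tcp"]=NVMF_INITIATOR_IP
	)

	[[ -z $TEST_TRANSPORT ]] && return 1                     # @723 '[[ -z rdma ]]'
	[[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1   # @723, variable name check
	ip=${ip_candidates[$TEST_TRANSPORT]}                     # @724: still the *name*
	[[ -z ${!ip} ]] && return 1                              # @726: indirect, the address
	echo "${!ip}"                                            # @731
}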
00:22:24.083 11:45:54 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:24.083 11:45:54 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:24.083 11:45:54 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:24.083 11:45:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:24.083 11:45:54 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.083 11:45:54 -- common/autotest_common.sh@10 -- # set +x 00:22:24.343 nvme0n1 00:22:24.343 11:45:54 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.343 11:45:54 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:24.343 11:45:54 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:24.343 11:45:54 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.343 11:45:54 -- common/autotest_common.sh@10 -- # set +x 00:22:24.343 11:45:54 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.343 11:45:54 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.343 11:45:54 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:24.343 11:45:54 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.343 11:45:54 -- common/autotest_common.sh@10 -- # set +x 00:22:24.343 11:45:54 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.343 11:45:54 -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:22:24.343 11:45:54 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:24.343 11:45:54 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:22:24.343 11:45:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:24.343 11:45:55 -- host/auth.sh@44 -- # digest=sha256 00:22:24.343 11:45:55 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:24.343 11:45:55 -- host/auth.sh@44 -- # keyid=0 00:22:24.343 11:45:55 -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc2NDAyYTQ1M2YzZmRkNGY3MWU3YTM5NTNiYmRiYmWu2X4P: 00:22:24.343 11:45:55 -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: 00:22:24.343 11:45:55 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:24.343 11:45:55 -- host/auth.sh@49 -- # echo ffdhe6144 00:22:24.343 11:45:55 -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc2NDAyYTQ1M2YzZmRkNGY3MWU3YTM5NTNiYmRiYmWu2X4P: 00:22:24.343 11:45:55 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: ]] 00:22:24.343 11:45:55 -- host/auth.sh@51 -- # echo DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: 00:22:24.343 11:45:55 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 0 00:22:24.343 11:45:55 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:24.343 11:45:55 -- host/auth.sh@70 -- # digest=sha256 00:22:24.343 11:45:55 -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:22:24.343 11:45:55 -- host/auth.sh@70 -- # keyid=0 00:22:24.343 11:45:55 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:24.343 11:45:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:24.343 11:45:55 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.343 11:45:55 -- common/autotest_common.sh@10 -- # set +x 00:22:24.343 11:45:55 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.343 11:45:55 -- host/auth.sh@74 -- # get_main_ns_ip 
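The repetition in this stretch comes from the sweep at host/auth.sh@114-@115: every DH group is exercised against every key slot, with the target re-keyed before each host connection. Only sha256 and the ffdhe2048 through ffdhe6144 groups are visible in this excerpt, so the array below lists just what this part of the log shows; the full script may sweep further digests and groups.

dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)   # groups seen in this excerpt
for dhgroup in "${dhgroups[@]}"; do                  # @114
	for keyid in "${!keys[@]}"; do                   # @115: slots 0..4
		nvmet_auth_set_key sha256 "$dhgroup" "$keyid"    # @116, target side
		connect_authenticate sha256 "$dhgroup" "$keyid"  # @117, host side
	done
done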
00:22:24.343 11:45:55 -- nvmf/common.sh@717 -- # local ip 00:22:24.343 11:45:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:24.343 11:45:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:24.343 11:45:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:24.343 11:45:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:24.343 11:45:55 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:24.343 11:45:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:24.343 11:45:55 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:24.343 11:45:55 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:24.343 11:45:55 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:24.343 11:45:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:24.343 11:45:55 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.343 11:45:55 -- common/autotest_common.sh@10 -- # set +x 00:22:24.913 nvme0n1 00:22:24.913 11:45:55 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.913 11:45:55 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:24.913 11:45:55 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:24.913 11:45:55 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.913 11:45:55 -- common/autotest_common.sh@10 -- # set +x 00:22:24.913 11:45:55 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.913 11:45:55 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.913 11:45:55 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:24.913 11:45:55 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.913 11:45:55 -- common/autotest_common.sh@10 -- # set +x 00:22:24.913 11:45:55 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.913 11:45:55 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:24.913 11:45:55 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:22:24.913 11:45:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:24.913 11:45:55 -- host/auth.sh@44 -- # digest=sha256 00:22:24.913 11:45:55 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:24.913 11:45:55 -- host/auth.sh@44 -- # keyid=1 00:22:24.913 11:45:55 -- host/auth.sh@45 -- # key=DHHC-1:00:OTI3OWEzMWMxYzJiMGQ4ZGRlMDc0MWY3YTIxNWYwOGY3MzI0OGNkOTJlNmIwMTFjb3wG0Q==: 00:22:24.913 11:45:55 -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: 00:22:24.913 11:45:55 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:24.913 11:45:55 -- host/auth.sh@49 -- # echo ffdhe6144 00:22:24.913 11:45:55 -- host/auth.sh@50 -- # echo DHHC-1:00:OTI3OWEzMWMxYzJiMGQ4ZGRlMDc0MWY3YTIxNWYwOGY3MzI0OGNkOTJlNmIwMTFjb3wG0Q==: 00:22:24.913 11:45:55 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: ]] 00:22:24.913 11:45:55 -- host/auth.sh@51 -- # echo DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: 00:22:24.913 11:45:55 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 1 00:22:24.913 11:45:55 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:24.913 11:45:55 -- host/auth.sh@70 -- # digest=sha256 00:22:24.913 11:45:55 -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:22:24.913 11:45:55 -- host/auth.sh@70 -- # keyid=1 00:22:24.913 11:45:55 -- host/auth.sh@71 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:24.913 11:45:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:24.913 11:45:55 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.913 11:45:55 -- common/autotest_common.sh@10 -- # set +x 00:22:24.913 11:45:55 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.913 11:45:55 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:24.913 11:45:55 -- nvmf/common.sh@717 -- # local ip 00:22:24.913 11:45:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:24.913 11:45:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:24.913 11:45:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:24.913 11:45:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:24.913 11:45:55 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:24.913 11:45:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:24.913 11:45:55 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:24.913 11:45:55 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:24.913 11:45:55 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:24.913 11:45:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.913 11:45:55 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.913 11:45:55 -- common/autotest_common.sh@10 -- # set +x 00:22:25.173 nvme0n1 00:22:25.173 11:45:55 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.173 11:45:55 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:25.173 11:45:55 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:25.173 11:45:55 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.173 11:45:55 -- common/autotest_common.sh@10 -- # set +x 00:22:25.173 11:45:55 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.173 11:45:55 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.173 11:45:55 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:25.173 11:45:55 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.173 11:45:55 -- common/autotest_common.sh@10 -- # set +x 00:22:25.173 11:45:55 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.173 11:45:55 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:25.173 11:45:55 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:22:25.173 11:45:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:25.173 11:45:55 -- host/auth.sh@44 -- # digest=sha256 00:22:25.173 11:45:55 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:25.173 11:45:55 -- host/auth.sh@44 -- # keyid=2 00:22:25.173 11:45:55 -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU1MTUxZDJiYzMxMzQwMWE5NzM0NDY0ZTAzNzQ5ZmLrpa4Q: 00:22:25.173 11:45:55 -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: 00:22:25.173 11:45:55 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:25.173 11:45:55 -- host/auth.sh@49 -- # echo ffdhe6144 00:22:25.173 11:45:55 -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU1MTUxZDJiYzMxMzQwMWE5NzM0NDY0ZTAzNzQ5ZmLrpa4Q: 00:22:25.173 11:45:55 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: ]] 00:22:25.173 11:45:55 -- host/auth.sh@51 -- # echo DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: 00:22:25.173 11:45:55 -- host/auth.sh@117 -- # 
connect_authenticate sha256 ffdhe6144 2 00:22:25.173 11:45:55 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:25.173 11:45:55 -- host/auth.sh@70 -- # digest=sha256 00:22:25.173 11:45:55 -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:22:25.173 11:45:55 -- host/auth.sh@70 -- # keyid=2 00:22:25.173 11:45:55 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:25.173 11:45:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:25.173 11:45:55 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.173 11:45:55 -- common/autotest_common.sh@10 -- # set +x 00:22:25.173 11:45:55 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.173 11:45:55 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:25.173 11:45:55 -- nvmf/common.sh@717 -- # local ip 00:22:25.173 11:45:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:25.173 11:45:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:25.173 11:45:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:25.173 11:45:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:25.173 11:45:55 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:25.173 11:45:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:25.173 11:45:55 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:25.173 11:45:55 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:25.173 11:45:55 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:25.173 11:45:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:25.173 11:45:55 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.173 11:45:55 -- common/autotest_common.sh@10 -- # set +x 00:22:25.743 nvme0n1 00:22:25.743 11:45:56 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.743 11:45:56 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:25.743 11:45:56 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:25.743 11:45:56 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.743 11:45:56 -- common/autotest_common.sh@10 -- # set +x 00:22:25.743 11:45:56 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.743 11:45:56 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.743 11:45:56 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:25.743 11:45:56 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.743 11:45:56 -- common/autotest_common.sh@10 -- # set +x 00:22:25.743 11:45:56 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.743 11:45:56 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:25.743 11:45:56 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:22:25.743 11:45:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:25.743 11:45:56 -- host/auth.sh@44 -- # digest=sha256 00:22:25.743 11:45:56 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:25.743 11:45:56 -- host/auth.sh@44 -- # keyid=3 00:22:25.743 11:45:56 -- host/auth.sh@45 -- # key=DHHC-1:02:M2UxYWI5ZjhiZDgyMmFkYzA5MGRjNDg3MTBjNDA5NzgyODZmZDZhMzJkNDVjNDY3kKIcSw==: 00:22:25.743 11:45:56 -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: 00:22:25.743 11:45:56 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:25.743 11:45:56 -- host/auth.sh@49 -- # echo ffdhe6144 00:22:25.743 11:45:56 -- 
host/auth.sh@50 -- # echo DHHC-1:02:M2UxYWI5ZjhiZDgyMmFkYzA5MGRjNDg3MTBjNDA5NzgyODZmZDZhMzJkNDVjNDY3kKIcSw==: 00:22:25.743 11:45:56 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: ]] 00:22:25.743 11:45:56 -- host/auth.sh@51 -- # echo DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: 00:22:25.743 11:45:56 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 3 00:22:25.743 11:45:56 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:25.743 11:45:56 -- host/auth.sh@70 -- # digest=sha256 00:22:25.743 11:45:56 -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:22:25.743 11:45:56 -- host/auth.sh@70 -- # keyid=3 00:22:25.743 11:45:56 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:25.743 11:45:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:25.743 11:45:56 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.743 11:45:56 -- common/autotest_common.sh@10 -- # set +x 00:22:25.743 11:45:56 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.743 11:45:56 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:25.743 11:45:56 -- nvmf/common.sh@717 -- # local ip 00:22:25.743 11:45:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:25.743 11:45:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:25.743 11:45:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:25.743 11:45:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:25.743 11:45:56 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:25.743 11:45:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:25.743 11:45:56 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:25.743 11:45:56 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:25.743 11:45:56 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:25.743 11:45:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:25.743 11:45:56 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.743 11:45:56 -- common/autotest_common.sh@10 -- # set +x 00:22:26.002 nvme0n1 00:22:26.002 11:45:56 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.002 11:45:56 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:26.002 11:45:56 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:26.002 11:45:56 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.002 11:45:56 -- common/autotest_common.sh@10 -- # set +x 00:22:26.002 11:45:56 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.262 11:45:56 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.262 11:45:56 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:26.262 11:45:56 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.262 11:45:56 -- common/autotest_common.sh@10 -- # set +x 00:22:26.262 11:45:56 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.262 11:45:56 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:26.262 11:45:56 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:22:26.262 11:45:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:26.262 11:45:56 -- host/auth.sh@44 -- # digest=sha256 00:22:26.262 11:45:56 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:26.262 11:45:56 -- host/auth.sh@44 -- # keyid=4 00:22:26.262 
11:45:56 -- host/auth.sh@45 -- # key=DHHC-1:03:YTA5MWQ1MmMwYjc1NzRhNTZmM2UyZGI5NmQwY2Q3ZGUyZmVhMjk5OTIzODMzMDhiODRiMjliN2Q0MTQyNDgxMegGgZI=: 00:22:26.262 11:45:56 -- host/auth.sh@46 -- # ckey= 00:22:26.262 11:45:56 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:26.262 11:45:56 -- host/auth.sh@49 -- # echo ffdhe6144 00:22:26.262 11:45:56 -- host/auth.sh@50 -- # echo DHHC-1:03:YTA5MWQ1MmMwYjc1NzRhNTZmM2UyZGI5NmQwY2Q3ZGUyZmVhMjk5OTIzODMzMDhiODRiMjliN2Q0MTQyNDgxMegGgZI=: 00:22:26.262 11:45:56 -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:26.262 11:45:56 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 4 00:22:26.262 11:45:56 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:26.262 11:45:56 -- host/auth.sh@70 -- # digest=sha256 00:22:26.262 11:45:56 -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:22:26.262 11:45:56 -- host/auth.sh@70 -- # keyid=4 00:22:26.262 11:45:56 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:26.262 11:45:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:26.262 11:45:56 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.262 11:45:56 -- common/autotest_common.sh@10 -- # set +x 00:22:26.262 11:45:56 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.262 11:45:56 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:26.262 11:45:56 -- nvmf/common.sh@717 -- # local ip 00:22:26.262 11:45:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:26.262 11:45:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:26.262 11:45:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:26.262 11:45:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:26.262 11:45:56 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:26.262 11:45:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:26.262 11:45:56 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:26.262 11:45:56 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:26.262 11:45:56 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:26.262 11:45:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:26.262 11:45:56 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.262 11:45:56 -- common/autotest_common.sh@10 -- # set +x 00:22:26.521 nvme0n1 00:22:26.521 11:45:57 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.521 11:45:57 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:26.521 11:45:57 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:26.521 11:45:57 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.521 11:45:57 -- common/autotest_common.sh@10 -- # set +x 00:22:26.521 11:45:57 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.521 11:45:57 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.521 11:45:57 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:26.521 11:45:57 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.521 11:45:57 -- common/autotest_common.sh@10 -- # set +x 00:22:26.521 11:45:57 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.521 11:45:57 -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:22:26.521 11:45:57 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:26.521 11:45:57 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:22:26.521 11:45:57 -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:26.521 11:45:57 -- host/auth.sh@44 -- # digest=sha256 00:22:26.521 11:45:57 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:26.521 11:45:57 -- host/auth.sh@44 -- # keyid=0 00:22:26.521 11:45:57 -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc2NDAyYTQ1M2YzZmRkNGY3MWU3YTM5NTNiYmRiYmWu2X4P: 00:22:26.521 11:45:57 -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: 00:22:26.521 11:45:57 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:26.521 11:45:57 -- host/auth.sh@49 -- # echo ffdhe8192 00:22:26.521 11:45:57 -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc2NDAyYTQ1M2YzZmRkNGY3MWU3YTM5NTNiYmRiYmWu2X4P: 00:22:26.521 11:45:57 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: ]] 00:22:26.521 11:45:57 -- host/auth.sh@51 -- # echo DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: 00:22:26.521 11:45:57 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 0 00:22:26.521 11:45:57 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:26.521 11:45:57 -- host/auth.sh@70 -- # digest=sha256 00:22:26.521 11:45:57 -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:22:26.521 11:45:57 -- host/auth.sh@70 -- # keyid=0 00:22:26.521 11:45:57 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:26.521 11:45:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:26.521 11:45:57 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.521 11:45:57 -- common/autotest_common.sh@10 -- # set +x 00:22:26.521 11:45:57 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.521 11:45:57 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:26.521 11:45:57 -- nvmf/common.sh@717 -- # local ip 00:22:26.521 11:45:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:26.521 11:45:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:26.521 11:45:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:26.521 11:45:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:26.521 11:45:57 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:26.521 11:45:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:26.521 11:45:57 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:26.521 11:45:57 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:26.521 11:45:57 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:26.521 11:45:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:26.521 11:45:57 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.521 11:45:57 -- common/autotest_common.sh@10 -- # set +x 00:22:27.089 nvme0n1 00:22:27.089 11:45:57 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.089 11:45:57 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:27.089 11:45:57 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:27.089 11:45:57 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.089 11:45:57 -- common/autotest_common.sh@10 -- # set +x 00:22:27.089 11:45:57 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.089 11:45:57 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.089 11:45:57 
-- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:27.089 11:45:57 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.089 11:45:57 -- common/autotest_common.sh@10 -- # set +x 00:22:27.350 11:45:57 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.350 11:45:57 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:27.350 11:45:57 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:22:27.350 11:45:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:27.350 11:45:57 -- host/auth.sh@44 -- # digest=sha256 00:22:27.350 11:45:57 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:27.350 11:45:57 -- host/auth.sh@44 -- # keyid=1 00:22:27.350 11:45:57 -- host/auth.sh@45 -- # key=DHHC-1:00:OTI3OWEzMWMxYzJiMGQ4ZGRlMDc0MWY3YTIxNWYwOGY3MzI0OGNkOTJlNmIwMTFjb3wG0Q==: 00:22:27.350 11:45:57 -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: 00:22:27.350 11:45:57 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:27.350 11:45:57 -- host/auth.sh@49 -- # echo ffdhe8192 00:22:27.350 11:45:57 -- host/auth.sh@50 -- # echo DHHC-1:00:OTI3OWEzMWMxYzJiMGQ4ZGRlMDc0MWY3YTIxNWYwOGY3MzI0OGNkOTJlNmIwMTFjb3wG0Q==: 00:22:27.350 11:45:57 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: ]] 00:22:27.350 11:45:57 -- host/auth.sh@51 -- # echo DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: 00:22:27.350 11:45:57 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 1 00:22:27.350 11:45:57 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:27.350 11:45:57 -- host/auth.sh@70 -- # digest=sha256 00:22:27.350 11:45:57 -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:22:27.350 11:45:57 -- host/auth.sh@70 -- # keyid=1 00:22:27.350 11:45:57 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:27.350 11:45:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:27.350 11:45:57 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.350 11:45:57 -- common/autotest_common.sh@10 -- # set +x 00:22:27.350 11:45:57 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.350 11:45:57 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:27.350 11:45:57 -- nvmf/common.sh@717 -- # local ip 00:22:27.350 11:45:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:27.350 11:45:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:27.350 11:45:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:27.350 11:45:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:27.350 11:45:57 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:27.350 11:45:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:27.350 11:45:57 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:27.350 11:45:57 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:27.350 11:45:57 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:27.350 11:45:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.350 11:45:57 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.350 11:45:57 -- common/autotest_common.sh@10 -- # set +x 00:22:27.920 nvme0n1 00:22:27.920 11:45:58 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:22:27.920 11:45:58 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:27.920 11:45:58 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:27.920 11:45:58 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.920 11:45:58 -- common/autotest_common.sh@10 -- # set +x 00:22:27.920 11:45:58 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.920 11:45:58 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.920 11:45:58 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:27.920 11:45:58 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.920 11:45:58 -- common/autotest_common.sh@10 -- # set +x 00:22:27.920 11:45:58 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.920 11:45:58 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:27.920 11:45:58 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:22:27.920 11:45:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:27.920 11:45:58 -- host/auth.sh@44 -- # digest=sha256 00:22:27.920 11:45:58 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:27.920 11:45:58 -- host/auth.sh@44 -- # keyid=2 00:22:27.920 11:45:58 -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU1MTUxZDJiYzMxMzQwMWE5NzM0NDY0ZTAzNzQ5ZmLrpa4Q: 00:22:27.920 11:45:58 -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: 00:22:27.920 11:45:58 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:27.920 11:45:58 -- host/auth.sh@49 -- # echo ffdhe8192 00:22:27.920 11:45:58 -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU1MTUxZDJiYzMxMzQwMWE5NzM0NDY0ZTAzNzQ5ZmLrpa4Q: 00:22:27.920 11:45:58 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: ]] 00:22:27.920 11:45:58 -- host/auth.sh@51 -- # echo DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: 00:22:27.920 11:45:58 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 2 00:22:27.920 11:45:58 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:27.920 11:45:58 -- host/auth.sh@70 -- # digest=sha256 00:22:27.920 11:45:58 -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:22:27.920 11:45:58 -- host/auth.sh@70 -- # keyid=2 00:22:27.920 11:45:58 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:27.920 11:45:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:27.920 11:45:58 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.920 11:45:58 -- common/autotest_common.sh@10 -- # set +x 00:22:27.920 11:45:58 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.920 11:45:58 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:27.920 11:45:58 -- nvmf/common.sh@717 -- # local ip 00:22:27.920 11:45:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:27.920 11:45:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:27.920 11:45:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:27.920 11:45:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:27.920 11:45:58 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:27.920 11:45:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:27.920 11:45:58 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:27.920 11:45:58 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:27.920 11:45:58 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:27.920 11:45:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:27.920 11:45:58 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.920 11:45:58 -- common/autotest_common.sh@10 -- # set +x 00:22:28.488 nvme0n1 00:22:28.488 11:45:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.488 11:45:59 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:28.488 11:45:59 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:28.488 11:45:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.488 11:45:59 -- common/autotest_common.sh@10 -- # set +x 00:22:28.488 11:45:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.488 11:45:59 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.488 11:45:59 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:28.488 11:45:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.488 11:45:59 -- common/autotest_common.sh@10 -- # set +x 00:22:28.488 11:45:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.489 11:45:59 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:28.489 11:45:59 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:22:28.489 11:45:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:28.489 11:45:59 -- host/auth.sh@44 -- # digest=sha256 00:22:28.489 11:45:59 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:28.489 11:45:59 -- host/auth.sh@44 -- # keyid=3 00:22:28.489 11:45:59 -- host/auth.sh@45 -- # key=DHHC-1:02:M2UxYWI5ZjhiZDgyMmFkYzA5MGRjNDg3MTBjNDA5NzgyODZmZDZhMzJkNDVjNDY3kKIcSw==: 00:22:28.489 11:45:59 -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: 00:22:28.489 11:45:59 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:28.489 11:45:59 -- host/auth.sh@49 -- # echo ffdhe8192 00:22:28.489 11:45:59 -- host/auth.sh@50 -- # echo DHHC-1:02:M2UxYWI5ZjhiZDgyMmFkYzA5MGRjNDg3MTBjNDA5NzgyODZmZDZhMzJkNDVjNDY3kKIcSw==: 00:22:28.489 11:45:59 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: ]] 00:22:28.489 11:45:59 -- host/auth.sh@51 -- # echo DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: 00:22:28.489 11:45:59 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 3 00:22:28.489 11:45:59 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:28.489 11:45:59 -- host/auth.sh@70 -- # digest=sha256 00:22:28.489 11:45:59 -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:22:28.489 11:45:59 -- host/auth.sh@70 -- # keyid=3 00:22:28.489 11:45:59 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:28.489 11:45:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:28.489 11:45:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.489 11:45:59 -- common/autotest_common.sh@10 -- # set +x 00:22:28.489 11:45:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.489 11:45:59 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:28.489 11:45:59 -- nvmf/common.sh@717 -- # local ip 00:22:28.489 11:45:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:28.489 11:45:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:28.489 11:45:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:28.489 11:45:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:28.489 11:45:59 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:28.489 11:45:59 -- nvmf/common.sh@723 -- # 
[[ -z NVMF_FIRST_TARGET_IP ]] 00:22:28.489 11:45:59 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:28.489 11:45:59 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:28.489 11:45:59 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:28.489 11:45:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:28.489 11:45:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.489 11:45:59 -- common/autotest_common.sh@10 -- # set +x 00:22:29.057 nvme0n1 00:22:29.057 11:45:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.057 11:45:59 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:29.057 11:45:59 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:29.057 11:45:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.057 11:45:59 -- common/autotest_common.sh@10 -- # set +x 00:22:29.057 11:45:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.057 11:45:59 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.057 11:45:59 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:29.057 11:45:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.057 11:45:59 -- common/autotest_common.sh@10 -- # set +x 00:22:29.057 11:45:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.057 11:45:59 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:29.057 11:45:59 -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:22:29.057 11:45:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:29.057 11:45:59 -- host/auth.sh@44 -- # digest=sha256 00:22:29.057 11:45:59 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:29.057 11:45:59 -- host/auth.sh@44 -- # keyid=4 00:22:29.057 11:45:59 -- host/auth.sh@45 -- # key=DHHC-1:03:YTA5MWQ1MmMwYjc1NzRhNTZmM2UyZGI5NmQwY2Q3ZGUyZmVhMjk5OTIzODMzMDhiODRiMjliN2Q0MTQyNDgxMegGgZI=: 00:22:29.057 11:45:59 -- host/auth.sh@46 -- # ckey= 00:22:29.057 11:45:59 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:29.057 11:45:59 -- host/auth.sh@49 -- # echo ffdhe8192 00:22:29.057 11:45:59 -- host/auth.sh@50 -- # echo DHHC-1:03:YTA5MWQ1MmMwYjc1NzRhNTZmM2UyZGI5NmQwY2Q3ZGUyZmVhMjk5OTIzODMzMDhiODRiMjliN2Q0MTQyNDgxMegGgZI=: 00:22:29.057 11:45:59 -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:29.057 11:45:59 -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 4 00:22:29.057 11:45:59 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:29.057 11:45:59 -- host/auth.sh@70 -- # digest=sha256 00:22:29.057 11:45:59 -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:22:29.057 11:45:59 -- host/auth.sh@70 -- # keyid=4 00:22:29.057 11:45:59 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:29.057 11:45:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:29.057 11:45:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.057 11:45:59 -- common/autotest_common.sh@10 -- # set +x 00:22:29.057 11:45:59 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.057 11:45:59 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:29.057 11:45:59 -- nvmf/common.sh@717 -- # local ip 00:22:29.057 11:45:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:29.057 11:45:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:29.057 11:45:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:22:29.057 11:45:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:29.057 11:45:59 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:29.057 11:45:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:29.057 11:45:59 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:29.057 11:45:59 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:29.057 11:45:59 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:29.057 11:45:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:29.057 11:45:59 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.057 11:45:59 -- common/autotest_common.sh@10 -- # set +x 00:22:29.624 nvme0n1 00:22:29.624 11:46:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.624 11:46:00 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:29.624 11:46:00 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.624 11:46:00 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:29.624 11:46:00 -- common/autotest_common.sh@10 -- # set +x 00:22:29.624 11:46:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.883 11:46:00 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.883 11:46:00 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:29.883 11:46:00 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.883 11:46:00 -- common/autotest_common.sh@10 -- # set +x 00:22:29.883 11:46:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.883 11:46:00 -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:22:29.883 11:46:00 -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:22:29.883 11:46:00 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:29.883 11:46:00 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:22:29.883 11:46:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:29.883 11:46:00 -- host/auth.sh@44 -- # digest=sha384 00:22:29.883 11:46:00 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:29.883 11:46:00 -- host/auth.sh@44 -- # keyid=0 00:22:29.883 11:46:00 -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc2NDAyYTQ1M2YzZmRkNGY3MWU3YTM5NTNiYmRiYmWu2X4P: 00:22:29.883 11:46:00 -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: 00:22:29.883 11:46:00 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:29.883 11:46:00 -- host/auth.sh@49 -- # echo ffdhe2048 00:22:29.883 11:46:00 -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc2NDAyYTQ1M2YzZmRkNGY3MWU3YTM5NTNiYmRiYmWu2X4P: 00:22:29.883 11:46:00 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: ]] 00:22:29.883 11:46:00 -- host/auth.sh@51 -- # echo DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: 00:22:29.883 11:46:00 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 0 00:22:29.883 11:46:00 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:29.883 11:46:00 -- host/auth.sh@70 -- # digest=sha384 00:22:29.883 11:46:00 -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:22:29.883 11:46:00 -- host/auth.sh@70 -- # keyid=0 00:22:29.883 11:46:00 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:29.883 11:46:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:22:29.883 11:46:00 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.883 11:46:00 -- common/autotest_common.sh@10 -- # set +x 00:22:29.883 11:46:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.883 11:46:00 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:29.883 11:46:00 -- nvmf/common.sh@717 -- # local ip 00:22:29.883 11:46:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:29.883 11:46:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:29.883 11:46:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:29.883 11:46:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:29.883 11:46:00 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:29.883 11:46:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:29.883 11:46:00 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:29.883 11:46:00 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:29.883 11:46:00 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:29.883 11:46:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:29.883 11:46:00 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.883 11:46:00 -- common/autotest_common.sh@10 -- # set +x 00:22:29.883 nvme0n1 00:22:29.883 11:46:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.883 11:46:00 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:29.883 11:46:00 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:29.883 11:46:00 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.883 11:46:00 -- common/autotest_common.sh@10 -- # set +x 00:22:29.883 11:46:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.143 11:46:00 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.143 11:46:00 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:30.143 11:46:00 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.143 11:46:00 -- common/autotest_common.sh@10 -- # set +x 00:22:30.143 11:46:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.143 11:46:00 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:30.143 11:46:00 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:22:30.143 11:46:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:30.143 11:46:00 -- host/auth.sh@44 -- # digest=sha384 00:22:30.143 11:46:00 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:30.143 11:46:00 -- host/auth.sh@44 -- # keyid=1 00:22:30.143 11:46:00 -- host/auth.sh@45 -- # key=DHHC-1:00:OTI3OWEzMWMxYzJiMGQ4ZGRlMDc0MWY3YTIxNWYwOGY3MzI0OGNkOTJlNmIwMTFjb3wG0Q==: 00:22:30.143 11:46:00 -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: 00:22:30.143 11:46:00 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:30.143 11:46:00 -- host/auth.sh@49 -- # echo ffdhe2048 00:22:30.143 11:46:00 -- host/auth.sh@50 -- # echo DHHC-1:00:OTI3OWEzMWMxYzJiMGQ4ZGRlMDc0MWY3YTIxNWYwOGY3MzI0OGNkOTJlNmIwMTFjb3wG0Q==: 00:22:30.143 11:46:00 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: ]] 00:22:30.143 11:46:00 -- host/auth.sh@51 -- # echo DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: 00:22:30.143 11:46:00 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 1 
00:22:30.143 11:46:00 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:30.143 11:46:00 -- host/auth.sh@70 -- # digest=sha384 00:22:30.143 11:46:00 -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:22:30.143 11:46:00 -- host/auth.sh@70 -- # keyid=1 00:22:30.143 11:46:00 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:30.143 11:46:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:30.143 11:46:00 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.143 11:46:00 -- common/autotest_common.sh@10 -- # set +x 00:22:30.143 11:46:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.143 11:46:00 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:30.143 11:46:00 -- nvmf/common.sh@717 -- # local ip 00:22:30.143 11:46:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:30.143 11:46:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:30.143 11:46:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:30.143 11:46:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:30.143 11:46:00 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:30.143 11:46:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:30.143 11:46:00 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:30.143 11:46:00 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:30.143 11:46:00 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:30.143 11:46:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.143 11:46:00 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.143 11:46:00 -- common/autotest_common.sh@10 -- # set +x 00:22:30.143 nvme0n1 00:22:30.143 11:46:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.143 11:46:00 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:30.143 11:46:00 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:30.143 11:46:00 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.143 11:46:00 -- common/autotest_common.sh@10 -- # set +x 00:22:30.143 11:46:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.402 11:46:00 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.402 11:46:00 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:30.402 11:46:00 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.402 11:46:00 -- common/autotest_common.sh@10 -- # set +x 00:22:30.402 11:46:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.402 11:46:00 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:30.402 11:46:00 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:22:30.402 11:46:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:30.402 11:46:00 -- host/auth.sh@44 -- # digest=sha384 00:22:30.402 11:46:00 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:30.402 11:46:00 -- host/auth.sh@44 -- # keyid=2 00:22:30.402 11:46:00 -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU1MTUxZDJiYzMxMzQwMWE5NzM0NDY0ZTAzNzQ5ZmLrpa4Q: 00:22:30.402 11:46:00 -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: 00:22:30.402 11:46:00 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:30.402 11:46:00 -- host/auth.sh@49 -- # echo ffdhe2048 00:22:30.402 11:46:00 -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZTU1MTUxZDJiYzMxMzQwMWE5NzM0NDY0ZTAzNzQ5ZmLrpa4Q: 00:22:30.402 11:46:00 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: ]] 00:22:30.402 11:46:00 -- host/auth.sh@51 -- # echo DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: 00:22:30.402 11:46:00 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 2 00:22:30.402 11:46:00 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:30.402 11:46:00 -- host/auth.sh@70 -- # digest=sha384 00:22:30.402 11:46:00 -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:22:30.402 11:46:00 -- host/auth.sh@70 -- # keyid=2 00:22:30.402 11:46:00 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:30.402 11:46:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:30.402 11:46:00 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.402 11:46:00 -- common/autotest_common.sh@10 -- # set +x 00:22:30.402 11:46:00 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.402 11:46:00 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:30.402 11:46:00 -- nvmf/common.sh@717 -- # local ip 00:22:30.402 11:46:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:30.402 11:46:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:30.402 11:46:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:30.402 11:46:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:30.402 11:46:00 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:30.402 11:46:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:30.402 11:46:00 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:30.402 11:46:00 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:30.402 11:46:00 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:30.403 11:46:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:30.403 11:46:00 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.403 11:46:00 -- common/autotest_common.sh@10 -- # set +x 00:22:30.403 nvme0n1 00:22:30.403 11:46:01 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.403 11:46:01 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:30.403 11:46:01 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.403 11:46:01 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:30.403 11:46:01 -- common/autotest_common.sh@10 -- # set +x 00:22:30.403 11:46:01 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.661 11:46:01 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.661 11:46:01 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:30.661 11:46:01 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.661 11:46:01 -- common/autotest_common.sh@10 -- # set +x 00:22:30.661 11:46:01 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.661 11:46:01 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:30.661 11:46:01 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:22:30.661 11:46:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:30.661 11:46:01 -- host/auth.sh@44 -- # digest=sha384 00:22:30.661 11:46:01 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:30.661 11:46:01 -- host/auth.sh@44 -- # keyid=3 00:22:30.661 11:46:01 -- host/auth.sh@45 -- # 
key=DHHC-1:02:M2UxYWI5ZjhiZDgyMmFkYzA5MGRjNDg3MTBjNDA5NzgyODZmZDZhMzJkNDVjNDY3kKIcSw==: 00:22:30.661 11:46:01 -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: 00:22:30.662 11:46:01 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:30.662 11:46:01 -- host/auth.sh@49 -- # echo ffdhe2048 00:22:30.662 11:46:01 -- host/auth.sh@50 -- # echo DHHC-1:02:M2UxYWI5ZjhiZDgyMmFkYzA5MGRjNDg3MTBjNDA5NzgyODZmZDZhMzJkNDVjNDY3kKIcSw==: 00:22:30.662 11:46:01 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: ]] 00:22:30.662 11:46:01 -- host/auth.sh@51 -- # echo DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: 00:22:30.662 11:46:01 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 3 00:22:30.662 11:46:01 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:30.662 11:46:01 -- host/auth.sh@70 -- # digest=sha384 00:22:30.662 11:46:01 -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:22:30.662 11:46:01 -- host/auth.sh@70 -- # keyid=3 00:22:30.662 11:46:01 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:30.662 11:46:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:30.662 11:46:01 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.662 11:46:01 -- common/autotest_common.sh@10 -- # set +x 00:22:30.662 11:46:01 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.662 11:46:01 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:30.662 11:46:01 -- nvmf/common.sh@717 -- # local ip 00:22:30.662 11:46:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:30.662 11:46:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:30.662 11:46:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:30.662 11:46:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:30.662 11:46:01 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:30.662 11:46:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:30.662 11:46:01 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:30.662 11:46:01 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:30.662 11:46:01 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:30.662 11:46:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:30.662 11:46:01 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.662 11:46:01 -- common/autotest_common.sh@10 -- # set +x 00:22:30.662 nvme0n1 00:22:30.662 11:46:01 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.662 11:46:01 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:30.662 11:46:01 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:30.662 11:46:01 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.662 11:46:01 -- common/autotest_common.sh@10 -- # set +x 00:22:30.662 11:46:01 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.920 11:46:01 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.920 11:46:01 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:30.920 11:46:01 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.920 11:46:01 -- common/autotest_common.sh@10 -- # set +x 00:22:30.920 11:46:01 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.920 11:46:01 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:30.920 
11:46:01 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:22:30.920 11:46:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:30.920 11:46:01 -- host/auth.sh@44 -- # digest=sha384 00:22:30.920 11:46:01 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:30.920 11:46:01 -- host/auth.sh@44 -- # keyid=4 00:22:30.920 11:46:01 -- host/auth.sh@45 -- # key=DHHC-1:03:YTA5MWQ1MmMwYjc1NzRhNTZmM2UyZGI5NmQwY2Q3ZGUyZmVhMjk5OTIzODMzMDhiODRiMjliN2Q0MTQyNDgxMegGgZI=: 00:22:30.920 11:46:01 -- host/auth.sh@46 -- # ckey= 00:22:30.920 11:46:01 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:30.920 11:46:01 -- host/auth.sh@49 -- # echo ffdhe2048 00:22:30.920 11:46:01 -- host/auth.sh@50 -- # echo DHHC-1:03:YTA5MWQ1MmMwYjc1NzRhNTZmM2UyZGI5NmQwY2Q3ZGUyZmVhMjk5OTIzODMzMDhiODRiMjliN2Q0MTQyNDgxMegGgZI=: 00:22:30.920 11:46:01 -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:30.920 11:46:01 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 4 00:22:30.921 11:46:01 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:30.921 11:46:01 -- host/auth.sh@70 -- # digest=sha384 00:22:30.921 11:46:01 -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:22:30.921 11:46:01 -- host/auth.sh@70 -- # keyid=4 00:22:30.921 11:46:01 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:30.921 11:46:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:30.921 11:46:01 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.921 11:46:01 -- common/autotest_common.sh@10 -- # set +x 00:22:30.921 11:46:01 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.921 11:46:01 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:30.921 11:46:01 -- nvmf/common.sh@717 -- # local ip 00:22:30.921 11:46:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:30.921 11:46:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:30.921 11:46:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:30.921 11:46:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:30.921 11:46:01 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:30.921 11:46:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:30.921 11:46:01 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:30.921 11:46:01 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:30.921 11:46:01 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:30.921 11:46:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:30.921 11:46:01 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.921 11:46:01 -- common/autotest_common.sh@10 -- # set +x 00:22:30.921 nvme0n1 00:22:30.921 11:46:01 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.921 11:46:01 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:30.921 11:46:01 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:30.921 11:46:01 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.921 11:46:01 -- common/autotest_common.sh@10 -- # set +x 00:22:30.921 11:46:01 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.179 11:46:01 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.179 11:46:01 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:31.179 11:46:01 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.179 11:46:01 -- common/autotest_common.sh@10 -- # 
set +x 00:22:31.179 11:46:01 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.179 11:46:01 -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:22:31.179 11:46:01 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:31.179 11:46:01 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:22:31.179 11:46:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:31.179 11:46:01 -- host/auth.sh@44 -- # digest=sha384 00:22:31.179 11:46:01 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:31.179 11:46:01 -- host/auth.sh@44 -- # keyid=0 00:22:31.179 11:46:01 -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc2NDAyYTQ1M2YzZmRkNGY3MWU3YTM5NTNiYmRiYmWu2X4P: 00:22:31.179 11:46:01 -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: 00:22:31.179 11:46:01 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:31.179 11:46:01 -- host/auth.sh@49 -- # echo ffdhe3072 00:22:31.179 11:46:01 -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc2NDAyYTQ1M2YzZmRkNGY3MWU3YTM5NTNiYmRiYmWu2X4P: 00:22:31.179 11:46:01 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: ]] 00:22:31.179 11:46:01 -- host/auth.sh@51 -- # echo DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: 00:22:31.179 11:46:01 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 0 00:22:31.179 11:46:01 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:31.179 11:46:01 -- host/auth.sh@70 -- # digest=sha384 00:22:31.179 11:46:01 -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:22:31.179 11:46:01 -- host/auth.sh@70 -- # keyid=0 00:22:31.179 11:46:01 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:31.179 11:46:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:31.179 11:46:01 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.179 11:46:01 -- common/autotest_common.sh@10 -- # set +x 00:22:31.179 11:46:01 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.179 11:46:01 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:31.179 11:46:01 -- nvmf/common.sh@717 -- # local ip 00:22:31.179 11:46:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:31.179 11:46:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:31.179 11:46:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:31.179 11:46:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:31.179 11:46:01 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:31.179 11:46:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:31.179 11:46:01 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:31.179 11:46:01 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:31.179 11:46:01 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:31.179 11:46:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:31.179 11:46:01 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.179 11:46:01 -- common/autotest_common.sh@10 -- # set +x 00:22:31.179 nvme0n1 00:22:31.179 11:46:01 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.179 11:46:01 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:31.179 11:46:01 -- 
host/auth.sh@77 -- # jq -r '.[].name' 00:22:31.179 11:46:01 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.179 11:46:01 -- common/autotest_common.sh@10 -- # set +x 00:22:31.438 11:46:01 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.438 11:46:01 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.438 11:46:01 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:31.438 11:46:01 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.438 11:46:01 -- common/autotest_common.sh@10 -- # set +x 00:22:31.438 11:46:01 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.438 11:46:01 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:31.438 11:46:01 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:22:31.438 11:46:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:31.438 11:46:01 -- host/auth.sh@44 -- # digest=sha384 00:22:31.438 11:46:01 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:31.438 11:46:02 -- host/auth.sh@44 -- # keyid=1 00:22:31.438 11:46:02 -- host/auth.sh@45 -- # key=DHHC-1:00:OTI3OWEzMWMxYzJiMGQ4ZGRlMDc0MWY3YTIxNWYwOGY3MzI0OGNkOTJlNmIwMTFjb3wG0Q==: 00:22:31.438 11:46:02 -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: 00:22:31.438 11:46:02 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:31.438 11:46:02 -- host/auth.sh@49 -- # echo ffdhe3072 00:22:31.438 11:46:02 -- host/auth.sh@50 -- # echo DHHC-1:00:OTI3OWEzMWMxYzJiMGQ4ZGRlMDc0MWY3YTIxNWYwOGY3MzI0OGNkOTJlNmIwMTFjb3wG0Q==: 00:22:31.438 11:46:02 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: ]] 00:22:31.438 11:46:02 -- host/auth.sh@51 -- # echo DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: 00:22:31.438 11:46:02 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 1 00:22:31.438 11:46:02 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:31.438 11:46:02 -- host/auth.sh@70 -- # digest=sha384 00:22:31.438 11:46:02 -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:22:31.438 11:46:02 -- host/auth.sh@70 -- # keyid=1 00:22:31.438 11:46:02 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:31.438 11:46:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:31.438 11:46:02 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.438 11:46:02 -- common/autotest_common.sh@10 -- # set +x 00:22:31.438 11:46:02 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.438 11:46:02 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:31.438 11:46:02 -- nvmf/common.sh@717 -- # local ip 00:22:31.438 11:46:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:31.438 11:46:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:31.438 11:46:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:31.438 11:46:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:31.438 11:46:02 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:31.438 11:46:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:31.438 11:46:02 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:31.438 11:46:02 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:31.438 11:46:02 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:31.438 11:46:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 
-s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:31.438 11:46:02 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.438 11:46:02 -- common/autotest_common.sh@10 -- # set +x 00:22:31.696 nvme0n1 00:22:31.696 11:46:02 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.696 11:46:02 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:31.696 11:46:02 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.696 11:46:02 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:31.696 11:46:02 -- common/autotest_common.sh@10 -- # set +x 00:22:31.696 11:46:02 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.696 11:46:02 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.696 11:46:02 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:31.696 11:46:02 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.696 11:46:02 -- common/autotest_common.sh@10 -- # set +x 00:22:31.696 11:46:02 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.696 11:46:02 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:31.696 11:46:02 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:22:31.696 11:46:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:31.696 11:46:02 -- host/auth.sh@44 -- # digest=sha384 00:22:31.696 11:46:02 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:31.696 11:46:02 -- host/auth.sh@44 -- # keyid=2 00:22:31.696 11:46:02 -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU1MTUxZDJiYzMxMzQwMWE5NzM0NDY0ZTAzNzQ5ZmLrpa4Q: 00:22:31.696 11:46:02 -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: 00:22:31.696 11:46:02 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:31.696 11:46:02 -- host/auth.sh@49 -- # echo ffdhe3072 00:22:31.696 11:46:02 -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU1MTUxZDJiYzMxMzQwMWE5NzM0NDY0ZTAzNzQ5ZmLrpa4Q: 00:22:31.696 11:46:02 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: ]] 00:22:31.696 11:46:02 -- host/auth.sh@51 -- # echo DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: 00:22:31.696 11:46:02 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 2 00:22:31.696 11:46:02 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:31.696 11:46:02 -- host/auth.sh@70 -- # digest=sha384 00:22:31.696 11:46:02 -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:22:31.696 11:46:02 -- host/auth.sh@70 -- # keyid=2 00:22:31.697 11:46:02 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:31.697 11:46:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:31.697 11:46:02 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.697 11:46:02 -- common/autotest_common.sh@10 -- # set +x 00:22:31.697 11:46:02 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.697 11:46:02 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:31.697 11:46:02 -- nvmf/common.sh@717 -- # local ip 00:22:31.697 11:46:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:31.697 11:46:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:31.697 11:46:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:31.697 11:46:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:31.697 11:46:02 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:31.697 11:46:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 
00:22:31.697 11:46:02 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:31.697 11:46:02 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:31.697 11:46:02 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:31.697 11:46:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:31.697 11:46:02 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.697 11:46:02 -- common/autotest_common.sh@10 -- # set +x 00:22:31.996 nvme0n1 00:22:31.996 11:46:02 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.996 11:46:02 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:31.996 11:46:02 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.996 11:46:02 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:31.996 11:46:02 -- common/autotest_common.sh@10 -- # set +x 00:22:31.996 11:46:02 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.996 11:46:02 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.996 11:46:02 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:31.996 11:46:02 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.996 11:46:02 -- common/autotest_common.sh@10 -- # set +x 00:22:31.996 11:46:02 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.996 11:46:02 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:31.996 11:46:02 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:22:31.996 11:46:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:31.996 11:46:02 -- host/auth.sh@44 -- # digest=sha384 00:22:31.996 11:46:02 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:31.996 11:46:02 -- host/auth.sh@44 -- # keyid=3 00:22:31.996 11:46:02 -- host/auth.sh@45 -- # key=DHHC-1:02:M2UxYWI5ZjhiZDgyMmFkYzA5MGRjNDg3MTBjNDA5NzgyODZmZDZhMzJkNDVjNDY3kKIcSw==: 00:22:31.996 11:46:02 -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: 00:22:31.996 11:46:02 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:31.996 11:46:02 -- host/auth.sh@49 -- # echo ffdhe3072 00:22:31.996 11:46:02 -- host/auth.sh@50 -- # echo DHHC-1:02:M2UxYWI5ZjhiZDgyMmFkYzA5MGRjNDg3MTBjNDA5NzgyODZmZDZhMzJkNDVjNDY3kKIcSw==: 00:22:31.996 11:46:02 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: ]] 00:22:31.996 11:46:02 -- host/auth.sh@51 -- # echo DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: 00:22:31.996 11:46:02 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 3 00:22:31.996 11:46:02 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:31.996 11:46:02 -- host/auth.sh@70 -- # digest=sha384 00:22:31.996 11:46:02 -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:22:31.996 11:46:02 -- host/auth.sh@70 -- # keyid=3 00:22:31.996 11:46:02 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:31.996 11:46:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:31.996 11:46:02 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.996 11:46:02 -- common/autotest_common.sh@10 -- # set +x 00:22:31.996 11:46:02 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.997 11:46:02 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:31.997 11:46:02 -- nvmf/common.sh@717 -- # local ip 00:22:31.997 11:46:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:31.997 
11:46:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:31.997 11:46:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:31.997 11:46:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:31.997 11:46:02 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:31.997 11:46:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:31.997 11:46:02 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:31.997 11:46:02 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:31.997 11:46:02 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:31.997 11:46:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:31.997 11:46:02 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.997 11:46:02 -- common/autotest_common.sh@10 -- # set +x 00:22:32.286 nvme0n1 00:22:32.286 11:46:02 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.286 11:46:02 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:32.286 11:46:02 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:32.286 11:46:02 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.286 11:46:02 -- common/autotest_common.sh@10 -- # set +x 00:22:32.286 11:46:02 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.286 11:46:02 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.286 11:46:02 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:32.286 11:46:02 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.286 11:46:02 -- common/autotest_common.sh@10 -- # set +x 00:22:32.286 11:46:02 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.286 11:46:02 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:32.286 11:46:02 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:22:32.286 11:46:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:32.286 11:46:02 -- host/auth.sh@44 -- # digest=sha384 00:22:32.286 11:46:02 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:32.286 11:46:02 -- host/auth.sh@44 -- # keyid=4 00:22:32.286 11:46:02 -- host/auth.sh@45 -- # key=DHHC-1:03:YTA5MWQ1MmMwYjc1NzRhNTZmM2UyZGI5NmQwY2Q3ZGUyZmVhMjk5OTIzODMzMDhiODRiMjliN2Q0MTQyNDgxMegGgZI=: 00:22:32.286 11:46:02 -- host/auth.sh@46 -- # ckey= 00:22:32.286 11:46:02 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:32.286 11:46:02 -- host/auth.sh@49 -- # echo ffdhe3072 00:22:32.286 11:46:02 -- host/auth.sh@50 -- # echo DHHC-1:03:YTA5MWQ1MmMwYjc1NzRhNTZmM2UyZGI5NmQwY2Q3ZGUyZmVhMjk5OTIzODMzMDhiODRiMjliN2Q0MTQyNDgxMegGgZI=: 00:22:32.286 11:46:02 -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:32.286 11:46:02 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 4 00:22:32.286 11:46:02 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:32.286 11:46:02 -- host/auth.sh@70 -- # digest=sha384 00:22:32.286 11:46:02 -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:22:32.286 11:46:02 -- host/auth.sh@70 -- # keyid=4 00:22:32.286 11:46:02 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:32.286 11:46:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:32.286 11:46:02 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.286 11:46:02 -- common/autotest_common.sh@10 -- # set +x 00:22:32.286 11:46:02 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
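(Annotation: on the target side, nvmet_auth_set_key emits four values per iteration: the digest string 'hmac(sha384)', the DH group, the host key, and, when present, the controller key (auth.sh@48-51). A sketch of where these plausibly land, assuming a Linux kernel nvmet target and its configfs host attributes; the paths and the hostnqn directory name are assumptions, not read from the trace:

    # Hypothetical target-side equivalent of the echoes at auth.sh@48-51
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed hostnqn
    echo 'hmac(sha384)' > "$host/dhchap_hash"
    echo 'ffdhe3072'    > "$host/dhchap_dhgroup"
    echo 'DHHC-1:00:OTI3...' > "$host/dhchap_key"       # keys[1] from the trace, truncated here
    echo 'DHHC-1:02:OWE5...' > "$host/dhchap_ctrl_key"  # ckeys[1] from the trace, truncated here

The DHHC-1:<nn>: prefix on each secret encodes the transformation applied to it (00 for a plain secret; 01/02/03 for SHA-256/384/512), which is why secrets of different strengths appear with different prefixes throughout this run.)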
00:22:32.286 11:46:02 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:32.286 11:46:02 -- nvmf/common.sh@717 -- # local ip 00:22:32.286 11:46:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:32.286 11:46:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:32.286 11:46:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:32.286 11:46:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:32.286 11:46:02 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:32.286 11:46:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:32.286 11:46:02 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:32.286 11:46:02 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:32.286 11:46:02 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:32.286 11:46:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:32.286 11:46:02 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.286 11:46:02 -- common/autotest_common.sh@10 -- # set +x 00:22:32.546 nvme0n1 00:22:32.546 11:46:03 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.546 11:46:03 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:32.546 11:46:03 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.546 11:46:03 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:32.546 11:46:03 -- common/autotest_common.sh@10 -- # set +x 00:22:32.546 11:46:03 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.546 11:46:03 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.546 11:46:03 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:32.546 11:46:03 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.546 11:46:03 -- common/autotest_common.sh@10 -- # set +x 00:22:32.546 11:46:03 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.546 11:46:03 -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:22:32.546 11:46:03 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:32.546 11:46:03 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:22:32.546 11:46:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:32.546 11:46:03 -- host/auth.sh@44 -- # digest=sha384 00:22:32.546 11:46:03 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:32.546 11:46:03 -- host/auth.sh@44 -- # keyid=0 00:22:32.546 11:46:03 -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc2NDAyYTQ1M2YzZmRkNGY3MWU3YTM5NTNiYmRiYmWu2X4P: 00:22:32.546 11:46:03 -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: 00:22:32.546 11:46:03 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:32.546 11:46:03 -- host/auth.sh@49 -- # echo ffdhe4096 00:22:32.546 11:46:03 -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc2NDAyYTQ1M2YzZmRkNGY3MWU3YTM5NTNiYmRiYmWu2X4P: 00:22:32.546 11:46:03 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: ]] 00:22:32.546 11:46:03 -- host/auth.sh@51 -- # echo DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: 00:22:32.546 11:46:03 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 0 00:22:32.546 11:46:03 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:32.546 11:46:03 -- host/auth.sh@70 -- # digest=sha384 00:22:32.546 11:46:03 -- host/auth.sh@70 -- # 
dhgroup=ffdhe4096 00:22:32.546 11:46:03 -- host/auth.sh@70 -- # keyid=0 00:22:32.546 11:46:03 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:32.546 11:46:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:32.546 11:46:03 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.546 11:46:03 -- common/autotest_common.sh@10 -- # set +x 00:22:32.546 11:46:03 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.546 11:46:03 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:32.546 11:46:03 -- nvmf/common.sh@717 -- # local ip 00:22:32.546 11:46:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:32.546 11:46:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:32.546 11:46:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:32.546 11:46:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:32.546 11:46:03 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:32.546 11:46:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:32.546 11:46:03 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:32.546 11:46:03 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:32.546 11:46:03 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:32.546 11:46:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:32.546 11:46:03 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.546 11:46:03 -- common/autotest_common.sh@10 -- # set +x 00:22:32.804 nvme0n1 00:22:32.804 11:46:03 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.804 11:46:03 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:32.804 11:46:03 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:32.804 11:46:03 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.804 11:46:03 -- common/autotest_common.sh@10 -- # set +x 00:22:32.804 11:46:03 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.804 11:46:03 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.804 11:46:03 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:32.804 11:46:03 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.804 11:46:03 -- common/autotest_common.sh@10 -- # set +x 00:22:32.804 11:46:03 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.804 11:46:03 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:32.804 11:46:03 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:22:32.804 11:46:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:32.804 11:46:03 -- host/auth.sh@44 -- # digest=sha384 00:22:32.804 11:46:03 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:32.804 11:46:03 -- host/auth.sh@44 -- # keyid=1 00:22:32.804 11:46:03 -- host/auth.sh@45 -- # key=DHHC-1:00:OTI3OWEzMWMxYzJiMGQ4ZGRlMDc0MWY3YTIxNWYwOGY3MzI0OGNkOTJlNmIwMTFjb3wG0Q==: 00:22:32.804 11:46:03 -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: 00:22:32.804 11:46:03 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:32.804 11:46:03 -- host/auth.sh@49 -- # echo ffdhe4096 00:22:32.804 11:46:03 -- host/auth.sh@50 -- # echo DHHC-1:00:OTI3OWEzMWMxYzJiMGQ4ZGRlMDc0MWY3YTIxNWYwOGY3MzI0OGNkOTJlNmIwMTFjb3wG0Q==: 00:22:32.804 11:46:03 -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: ]] 00:22:32.804 11:46:03 -- host/auth.sh@51 -- # echo DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: 00:22:32.804 11:46:03 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 1 00:22:32.804 11:46:03 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:32.804 11:46:03 -- host/auth.sh@70 -- # digest=sha384 00:22:32.804 11:46:03 -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:22:32.804 11:46:03 -- host/auth.sh@70 -- # keyid=1 00:22:32.804 11:46:03 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:32.805 11:46:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:32.805 11:46:03 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.805 11:46:03 -- common/autotest_common.sh@10 -- # set +x 00:22:32.805 11:46:03 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.805 11:46:03 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:32.805 11:46:03 -- nvmf/common.sh@717 -- # local ip 00:22:32.805 11:46:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:32.805 11:46:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:32.805 11:46:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:32.805 11:46:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:32.805 11:46:03 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:32.805 11:46:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:32.805 11:46:03 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:32.805 11:46:03 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:32.805 11:46:03 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:32.805 11:46:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:32.805 11:46:03 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.805 11:46:03 -- common/autotest_common.sh@10 -- # set +x 00:22:33.063 nvme0n1 00:22:33.063 11:46:03 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.063 11:46:03 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.063 11:46:03 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:33.063 11:46:03 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.063 11:46:03 -- common/autotest_common.sh@10 -- # set +x 00:22:33.063 11:46:03 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.063 11:46:03 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.063 11:46:03 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.063 11:46:03 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.063 11:46:03 -- common/autotest_common.sh@10 -- # set +x 00:22:33.063 11:46:03 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.063 11:46:03 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:33.063 11:46:03 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:22:33.063 11:46:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:33.063 11:46:03 -- host/auth.sh@44 -- # digest=sha384 00:22:33.063 11:46:03 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:33.063 11:46:03 -- host/auth.sh@44 -- # keyid=2 00:22:33.063 11:46:03 -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU1MTUxZDJiYzMxMzQwMWE5NzM0NDY0ZTAzNzQ5ZmLrpa4Q: 00:22:33.063 11:46:03 
-- host/auth.sh@46 -- # ckey=DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: 00:22:33.063 11:46:03 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:33.063 11:46:03 -- host/auth.sh@49 -- # echo ffdhe4096 00:22:33.063 11:46:03 -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU1MTUxZDJiYzMxMzQwMWE5NzM0NDY0ZTAzNzQ5ZmLrpa4Q: 00:22:33.063 11:46:03 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: ]] 00:22:33.063 11:46:03 -- host/auth.sh@51 -- # echo DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: 00:22:33.063 11:46:03 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 2 00:22:33.063 11:46:03 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:33.063 11:46:03 -- host/auth.sh@70 -- # digest=sha384 00:22:33.063 11:46:03 -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:22:33.063 11:46:03 -- host/auth.sh@70 -- # keyid=2 00:22:33.063 11:46:03 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:33.063 11:46:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:33.063 11:46:03 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.063 11:46:03 -- common/autotest_common.sh@10 -- # set +x 00:22:33.063 11:46:03 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.063 11:46:03 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:33.063 11:46:03 -- nvmf/common.sh@717 -- # local ip 00:22:33.063 11:46:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:33.063 11:46:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:33.063 11:46:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.063 11:46:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.063 11:46:03 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:33.063 11:46:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:33.063 11:46:03 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:33.063 11:46:03 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:33.063 11:46:03 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:33.063 11:46:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:33.063 11:46:03 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.063 11:46:03 -- common/autotest_common.sh@10 -- # set +x 00:22:33.322 nvme0n1 00:22:33.322 11:46:04 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.322 11:46:04 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.322 11:46:04 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.322 11:46:04 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:33.322 11:46:04 -- common/autotest_common.sh@10 -- # set +x 00:22:33.322 11:46:04 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.322 11:46:04 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.322 11:46:04 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.322 11:46:04 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.322 11:46:04 -- common/autotest_common.sh@10 -- # set +x 00:22:33.580 11:46:04 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.580 11:46:04 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:33.580 11:46:04 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:22:33.580 11:46:04 -- host/auth.sh@42 -- # local digest 
dhgroup keyid key ckey 00:22:33.580 11:46:04 -- host/auth.sh@44 -- # digest=sha384 00:22:33.580 11:46:04 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:33.580 11:46:04 -- host/auth.sh@44 -- # keyid=3 00:22:33.580 11:46:04 -- host/auth.sh@45 -- # key=DHHC-1:02:M2UxYWI5ZjhiZDgyMmFkYzA5MGRjNDg3MTBjNDA5NzgyODZmZDZhMzJkNDVjNDY3kKIcSw==: 00:22:33.580 11:46:04 -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: 00:22:33.580 11:46:04 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:33.580 11:46:04 -- host/auth.sh@49 -- # echo ffdhe4096 00:22:33.581 11:46:04 -- host/auth.sh@50 -- # echo DHHC-1:02:M2UxYWI5ZjhiZDgyMmFkYzA5MGRjNDg3MTBjNDA5NzgyODZmZDZhMzJkNDVjNDY3kKIcSw==: 00:22:33.581 11:46:04 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: ]] 00:22:33.581 11:46:04 -- host/auth.sh@51 -- # echo DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: 00:22:33.581 11:46:04 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 3 00:22:33.581 11:46:04 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:33.581 11:46:04 -- host/auth.sh@70 -- # digest=sha384 00:22:33.581 11:46:04 -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:22:33.581 11:46:04 -- host/auth.sh@70 -- # keyid=3 00:22:33.581 11:46:04 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:33.581 11:46:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:33.581 11:46:04 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.581 11:46:04 -- common/autotest_common.sh@10 -- # set +x 00:22:33.581 11:46:04 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.581 11:46:04 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:33.581 11:46:04 -- nvmf/common.sh@717 -- # local ip 00:22:33.581 11:46:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:33.581 11:46:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:33.581 11:46:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.581 11:46:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.581 11:46:04 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:33.581 11:46:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:33.581 11:46:04 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:33.581 11:46:04 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:33.581 11:46:04 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:33.581 11:46:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:33.581 11:46:04 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.581 11:46:04 -- common/autotest_common.sh@10 -- # set +x 00:22:33.839 nvme0n1 00:22:33.839 11:46:04 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.839 11:46:04 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.839 11:46:04 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:33.839 11:46:04 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.839 11:46:04 -- common/autotest_common.sh@10 -- # set +x 00:22:33.839 11:46:04 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.839 11:46:04 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.839 11:46:04 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.839 11:46:04 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.839 11:46:04 -- common/autotest_common.sh@10 -- # set +x 00:22:33.839 11:46:04 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.839 11:46:04 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:33.839 11:46:04 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:22:33.839 11:46:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:33.840 11:46:04 -- host/auth.sh@44 -- # digest=sha384 00:22:33.840 11:46:04 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:33.840 11:46:04 -- host/auth.sh@44 -- # keyid=4 00:22:33.840 11:46:04 -- host/auth.sh@45 -- # key=DHHC-1:03:YTA5MWQ1MmMwYjc1NzRhNTZmM2UyZGI5NmQwY2Q3ZGUyZmVhMjk5OTIzODMzMDhiODRiMjliN2Q0MTQyNDgxMegGgZI=: 00:22:33.840 11:46:04 -- host/auth.sh@46 -- # ckey= 00:22:33.840 11:46:04 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:33.840 11:46:04 -- host/auth.sh@49 -- # echo ffdhe4096 00:22:33.840 11:46:04 -- host/auth.sh@50 -- # echo DHHC-1:03:YTA5MWQ1MmMwYjc1NzRhNTZmM2UyZGI5NmQwY2Q3ZGUyZmVhMjk5OTIzODMzMDhiODRiMjliN2Q0MTQyNDgxMegGgZI=: 00:22:33.840 11:46:04 -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:33.840 11:46:04 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 4 00:22:33.840 11:46:04 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:33.840 11:46:04 -- host/auth.sh@70 -- # digest=sha384 00:22:33.840 11:46:04 -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:22:33.840 11:46:04 -- host/auth.sh@70 -- # keyid=4 00:22:33.840 11:46:04 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:33.840 11:46:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:33.840 11:46:04 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.840 11:46:04 -- common/autotest_common.sh@10 -- # set +x 00:22:33.840 11:46:04 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.840 11:46:04 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:33.840 11:46:04 -- nvmf/common.sh@717 -- # local ip 00:22:33.840 11:46:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:33.840 11:46:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:33.840 11:46:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.840 11:46:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.840 11:46:04 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:33.840 11:46:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:33.840 11:46:04 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:33.840 11:46:04 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:33.840 11:46:04 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:33.840 11:46:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:33.840 11:46:04 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.840 11:46:04 -- common/autotest_common.sh@10 -- # set +x 00:22:34.099 nvme0n1 00:22:34.099 11:46:04 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.099 11:46:04 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.099 11:46:04 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.099 11:46:04 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:34.099 11:46:04 -- common/autotest_common.sh@10 -- # set +x 00:22:34.099 11:46:04 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.099 
11:46:04 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.099 11:46:04 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.099 11:46:04 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.099 11:46:04 -- common/autotest_common.sh@10 -- # set +x 00:22:34.099 11:46:04 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.099 11:46:04 -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:22:34.099 11:46:04 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:34.099 11:46:04 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:22:34.099 11:46:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:34.099 11:46:04 -- host/auth.sh@44 -- # digest=sha384 00:22:34.099 11:46:04 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:34.099 11:46:04 -- host/auth.sh@44 -- # keyid=0 00:22:34.099 11:46:04 -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc2NDAyYTQ1M2YzZmRkNGY3MWU3YTM5NTNiYmRiYmWu2X4P: 00:22:34.099 11:46:04 -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: 00:22:34.099 11:46:04 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:34.099 11:46:04 -- host/auth.sh@49 -- # echo ffdhe6144 00:22:34.099 11:46:04 -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc2NDAyYTQ1M2YzZmRkNGY3MWU3YTM5NTNiYmRiYmWu2X4P: 00:22:34.099 11:46:04 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: ]] 00:22:34.099 11:46:04 -- host/auth.sh@51 -- # echo DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: 00:22:34.099 11:46:04 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 0 00:22:34.099 11:46:04 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:34.099 11:46:04 -- host/auth.sh@70 -- # digest=sha384 00:22:34.099 11:46:04 -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:22:34.099 11:46:04 -- host/auth.sh@70 -- # keyid=0 00:22:34.099 11:46:04 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:34.099 11:46:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:34.099 11:46:04 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.099 11:46:04 -- common/autotest_common.sh@10 -- # set +x 00:22:34.099 11:46:04 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.099 11:46:04 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:34.099 11:46:04 -- nvmf/common.sh@717 -- # local ip 00:22:34.099 11:46:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:34.099 11:46:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:34.099 11:46:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:34.099 11:46:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:34.099 11:46:04 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:34.099 11:46:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:34.099 11:46:04 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:34.099 11:46:04 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:34.099 11:46:04 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:34.099 11:46:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:34.099 11:46:04 -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:34.099 11:46:04 -- common/autotest_common.sh@10 -- # set +x 00:22:34.358 nvme0n1 00:22:34.358 11:46:05 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.617 11:46:05 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.617 11:46:05 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.617 11:46:05 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:34.617 11:46:05 -- common/autotest_common.sh@10 -- # set +x 00:22:34.617 11:46:05 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.617 11:46:05 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.617 11:46:05 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.617 11:46:05 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.617 11:46:05 -- common/autotest_common.sh@10 -- # set +x 00:22:34.617 11:46:05 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.617 11:46:05 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:34.617 11:46:05 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:22:34.617 11:46:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:34.617 11:46:05 -- host/auth.sh@44 -- # digest=sha384 00:22:34.617 11:46:05 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:34.617 11:46:05 -- host/auth.sh@44 -- # keyid=1 00:22:34.617 11:46:05 -- host/auth.sh@45 -- # key=DHHC-1:00:OTI3OWEzMWMxYzJiMGQ4ZGRlMDc0MWY3YTIxNWYwOGY3MzI0OGNkOTJlNmIwMTFjb3wG0Q==: 00:22:34.617 11:46:05 -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: 00:22:34.617 11:46:05 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:34.617 11:46:05 -- host/auth.sh@49 -- # echo ffdhe6144 00:22:34.617 11:46:05 -- host/auth.sh@50 -- # echo DHHC-1:00:OTI3OWEzMWMxYzJiMGQ4ZGRlMDc0MWY3YTIxNWYwOGY3MzI0OGNkOTJlNmIwMTFjb3wG0Q==: 00:22:34.617 11:46:05 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: ]] 00:22:34.617 11:46:05 -- host/auth.sh@51 -- # echo DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: 00:22:34.617 11:46:05 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 1 00:22:34.617 11:46:05 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:34.617 11:46:05 -- host/auth.sh@70 -- # digest=sha384 00:22:34.617 11:46:05 -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:22:34.617 11:46:05 -- host/auth.sh@70 -- # keyid=1 00:22:34.617 11:46:05 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:34.617 11:46:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:34.617 11:46:05 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.617 11:46:05 -- common/autotest_common.sh@10 -- # set +x 00:22:34.617 11:46:05 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.617 11:46:05 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:34.617 11:46:05 -- nvmf/common.sh@717 -- # local ip 00:22:34.617 11:46:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:34.617 11:46:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:34.617 11:46:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:34.617 11:46:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:34.617 11:46:05 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:34.617 11:46:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:34.617 11:46:05 -- nvmf/common.sh@724 -- # 
ip=NVMF_FIRST_TARGET_IP 00:22:34.617 11:46:05 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:34.617 11:46:05 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:34.617 11:46:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:34.617 11:46:05 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.617 11:46:05 -- common/autotest_common.sh@10 -- # set +x 00:22:34.876 nvme0n1 00:22:34.876 11:46:05 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.876 11:46:05 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.876 11:46:05 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:34.876 11:46:05 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.876 11:46:05 -- common/autotest_common.sh@10 -- # set +x 00:22:34.876 11:46:05 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.876 11:46:05 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.876 11:46:05 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.876 11:46:05 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.876 11:46:05 -- common/autotest_common.sh@10 -- # set +x 00:22:34.876 11:46:05 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.876 11:46:05 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:34.876 11:46:05 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:22:34.876 11:46:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:34.876 11:46:05 -- host/auth.sh@44 -- # digest=sha384 00:22:34.876 11:46:05 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:34.876 11:46:05 -- host/auth.sh@44 -- # keyid=2 00:22:34.876 11:46:05 -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU1MTUxZDJiYzMxMzQwMWE5NzM0NDY0ZTAzNzQ5ZmLrpa4Q: 00:22:34.876 11:46:05 -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: 00:22:34.876 11:46:05 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:34.876 11:46:05 -- host/auth.sh@49 -- # echo ffdhe6144 00:22:34.876 11:46:05 -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU1MTUxZDJiYzMxMzQwMWE5NzM0NDY0ZTAzNzQ5ZmLrpa4Q: 00:22:34.876 11:46:05 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: ]] 00:22:34.876 11:46:05 -- host/auth.sh@51 -- # echo DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: 00:22:34.876 11:46:05 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 2 00:22:34.876 11:46:05 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:34.876 11:46:05 -- host/auth.sh@70 -- # digest=sha384 00:22:34.876 11:46:05 -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:22:34.876 11:46:05 -- host/auth.sh@70 -- # keyid=2 00:22:34.876 11:46:05 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:34.876 11:46:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:34.876 11:46:05 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.876 11:46:05 -- common/autotest_common.sh@10 -- # set +x 00:22:34.876 11:46:05 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.876 11:46:05 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:34.876 11:46:05 -- nvmf/common.sh@717 -- # local ip 00:22:34.876 11:46:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:34.876 11:46:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:34.876 11:46:05 -- 
nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:34.876 11:46:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:34.876 11:46:05 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:34.876 11:46:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:34.876 11:46:05 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:34.876 11:46:05 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:34.876 11:46:05 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:34.877 11:46:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:34.877 11:46:05 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.877 11:46:05 -- common/autotest_common.sh@10 -- # set +x 00:22:35.444 nvme0n1 00:22:35.444 11:46:06 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.444 11:46:06 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:35.444 11:46:06 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.444 11:46:06 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:35.444 11:46:06 -- common/autotest_common.sh@10 -- # set +x 00:22:35.444 11:46:06 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.444 11:46:06 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.444 11:46:06 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:35.444 11:46:06 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.444 11:46:06 -- common/autotest_common.sh@10 -- # set +x 00:22:35.444 11:46:06 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.444 11:46:06 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:35.444 11:46:06 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:22:35.444 11:46:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:35.444 11:46:06 -- host/auth.sh@44 -- # digest=sha384 00:22:35.444 11:46:06 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:35.444 11:46:06 -- host/auth.sh@44 -- # keyid=3 00:22:35.444 11:46:06 -- host/auth.sh@45 -- # key=DHHC-1:02:M2UxYWI5ZjhiZDgyMmFkYzA5MGRjNDg3MTBjNDA5NzgyODZmZDZhMzJkNDVjNDY3kKIcSw==: 00:22:35.444 11:46:06 -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: 00:22:35.444 11:46:06 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:35.444 11:46:06 -- host/auth.sh@49 -- # echo ffdhe6144 00:22:35.444 11:46:06 -- host/auth.sh@50 -- # echo DHHC-1:02:M2UxYWI5ZjhiZDgyMmFkYzA5MGRjNDg3MTBjNDA5NzgyODZmZDZhMzJkNDVjNDY3kKIcSw==: 00:22:35.444 11:46:06 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: ]] 00:22:35.444 11:46:06 -- host/auth.sh@51 -- # echo DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: 00:22:35.444 11:46:06 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 3 00:22:35.444 11:46:06 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:35.444 11:46:06 -- host/auth.sh@70 -- # digest=sha384 00:22:35.444 11:46:06 -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:22:35.444 11:46:06 -- host/auth.sh@70 -- # keyid=3 00:22:35.444 11:46:06 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:35.444 11:46:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:35.444 11:46:06 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.444 11:46:06 -- 
common/autotest_common.sh@10 -- # set +x 00:22:35.444 11:46:06 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.444 11:46:06 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:35.444 11:46:06 -- nvmf/common.sh@717 -- # local ip 00:22:35.444 11:46:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:35.444 11:46:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:35.444 11:46:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:35.444 11:46:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:35.444 11:46:06 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:35.444 11:46:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:35.444 11:46:06 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:35.444 11:46:06 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:35.444 11:46:06 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:35.444 11:46:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:35.444 11:46:06 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.444 11:46:06 -- common/autotest_common.sh@10 -- # set +x 00:22:35.703 nvme0n1 00:22:35.703 11:46:06 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.703 11:46:06 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:35.703 11:46:06 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.703 11:46:06 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:35.703 11:46:06 -- common/autotest_common.sh@10 -- # set +x 00:22:35.703 11:46:06 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.962 11:46:06 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.962 11:46:06 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:35.962 11:46:06 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.962 11:46:06 -- common/autotest_common.sh@10 -- # set +x 00:22:35.962 11:46:06 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.962 11:46:06 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:35.962 11:46:06 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:22:35.962 11:46:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:35.962 11:46:06 -- host/auth.sh@44 -- # digest=sha384 00:22:35.962 11:46:06 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:35.962 11:46:06 -- host/auth.sh@44 -- # keyid=4 00:22:35.962 11:46:06 -- host/auth.sh@45 -- # key=DHHC-1:03:YTA5MWQ1MmMwYjc1NzRhNTZmM2UyZGI5NmQwY2Q3ZGUyZmVhMjk5OTIzODMzMDhiODRiMjliN2Q0MTQyNDgxMegGgZI=: 00:22:35.962 11:46:06 -- host/auth.sh@46 -- # ckey= 00:22:35.962 11:46:06 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:35.962 11:46:06 -- host/auth.sh@49 -- # echo ffdhe6144 00:22:35.962 11:46:06 -- host/auth.sh@50 -- # echo DHHC-1:03:YTA5MWQ1MmMwYjc1NzRhNTZmM2UyZGI5NmQwY2Q3ZGUyZmVhMjk5OTIzODMzMDhiODRiMjliN2Q0MTQyNDgxMegGgZI=: 00:22:35.962 11:46:06 -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:35.962 11:46:06 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 4 00:22:35.962 11:46:06 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:35.962 11:46:06 -- host/auth.sh@70 -- # digest=sha384 00:22:35.962 11:46:06 -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:22:35.962 11:46:06 -- host/auth.sh@70 -- # keyid=4 00:22:35.962 11:46:06 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:35.962 11:46:06 -- 
host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:35.962 11:46:06 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.962 11:46:06 -- common/autotest_common.sh@10 -- # set +x 00:22:35.962 11:46:06 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.962 11:46:06 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:35.962 11:46:06 -- nvmf/common.sh@717 -- # local ip 00:22:35.962 11:46:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:35.962 11:46:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:35.962 11:46:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:35.962 11:46:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:35.962 11:46:06 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:35.962 11:46:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:35.962 11:46:06 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:35.962 11:46:06 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:35.962 11:46:06 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:35.962 11:46:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:35.962 11:46:06 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.962 11:46:06 -- common/autotest_common.sh@10 -- # set +x 00:22:36.221 nvme0n1 00:22:36.221 11:46:06 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.221 11:46:06 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:36.221 11:46:06 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:36.221 11:46:06 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.221 11:46:06 -- common/autotest_common.sh@10 -- # set +x 00:22:36.221 11:46:06 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.221 11:46:06 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.221 11:46:06 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:36.221 11:46:06 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.221 11:46:06 -- common/autotest_common.sh@10 -- # set +x 00:22:36.221 11:46:06 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.221 11:46:06 -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:22:36.221 11:46:06 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:36.221 11:46:06 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:22:36.221 11:46:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:36.221 11:46:06 -- host/auth.sh@44 -- # digest=sha384 00:22:36.221 11:46:06 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:36.221 11:46:06 -- host/auth.sh@44 -- # keyid=0 00:22:36.221 11:46:06 -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc2NDAyYTQ1M2YzZmRkNGY3MWU3YTM5NTNiYmRiYmWu2X4P: 00:22:36.221 11:46:06 -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: 00:22:36.221 11:46:06 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:36.221 11:46:06 -- host/auth.sh@49 -- # echo ffdhe8192 00:22:36.221 11:46:06 -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc2NDAyYTQ1M2YzZmRkNGY3MWU3YTM5NTNiYmRiYmWu2X4P: 00:22:36.221 11:46:06 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: ]] 00:22:36.221 11:46:06 -- host/auth.sh@51 -- # echo 
DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: 00:22:36.221 11:46:06 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 0 00:22:36.221 11:46:06 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:36.221 11:46:06 -- host/auth.sh@70 -- # digest=sha384 00:22:36.221 11:46:06 -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:22:36.221 11:46:06 -- host/auth.sh@70 -- # keyid=0 00:22:36.221 11:46:06 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:36.221 11:46:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:36.221 11:46:06 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.221 11:46:06 -- common/autotest_common.sh@10 -- # set +x 00:22:36.221 11:46:06 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.221 11:46:06 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:36.221 11:46:06 -- nvmf/common.sh@717 -- # local ip 00:22:36.221 11:46:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:36.221 11:46:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:36.221 11:46:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:36.221 11:46:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:36.221 11:46:06 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:36.221 11:46:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:36.221 11:46:06 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:36.221 11:46:06 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:36.221 11:46:06 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:36.221 11:46:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:36.221 11:46:06 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.221 11:46:06 -- common/autotest_common.sh@10 -- # set +x 00:22:36.789 nvme0n1 00:22:36.789 11:46:07 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.789 11:46:07 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:36.789 11:46:07 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.789 11:46:07 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:36.789 11:46:07 -- common/autotest_common.sh@10 -- # set +x 00:22:37.049 11:46:07 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.049 11:46:07 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.049 11:46:07 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:37.049 11:46:07 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.049 11:46:07 -- common/autotest_common.sh@10 -- # set +x 00:22:37.049 11:46:07 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.049 11:46:07 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:37.049 11:46:07 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:22:37.049 11:46:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:37.049 11:46:07 -- host/auth.sh@44 -- # digest=sha384 00:22:37.049 11:46:07 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:37.049 11:46:07 -- host/auth.sh@44 -- # keyid=1 00:22:37.049 11:46:07 -- host/auth.sh@45 -- # key=DHHC-1:00:OTI3OWEzMWMxYzJiMGQ4ZGRlMDc0MWY3YTIxNWYwOGY3MzI0OGNkOTJlNmIwMTFjb3wG0Q==: 00:22:37.049 11:46:07 -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: 00:22:37.049 11:46:07 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:37.049 11:46:07 -- host/auth.sh@49 -- # echo ffdhe8192 00:22:37.049 11:46:07 -- host/auth.sh@50 -- # echo DHHC-1:00:OTI3OWEzMWMxYzJiMGQ4ZGRlMDc0MWY3YTIxNWYwOGY3MzI0OGNkOTJlNmIwMTFjb3wG0Q==: 00:22:37.049 11:46:07 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: ]] 00:22:37.049 11:46:07 -- host/auth.sh@51 -- # echo DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: 00:22:37.049 11:46:07 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 1 00:22:37.049 11:46:07 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:37.049 11:46:07 -- host/auth.sh@70 -- # digest=sha384 00:22:37.049 11:46:07 -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:22:37.049 11:46:07 -- host/auth.sh@70 -- # keyid=1 00:22:37.049 11:46:07 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:37.049 11:46:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:37.049 11:46:07 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.049 11:46:07 -- common/autotest_common.sh@10 -- # set +x 00:22:37.049 11:46:07 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.049 11:46:07 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:37.049 11:46:07 -- nvmf/common.sh@717 -- # local ip 00:22:37.049 11:46:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:37.049 11:46:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:37.049 11:46:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:37.049 11:46:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:37.049 11:46:07 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:37.049 11:46:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:37.049 11:46:07 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:37.049 11:46:07 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:37.049 11:46:07 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:37.049 11:46:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:37.049 11:46:07 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.049 11:46:07 -- common/autotest_common.sh@10 -- # set +x 00:22:37.617 nvme0n1 00:22:37.617 11:46:08 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.617 11:46:08 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:37.617 11:46:08 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.617 11:46:08 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:37.617 11:46:08 -- common/autotest_common.sh@10 -- # set +x 00:22:37.617 11:46:08 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.617 11:46:08 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.617 11:46:08 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:37.617 11:46:08 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.617 11:46:08 -- common/autotest_common.sh@10 -- # set +x 00:22:37.617 11:46:08 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.617 11:46:08 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:37.617 11:46:08 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 
ffdhe8192 2 00:22:37.617 11:46:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:37.617 11:46:08 -- host/auth.sh@44 -- # digest=sha384 00:22:37.617 11:46:08 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:37.617 11:46:08 -- host/auth.sh@44 -- # keyid=2 00:22:37.617 11:46:08 -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU1MTUxZDJiYzMxMzQwMWE5NzM0NDY0ZTAzNzQ5ZmLrpa4Q: 00:22:37.617 11:46:08 -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: 00:22:37.617 11:46:08 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:37.617 11:46:08 -- host/auth.sh@49 -- # echo ffdhe8192 00:22:37.617 11:46:08 -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU1MTUxZDJiYzMxMzQwMWE5NzM0NDY0ZTAzNzQ5ZmLrpa4Q: 00:22:37.617 11:46:08 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: ]] 00:22:37.617 11:46:08 -- host/auth.sh@51 -- # echo DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: 00:22:37.617 11:46:08 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 2 00:22:37.617 11:46:08 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:37.617 11:46:08 -- host/auth.sh@70 -- # digest=sha384 00:22:37.617 11:46:08 -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:22:37.617 11:46:08 -- host/auth.sh@70 -- # keyid=2 00:22:37.617 11:46:08 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:37.617 11:46:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:37.617 11:46:08 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.617 11:46:08 -- common/autotest_common.sh@10 -- # set +x 00:22:37.617 11:46:08 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.617 11:46:08 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:37.617 11:46:08 -- nvmf/common.sh@717 -- # local ip 00:22:37.617 11:46:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:37.617 11:46:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:37.617 11:46:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:37.617 11:46:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:37.617 11:46:08 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:37.617 11:46:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:37.617 11:46:08 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:37.617 11:46:08 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:37.617 11:46:08 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:37.617 11:46:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:37.617 11:46:08 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.617 11:46:08 -- common/autotest_common.sh@10 -- # set +x 00:22:38.185 nvme0n1 00:22:38.185 11:46:08 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.185 11:46:08 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:38.185 11:46:08 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:38.185 11:46:08 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.185 11:46:08 -- common/autotest_common.sh@10 -- # set +x 00:22:38.185 11:46:08 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.185 11:46:08 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.185 11:46:08 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:38.185 11:46:08 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.185 11:46:08 -- common/autotest_common.sh@10 -- # set +x 00:22:38.185 11:46:08 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.185 11:46:08 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:38.185 11:46:08 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:22:38.185 11:46:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:38.185 11:46:08 -- host/auth.sh@44 -- # digest=sha384 00:22:38.185 11:46:08 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:38.185 11:46:08 -- host/auth.sh@44 -- # keyid=3 00:22:38.185 11:46:08 -- host/auth.sh@45 -- # key=DHHC-1:02:M2UxYWI5ZjhiZDgyMmFkYzA5MGRjNDg3MTBjNDA5NzgyODZmZDZhMzJkNDVjNDY3kKIcSw==: 00:22:38.185 11:46:08 -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: 00:22:38.185 11:46:08 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:38.185 11:46:08 -- host/auth.sh@49 -- # echo ffdhe8192 00:22:38.185 11:46:08 -- host/auth.sh@50 -- # echo DHHC-1:02:M2UxYWI5ZjhiZDgyMmFkYzA5MGRjNDg3MTBjNDA5NzgyODZmZDZhMzJkNDVjNDY3kKIcSw==: 00:22:38.185 11:46:08 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: ]] 00:22:38.185 11:46:08 -- host/auth.sh@51 -- # echo DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: 00:22:38.185 11:46:08 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 3 00:22:38.185 11:46:08 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:38.185 11:46:08 -- host/auth.sh@70 -- # digest=sha384 00:22:38.185 11:46:08 -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:22:38.185 11:46:08 -- host/auth.sh@70 -- # keyid=3 00:22:38.185 11:46:08 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:38.185 11:46:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:38.185 11:46:08 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.185 11:46:08 -- common/autotest_common.sh@10 -- # set +x 00:22:38.185 11:46:08 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.185 11:46:08 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:38.185 11:46:08 -- nvmf/common.sh@717 -- # local ip 00:22:38.185 11:46:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:38.185 11:46:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:38.185 11:46:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:38.185 11:46:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:38.185 11:46:08 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:38.185 11:46:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:38.185 11:46:08 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:38.185 11:46:08 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:38.185 11:46:08 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:38.185 11:46:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:38.185 11:46:08 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.185 11:46:08 -- common/autotest_common.sh@10 -- # set +x 00:22:38.753 nvme0n1 00:22:38.753 11:46:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.753 11:46:09 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:38.753 11:46:09 -- common/autotest_common.sh@559 -- # xtrace_disable 
00:22:38.753 11:46:09 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:38.753 11:46:09 -- common/autotest_common.sh@10 -- # set +x 00:22:38.753 11:46:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.753 11:46:09 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.753 11:46:09 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:38.753 11:46:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.753 11:46:09 -- common/autotest_common.sh@10 -- # set +x 00:22:39.012 11:46:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.012 11:46:09 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:39.012 11:46:09 -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:22:39.012 11:46:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:39.012 11:46:09 -- host/auth.sh@44 -- # digest=sha384 00:22:39.012 11:46:09 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:39.012 11:46:09 -- host/auth.sh@44 -- # keyid=4 00:22:39.012 11:46:09 -- host/auth.sh@45 -- # key=DHHC-1:03:YTA5MWQ1MmMwYjc1NzRhNTZmM2UyZGI5NmQwY2Q3ZGUyZmVhMjk5OTIzODMzMDhiODRiMjliN2Q0MTQyNDgxMegGgZI=: 00:22:39.012 11:46:09 -- host/auth.sh@46 -- # ckey= 00:22:39.012 11:46:09 -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:39.012 11:46:09 -- host/auth.sh@49 -- # echo ffdhe8192 00:22:39.012 11:46:09 -- host/auth.sh@50 -- # echo DHHC-1:03:YTA5MWQ1MmMwYjc1NzRhNTZmM2UyZGI5NmQwY2Q3ZGUyZmVhMjk5OTIzODMzMDhiODRiMjliN2Q0MTQyNDgxMegGgZI=: 00:22:39.012 11:46:09 -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:39.012 11:46:09 -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 4 00:22:39.012 11:46:09 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:39.012 11:46:09 -- host/auth.sh@70 -- # digest=sha384 00:22:39.012 11:46:09 -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:22:39.012 11:46:09 -- host/auth.sh@70 -- # keyid=4 00:22:39.012 11:46:09 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:39.012 11:46:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:39.012 11:46:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.012 11:46:09 -- common/autotest_common.sh@10 -- # set +x 00:22:39.012 11:46:09 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.012 11:46:09 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:39.012 11:46:09 -- nvmf/common.sh@717 -- # local ip 00:22:39.012 11:46:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:39.012 11:46:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:39.012 11:46:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:39.012 11:46:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:39.012 11:46:09 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:39.012 11:46:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:39.012 11:46:09 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:39.012 11:46:09 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:39.012 11:46:09 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:39.012 11:46:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:39.012 11:46:09 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.012 11:46:09 -- common/autotest_common.sh@10 -- # set +x 00:22:39.580 nvme0n1 00:22:39.580 11:46:10 -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:22:39.580 11:46:10 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:39.580 11:46:10 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:39.580 11:46:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.580 11:46:10 -- common/autotest_common.sh@10 -- # set +x 00:22:39.580 11:46:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.580 11:46:10 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.580 11:46:10 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:39.581 11:46:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.581 11:46:10 -- common/autotest_common.sh@10 -- # set +x 00:22:39.581 11:46:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.581 11:46:10 -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:22:39.581 11:46:10 -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:22:39.581 11:46:10 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:39.581 11:46:10 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:22:39.581 11:46:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:39.581 11:46:10 -- host/auth.sh@44 -- # digest=sha512 00:22:39.581 11:46:10 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:39.581 11:46:10 -- host/auth.sh@44 -- # keyid=0 00:22:39.581 11:46:10 -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc2NDAyYTQ1M2YzZmRkNGY3MWU3YTM5NTNiYmRiYmWu2X4P: 00:22:39.581 11:46:10 -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: 00:22:39.581 11:46:10 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:39.581 11:46:10 -- host/auth.sh@49 -- # echo ffdhe2048 00:22:39.581 11:46:10 -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc2NDAyYTQ1M2YzZmRkNGY3MWU3YTM5NTNiYmRiYmWu2X4P: 00:22:39.581 11:46:10 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: ]] 00:22:39.581 11:46:10 -- host/auth.sh@51 -- # echo DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: 00:22:39.581 11:46:10 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 0 00:22:39.581 11:46:10 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:39.581 11:46:10 -- host/auth.sh@70 -- # digest=sha512 00:22:39.581 11:46:10 -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:22:39.581 11:46:10 -- host/auth.sh@70 -- # keyid=0 00:22:39.581 11:46:10 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:39.581 11:46:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:39.581 11:46:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.581 11:46:10 -- common/autotest_common.sh@10 -- # set +x 00:22:39.581 11:46:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.581 11:46:10 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:39.581 11:46:10 -- nvmf/common.sh@717 -- # local ip 00:22:39.581 11:46:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:39.581 11:46:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:39.581 11:46:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:39.581 11:46:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:39.581 11:46:10 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:39.581 11:46:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:39.581 11:46:10 -- nvmf/common.sh@724 -- # 
ip=NVMF_FIRST_TARGET_IP 00:22:39.581 11:46:10 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:39.581 11:46:10 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:39.581 11:46:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:39.581 11:46:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.581 11:46:10 -- common/autotest_common.sh@10 -- # set +x 00:22:39.840 nvme0n1 00:22:39.840 11:46:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.840 11:46:10 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:39.840 11:46:10 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:39.840 11:46:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.840 11:46:10 -- common/autotest_common.sh@10 -- # set +x 00:22:39.840 11:46:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.840 11:46:10 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.840 11:46:10 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:39.840 11:46:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.840 11:46:10 -- common/autotest_common.sh@10 -- # set +x 00:22:39.840 11:46:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.840 11:46:10 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:39.840 11:46:10 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:22:39.840 11:46:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:39.840 11:46:10 -- host/auth.sh@44 -- # digest=sha512 00:22:39.840 11:46:10 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:39.840 11:46:10 -- host/auth.sh@44 -- # keyid=1 00:22:39.840 11:46:10 -- host/auth.sh@45 -- # key=DHHC-1:00:OTI3OWEzMWMxYzJiMGQ4ZGRlMDc0MWY3YTIxNWYwOGY3MzI0OGNkOTJlNmIwMTFjb3wG0Q==: 00:22:39.840 11:46:10 -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: 00:22:39.840 11:46:10 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:39.840 11:46:10 -- host/auth.sh@49 -- # echo ffdhe2048 00:22:39.840 11:46:10 -- host/auth.sh@50 -- # echo DHHC-1:00:OTI3OWEzMWMxYzJiMGQ4ZGRlMDc0MWY3YTIxNWYwOGY3MzI0OGNkOTJlNmIwMTFjb3wG0Q==: 00:22:39.840 11:46:10 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: ]] 00:22:39.840 11:46:10 -- host/auth.sh@51 -- # echo DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: 00:22:39.840 11:46:10 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 1 00:22:39.840 11:46:10 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:39.840 11:46:10 -- host/auth.sh@70 -- # digest=sha512 00:22:39.840 11:46:10 -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:22:39.840 11:46:10 -- host/auth.sh@70 -- # keyid=1 00:22:39.840 11:46:10 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:39.840 11:46:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:39.840 11:46:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.840 11:46:10 -- common/autotest_common.sh@10 -- # set +x 00:22:39.840 11:46:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.840 11:46:10 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:39.840 11:46:10 -- nvmf/common.sh@717 -- # local ip 00:22:39.840 11:46:10 -- nvmf/common.sh@718 -- # 
ip_candidates=() 00:22:39.840 11:46:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:39.840 11:46:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:39.840 11:46:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:39.840 11:46:10 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:39.840 11:46:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:39.840 11:46:10 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:39.840 11:46:10 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:39.840 11:46:10 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:39.840 11:46:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:39.840 11:46:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.840 11:46:10 -- common/autotest_common.sh@10 -- # set +x 00:22:40.100 nvme0n1 00:22:40.100 11:46:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.100 11:46:10 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:40.100 11:46:10 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:40.100 11:46:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.100 11:46:10 -- common/autotest_common.sh@10 -- # set +x 00:22:40.100 11:46:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.100 11:46:10 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.100 11:46:10 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:40.100 11:46:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.100 11:46:10 -- common/autotest_common.sh@10 -- # set +x 00:22:40.100 11:46:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.100 11:46:10 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:40.100 11:46:10 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:22:40.100 11:46:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:40.100 11:46:10 -- host/auth.sh@44 -- # digest=sha512 00:22:40.100 11:46:10 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:40.100 11:46:10 -- host/auth.sh@44 -- # keyid=2 00:22:40.100 11:46:10 -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU1MTUxZDJiYzMxMzQwMWE5NzM0NDY0ZTAzNzQ5ZmLrpa4Q: 00:22:40.100 11:46:10 -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: 00:22:40.100 11:46:10 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:40.100 11:46:10 -- host/auth.sh@49 -- # echo ffdhe2048 00:22:40.100 11:46:10 -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU1MTUxZDJiYzMxMzQwMWE5NzM0NDY0ZTAzNzQ5ZmLrpa4Q: 00:22:40.100 11:46:10 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: ]] 00:22:40.100 11:46:10 -- host/auth.sh@51 -- # echo DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: 00:22:40.100 11:46:10 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 2 00:22:40.100 11:46:10 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:40.100 11:46:10 -- host/auth.sh@70 -- # digest=sha512 00:22:40.100 11:46:10 -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:22:40.100 11:46:10 -- host/auth.sh@70 -- # keyid=2 00:22:40.100 11:46:10 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:40.100 11:46:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:40.100 11:46:10 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.100 11:46:10 -- common/autotest_common.sh@10 -- # set +x 00:22:40.100 11:46:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.100 11:46:10 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:40.100 11:46:10 -- nvmf/common.sh@717 -- # local ip 00:22:40.100 11:46:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:40.100 11:46:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:40.100 11:46:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:40.100 11:46:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:40.100 11:46:10 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:40.100 11:46:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:40.100 11:46:10 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:40.100 11:46:10 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:40.100 11:46:10 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:40.100 11:46:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:40.100 11:46:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.100 11:46:10 -- common/autotest_common.sh@10 -- # set +x 00:22:40.360 nvme0n1 00:22:40.360 11:46:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.360 11:46:10 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:40.360 11:46:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.360 11:46:10 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:40.360 11:46:10 -- common/autotest_common.sh@10 -- # set +x 00:22:40.360 11:46:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.360 11:46:10 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.360 11:46:10 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:40.360 11:46:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.360 11:46:10 -- common/autotest_common.sh@10 -- # set +x 00:22:40.360 11:46:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.360 11:46:10 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:40.360 11:46:10 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:22:40.360 11:46:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:40.360 11:46:10 -- host/auth.sh@44 -- # digest=sha512 00:22:40.360 11:46:10 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:40.360 11:46:10 -- host/auth.sh@44 -- # keyid=3 00:22:40.360 11:46:10 -- host/auth.sh@45 -- # key=DHHC-1:02:M2UxYWI5ZjhiZDgyMmFkYzA5MGRjNDg3MTBjNDA5NzgyODZmZDZhMzJkNDVjNDY3kKIcSw==: 00:22:40.360 11:46:10 -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: 00:22:40.360 11:46:10 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:40.360 11:46:10 -- host/auth.sh@49 -- # echo ffdhe2048 00:22:40.360 11:46:10 -- host/auth.sh@50 -- # echo DHHC-1:02:M2UxYWI5ZjhiZDgyMmFkYzA5MGRjNDg3MTBjNDA5NzgyODZmZDZhMzJkNDVjNDY3kKIcSw==: 00:22:40.360 11:46:10 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: ]] 00:22:40.360 11:46:10 -- host/auth.sh@51 -- # echo DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: 00:22:40.360 11:46:10 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 3 00:22:40.360 11:46:10 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:40.360 11:46:10 -- host/auth.sh@70 -- # 
digest=sha512 00:22:40.360 11:46:10 -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:22:40.360 11:46:10 -- host/auth.sh@70 -- # keyid=3 00:22:40.360 11:46:10 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:40.360 11:46:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:40.360 11:46:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.360 11:46:10 -- common/autotest_common.sh@10 -- # set +x 00:22:40.360 11:46:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.360 11:46:10 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:40.360 11:46:10 -- nvmf/common.sh@717 -- # local ip 00:22:40.360 11:46:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:40.360 11:46:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:40.360 11:46:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:40.360 11:46:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:40.360 11:46:10 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:40.360 11:46:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:40.360 11:46:10 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:40.360 11:46:10 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:40.360 11:46:10 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:40.360 11:46:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:40.360 11:46:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.360 11:46:10 -- common/autotest_common.sh@10 -- # set +x 00:22:40.620 nvme0n1 00:22:40.620 11:46:11 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.620 11:46:11 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:40.620 11:46:11 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:40.620 11:46:11 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.620 11:46:11 -- common/autotest_common.sh@10 -- # set +x 00:22:40.620 11:46:11 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.620 11:46:11 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.620 11:46:11 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:40.620 11:46:11 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.620 11:46:11 -- common/autotest_common.sh@10 -- # set +x 00:22:40.620 11:46:11 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.620 11:46:11 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:40.620 11:46:11 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:22:40.620 11:46:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:40.620 11:46:11 -- host/auth.sh@44 -- # digest=sha512 00:22:40.620 11:46:11 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:40.620 11:46:11 -- host/auth.sh@44 -- # keyid=4 00:22:40.620 11:46:11 -- host/auth.sh@45 -- # key=DHHC-1:03:YTA5MWQ1MmMwYjc1NzRhNTZmM2UyZGI5NmQwY2Q3ZGUyZmVhMjk5OTIzODMzMDhiODRiMjliN2Q0MTQyNDgxMegGgZI=: 00:22:40.620 11:46:11 -- host/auth.sh@46 -- # ckey= 00:22:40.620 11:46:11 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:40.620 11:46:11 -- host/auth.sh@49 -- # echo ffdhe2048 00:22:40.620 11:46:11 -- host/auth.sh@50 -- # echo DHHC-1:03:YTA5MWQ1MmMwYjc1NzRhNTZmM2UyZGI5NmQwY2Q3ZGUyZmVhMjk5OTIzODMzMDhiODRiMjliN2Q0MTQyNDgxMegGgZI=: 00:22:40.620 11:46:11 -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:40.620 
11:46:11 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 4 00:22:40.620 11:46:11 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:40.620 11:46:11 -- host/auth.sh@70 -- # digest=sha512 00:22:40.620 11:46:11 -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:22:40.620 11:46:11 -- host/auth.sh@70 -- # keyid=4 00:22:40.620 11:46:11 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:40.620 11:46:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:40.620 11:46:11 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.620 11:46:11 -- common/autotest_common.sh@10 -- # set +x 00:22:40.620 11:46:11 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.620 11:46:11 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:40.620 11:46:11 -- nvmf/common.sh@717 -- # local ip 00:22:40.620 11:46:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:40.620 11:46:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:40.620 11:46:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:40.620 11:46:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:40.620 11:46:11 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:40.620 11:46:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:40.620 11:46:11 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:40.620 11:46:11 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:40.620 11:46:11 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:40.620 11:46:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:40.620 11:46:11 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.620 11:46:11 -- common/autotest_common.sh@10 -- # set +x 00:22:40.879 nvme0n1 00:22:40.879 11:46:11 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.879 11:46:11 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:40.879 11:46:11 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:40.879 11:46:11 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.879 11:46:11 -- common/autotest_common.sh@10 -- # set +x 00:22:40.879 11:46:11 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.879 11:46:11 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.879 11:46:11 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:40.879 11:46:11 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.879 11:46:11 -- common/autotest_common.sh@10 -- # set +x 00:22:40.879 11:46:11 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.879 11:46:11 -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:22:40.879 11:46:11 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:40.879 11:46:11 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:22:40.879 11:46:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:40.879 11:46:11 -- host/auth.sh@44 -- # digest=sha512 00:22:40.879 11:46:11 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:40.879 11:46:11 -- host/auth.sh@44 -- # keyid=0 00:22:40.879 11:46:11 -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc2NDAyYTQ1M2YzZmRkNGY3MWU3YTM5NTNiYmRiYmWu2X4P: 00:22:40.879 11:46:11 -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: 00:22:40.879 11:46:11 -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:22:40.879 11:46:11 -- host/auth.sh@49 -- # echo ffdhe3072 00:22:40.879 11:46:11 -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc2NDAyYTQ1M2YzZmRkNGY3MWU3YTM5NTNiYmRiYmWu2X4P: 00:22:40.879 11:46:11 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: ]] 00:22:40.879 11:46:11 -- host/auth.sh@51 -- # echo DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: 00:22:40.879 11:46:11 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 0 00:22:40.879 11:46:11 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:40.879 11:46:11 -- host/auth.sh@70 -- # digest=sha512 00:22:40.879 11:46:11 -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:22:40.879 11:46:11 -- host/auth.sh@70 -- # keyid=0 00:22:40.879 11:46:11 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:40.879 11:46:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:40.879 11:46:11 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.879 11:46:11 -- common/autotest_common.sh@10 -- # set +x 00:22:40.879 11:46:11 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.879 11:46:11 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:40.879 11:46:11 -- nvmf/common.sh@717 -- # local ip 00:22:40.879 11:46:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:40.879 11:46:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:40.879 11:46:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:40.879 11:46:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:40.879 11:46:11 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:40.879 11:46:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:40.879 11:46:11 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:40.879 11:46:11 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:40.879 11:46:11 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:40.879 11:46:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:40.879 11:46:11 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.879 11:46:11 -- common/autotest_common.sh@10 -- # set +x 00:22:41.139 nvme0n1 00:22:41.139 11:46:11 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.139 11:46:11 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:41.139 11:46:11 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:41.139 11:46:11 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.139 11:46:11 -- common/autotest_common.sh@10 -- # set +x 00:22:41.139 11:46:11 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.139 11:46:11 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.139 11:46:11 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:41.139 11:46:11 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.139 11:46:11 -- common/autotest_common.sh@10 -- # set +x 00:22:41.139 11:46:11 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.139 11:46:11 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:41.139 11:46:11 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:22:41.139 11:46:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:41.139 11:46:11 -- 
host/auth.sh@44 -- # digest=sha512 00:22:41.139 11:46:11 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:41.139 11:46:11 -- host/auth.sh@44 -- # keyid=1 00:22:41.139 11:46:11 -- host/auth.sh@45 -- # key=DHHC-1:00:OTI3OWEzMWMxYzJiMGQ4ZGRlMDc0MWY3YTIxNWYwOGY3MzI0OGNkOTJlNmIwMTFjb3wG0Q==: 00:22:41.139 11:46:11 -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: 00:22:41.139 11:46:11 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:41.139 11:46:11 -- host/auth.sh@49 -- # echo ffdhe3072 00:22:41.139 11:46:11 -- host/auth.sh@50 -- # echo DHHC-1:00:OTI3OWEzMWMxYzJiMGQ4ZGRlMDc0MWY3YTIxNWYwOGY3MzI0OGNkOTJlNmIwMTFjb3wG0Q==: 00:22:41.139 11:46:11 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: ]] 00:22:41.139 11:46:11 -- host/auth.sh@51 -- # echo DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: 00:22:41.139 11:46:11 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 1 00:22:41.139 11:46:11 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:41.139 11:46:11 -- host/auth.sh@70 -- # digest=sha512 00:22:41.139 11:46:11 -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:22:41.139 11:46:11 -- host/auth.sh@70 -- # keyid=1 00:22:41.139 11:46:11 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:41.139 11:46:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:41.139 11:46:11 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.139 11:46:11 -- common/autotest_common.sh@10 -- # set +x 00:22:41.139 11:46:11 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.139 11:46:11 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:41.139 11:46:11 -- nvmf/common.sh@717 -- # local ip 00:22:41.139 11:46:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:41.139 11:46:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:41.139 11:46:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:41.139 11:46:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:41.139 11:46:11 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:41.139 11:46:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:41.139 11:46:11 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:41.139 11:46:11 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:41.139 11:46:11 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:41.139 11:46:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:41.139 11:46:11 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.139 11:46:11 -- common/autotest_common.sh@10 -- # set +x 00:22:41.399 nvme0n1 00:22:41.399 11:46:11 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.399 11:46:11 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:41.399 11:46:11 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:41.399 11:46:11 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.399 11:46:11 -- common/autotest_common.sh@10 -- # set +x 00:22:41.399 11:46:11 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.399 11:46:12 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.399 11:46:12 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:41.399 11:46:12 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.399 11:46:12 -- common/autotest_common.sh@10 -- # set +x 00:22:41.399 11:46:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.399 11:46:12 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:41.399 11:46:12 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:22:41.399 11:46:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:41.399 11:46:12 -- host/auth.sh@44 -- # digest=sha512 00:22:41.399 11:46:12 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:41.399 11:46:12 -- host/auth.sh@44 -- # keyid=2 00:22:41.399 11:46:12 -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU1MTUxZDJiYzMxMzQwMWE5NzM0NDY0ZTAzNzQ5ZmLrpa4Q: 00:22:41.399 11:46:12 -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: 00:22:41.399 11:46:12 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:41.399 11:46:12 -- host/auth.sh@49 -- # echo ffdhe3072 00:22:41.399 11:46:12 -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU1MTUxZDJiYzMxMzQwMWE5NzM0NDY0ZTAzNzQ5ZmLrpa4Q: 00:22:41.399 11:46:12 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: ]] 00:22:41.399 11:46:12 -- host/auth.sh@51 -- # echo DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: 00:22:41.399 11:46:12 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 2 00:22:41.399 11:46:12 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:41.399 11:46:12 -- host/auth.sh@70 -- # digest=sha512 00:22:41.399 11:46:12 -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:22:41.399 11:46:12 -- host/auth.sh@70 -- # keyid=2 00:22:41.399 11:46:12 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:41.399 11:46:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:41.399 11:46:12 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.399 11:46:12 -- common/autotest_common.sh@10 -- # set +x 00:22:41.399 11:46:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.399 11:46:12 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:41.399 11:46:12 -- nvmf/common.sh@717 -- # local ip 00:22:41.399 11:46:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:41.399 11:46:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:41.399 11:46:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:41.399 11:46:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:41.399 11:46:12 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:41.399 11:46:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:41.399 11:46:12 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:41.399 11:46:12 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:41.399 11:46:12 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:41.399 11:46:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:41.399 11:46:12 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.399 11:46:12 -- common/autotest_common.sh@10 -- # set +x 00:22:41.658 nvme0n1 00:22:41.658 11:46:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.658 11:46:12 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:41.658 11:46:12 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:41.658 11:46:12 -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:41.658 11:46:12 -- common/autotest_common.sh@10 -- # set +x 00:22:41.658 11:46:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.658 11:46:12 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.658 11:46:12 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:41.658 11:46:12 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.658 11:46:12 -- common/autotest_common.sh@10 -- # set +x 00:22:41.658 11:46:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.658 11:46:12 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:41.658 11:46:12 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:22:41.658 11:46:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:41.658 11:46:12 -- host/auth.sh@44 -- # digest=sha512 00:22:41.658 11:46:12 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:41.658 11:46:12 -- host/auth.sh@44 -- # keyid=3 00:22:41.658 11:46:12 -- host/auth.sh@45 -- # key=DHHC-1:02:M2UxYWI5ZjhiZDgyMmFkYzA5MGRjNDg3MTBjNDA5NzgyODZmZDZhMzJkNDVjNDY3kKIcSw==: 00:22:41.658 11:46:12 -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: 00:22:41.658 11:46:12 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:41.658 11:46:12 -- host/auth.sh@49 -- # echo ffdhe3072 00:22:41.658 11:46:12 -- host/auth.sh@50 -- # echo DHHC-1:02:M2UxYWI5ZjhiZDgyMmFkYzA5MGRjNDg3MTBjNDA5NzgyODZmZDZhMzJkNDVjNDY3kKIcSw==: 00:22:41.658 11:46:12 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: ]] 00:22:41.658 11:46:12 -- host/auth.sh@51 -- # echo DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: 00:22:41.658 11:46:12 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 3 00:22:41.658 11:46:12 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:41.658 11:46:12 -- host/auth.sh@70 -- # digest=sha512 00:22:41.658 11:46:12 -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:22:41.658 11:46:12 -- host/auth.sh@70 -- # keyid=3 00:22:41.658 11:46:12 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:41.658 11:46:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:41.658 11:46:12 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.658 11:46:12 -- common/autotest_common.sh@10 -- # set +x 00:22:41.658 11:46:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.658 11:46:12 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:41.658 11:46:12 -- nvmf/common.sh@717 -- # local ip 00:22:41.658 11:46:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:41.658 11:46:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:41.658 11:46:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:41.658 11:46:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:41.658 11:46:12 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:41.658 11:46:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:41.658 11:46:12 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:41.658 11:46:12 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:41.658 11:46:12 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:41.658 11:46:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:41.658 11:46:12 -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:41.658 11:46:12 -- common/autotest_common.sh@10 -- # set +x 00:22:41.917 nvme0n1 00:22:41.917 11:46:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.917 11:46:12 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:41.917 11:46:12 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:41.917 11:46:12 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.917 11:46:12 -- common/autotest_common.sh@10 -- # set +x 00:22:41.917 11:46:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.917 11:46:12 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.917 11:46:12 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:41.917 11:46:12 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.917 11:46:12 -- common/autotest_common.sh@10 -- # set +x 00:22:41.917 11:46:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.917 11:46:12 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:41.917 11:46:12 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:22:41.917 11:46:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:41.917 11:46:12 -- host/auth.sh@44 -- # digest=sha512 00:22:41.917 11:46:12 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:41.917 11:46:12 -- host/auth.sh@44 -- # keyid=4 00:22:41.917 11:46:12 -- host/auth.sh@45 -- # key=DHHC-1:03:YTA5MWQ1MmMwYjc1NzRhNTZmM2UyZGI5NmQwY2Q3ZGUyZmVhMjk5OTIzODMzMDhiODRiMjliN2Q0MTQyNDgxMegGgZI=: 00:22:41.917 11:46:12 -- host/auth.sh@46 -- # ckey= 00:22:41.917 11:46:12 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:41.917 11:46:12 -- host/auth.sh@49 -- # echo ffdhe3072 00:22:41.917 11:46:12 -- host/auth.sh@50 -- # echo DHHC-1:03:YTA5MWQ1MmMwYjc1NzRhNTZmM2UyZGI5NmQwY2Q3ZGUyZmVhMjk5OTIzODMzMDhiODRiMjliN2Q0MTQyNDgxMegGgZI=: 00:22:41.917 11:46:12 -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:41.917 11:46:12 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 4 00:22:41.917 11:46:12 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:41.917 11:46:12 -- host/auth.sh@70 -- # digest=sha512 00:22:41.917 11:46:12 -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:22:41.917 11:46:12 -- host/auth.sh@70 -- # keyid=4 00:22:41.918 11:46:12 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:41.918 11:46:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:41.918 11:46:12 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.918 11:46:12 -- common/autotest_common.sh@10 -- # set +x 00:22:41.918 11:46:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.918 11:46:12 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:41.918 11:46:12 -- nvmf/common.sh@717 -- # local ip 00:22:41.918 11:46:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:41.918 11:46:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:41.918 11:46:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:41.918 11:46:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:41.918 11:46:12 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:41.918 11:46:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:41.918 11:46:12 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:41.918 11:46:12 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:41.918 11:46:12 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:41.918 11:46:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f 
ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:41.918 11:46:12 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.918 11:46:12 -- common/autotest_common.sh@10 -- # set +x 00:22:42.177 nvme0n1 00:22:42.177 11:46:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.177 11:46:12 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:42.177 11:46:12 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.177 11:46:12 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:42.177 11:46:12 -- common/autotest_common.sh@10 -- # set +x 00:22:42.177 11:46:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.177 11:46:12 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.177 11:46:12 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.177 11:46:12 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.177 11:46:12 -- common/autotest_common.sh@10 -- # set +x 00:22:42.177 11:46:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.177 11:46:12 -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:22:42.177 11:46:12 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:42.177 11:46:12 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:22:42.177 11:46:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:42.177 11:46:12 -- host/auth.sh@44 -- # digest=sha512 00:22:42.177 11:46:12 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:42.177 11:46:12 -- host/auth.sh@44 -- # keyid=0 00:22:42.177 11:46:12 -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc2NDAyYTQ1M2YzZmRkNGY3MWU3YTM5NTNiYmRiYmWu2X4P: 00:22:42.177 11:46:12 -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: 00:22:42.177 11:46:12 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:42.177 11:46:12 -- host/auth.sh@49 -- # echo ffdhe4096 00:22:42.177 11:46:12 -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc2NDAyYTQ1M2YzZmRkNGY3MWU3YTM5NTNiYmRiYmWu2X4P: 00:22:42.178 11:46:12 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: ]] 00:22:42.178 11:46:12 -- host/auth.sh@51 -- # echo DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: 00:22:42.178 11:46:12 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 0 00:22:42.178 11:46:12 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:42.178 11:46:12 -- host/auth.sh@70 -- # digest=sha512 00:22:42.178 11:46:12 -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:22:42.178 11:46:12 -- host/auth.sh@70 -- # keyid=0 00:22:42.178 11:46:12 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:42.178 11:46:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:42.178 11:46:12 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.178 11:46:12 -- common/autotest_common.sh@10 -- # set +x 00:22:42.178 11:46:12 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.178 11:46:12 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:42.178 11:46:12 -- nvmf/common.sh@717 -- # local ip 00:22:42.178 11:46:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:42.178 11:46:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:42.178 11:46:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:42.178 11:46:12 -- 
nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:42.178 11:46:12 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:42.178 11:46:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:42.178 11:46:12 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:42.178 11:46:12 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:42.178 11:46:12 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:42.178 11:46:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:42.178 11:46:12 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.178 11:46:12 -- common/autotest_common.sh@10 -- # set +x 00:22:42.436 nvme0n1 00:22:42.436 11:46:13 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.436 11:46:13 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:42.436 11:46:13 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.436 11:46:13 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:42.436 11:46:13 -- common/autotest_common.sh@10 -- # set +x 00:22:42.436 11:46:13 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.436 11:46:13 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.436 11:46:13 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.436 11:46:13 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.436 11:46:13 -- common/autotest_common.sh@10 -- # set +x 00:22:42.695 11:46:13 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.695 11:46:13 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:42.695 11:46:13 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:22:42.695 11:46:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:42.695 11:46:13 -- host/auth.sh@44 -- # digest=sha512 00:22:42.695 11:46:13 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:42.695 11:46:13 -- host/auth.sh@44 -- # keyid=1 00:22:42.695 11:46:13 -- host/auth.sh@45 -- # key=DHHC-1:00:OTI3OWEzMWMxYzJiMGQ4ZGRlMDc0MWY3YTIxNWYwOGY3MzI0OGNkOTJlNmIwMTFjb3wG0Q==: 00:22:42.695 11:46:13 -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: 00:22:42.695 11:46:13 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:42.695 11:46:13 -- host/auth.sh@49 -- # echo ffdhe4096 00:22:42.695 11:46:13 -- host/auth.sh@50 -- # echo DHHC-1:00:OTI3OWEzMWMxYzJiMGQ4ZGRlMDc0MWY3YTIxNWYwOGY3MzI0OGNkOTJlNmIwMTFjb3wG0Q==: 00:22:42.695 11:46:13 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: ]] 00:22:42.695 11:46:13 -- host/auth.sh@51 -- # echo DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: 00:22:42.695 11:46:13 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 1 00:22:42.695 11:46:13 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:42.695 11:46:13 -- host/auth.sh@70 -- # digest=sha512 00:22:42.695 11:46:13 -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:22:42.695 11:46:13 -- host/auth.sh@70 -- # keyid=1 00:22:42.695 11:46:13 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:42.695 11:46:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:42.695 11:46:13 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.695 11:46:13 -- common/autotest_common.sh@10 -- # set 
+x 00:22:42.695 11:46:13 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.695 11:46:13 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:42.695 11:46:13 -- nvmf/common.sh@717 -- # local ip 00:22:42.695 11:46:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:42.695 11:46:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:42.695 11:46:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:42.695 11:46:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:42.695 11:46:13 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:42.695 11:46:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:42.695 11:46:13 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:42.695 11:46:13 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:42.695 11:46:13 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:42.695 11:46:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:42.695 11:46:13 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.695 11:46:13 -- common/autotest_common.sh@10 -- # set +x 00:22:42.954 nvme0n1 00:22:42.954 11:46:13 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.954 11:46:13 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:42.954 11:46:13 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.954 11:46:13 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:42.954 11:46:13 -- common/autotest_common.sh@10 -- # set +x 00:22:42.954 11:46:13 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.954 11:46:13 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.954 11:46:13 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.954 11:46:13 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.954 11:46:13 -- common/autotest_common.sh@10 -- # set +x 00:22:42.954 11:46:13 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.954 11:46:13 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:42.954 11:46:13 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:22:42.954 11:46:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:42.954 11:46:13 -- host/auth.sh@44 -- # digest=sha512 00:22:42.954 11:46:13 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:42.954 11:46:13 -- host/auth.sh@44 -- # keyid=2 00:22:42.954 11:46:13 -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU1MTUxZDJiYzMxMzQwMWE5NzM0NDY0ZTAzNzQ5ZmLrpa4Q: 00:22:42.955 11:46:13 -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: 00:22:42.955 11:46:13 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:42.955 11:46:13 -- host/auth.sh@49 -- # echo ffdhe4096 00:22:42.955 11:46:13 -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU1MTUxZDJiYzMxMzQwMWE5NzM0NDY0ZTAzNzQ5ZmLrpa4Q: 00:22:42.955 11:46:13 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: ]] 00:22:42.955 11:46:13 -- host/auth.sh@51 -- # echo DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: 00:22:42.955 11:46:13 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 2 00:22:42.955 11:46:13 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:42.955 11:46:13 -- host/auth.sh@70 -- # digest=sha512 00:22:42.955 11:46:13 -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:22:42.955 11:46:13 -- host/auth.sh@70 -- # keyid=2 00:22:42.955 11:46:13 -- 
host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:42.955 11:46:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:42.955 11:46:13 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.955 11:46:13 -- common/autotest_common.sh@10 -- # set +x 00:22:42.955 11:46:13 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.955 11:46:13 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:42.955 11:46:13 -- nvmf/common.sh@717 -- # local ip 00:22:42.955 11:46:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:42.955 11:46:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:42.955 11:46:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:42.955 11:46:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:42.955 11:46:13 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:42.955 11:46:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:42.955 11:46:13 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:42.955 11:46:13 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:42.955 11:46:13 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:42.955 11:46:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:42.955 11:46:13 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.955 11:46:13 -- common/autotest_common.sh@10 -- # set +x 00:22:43.214 nvme0n1 00:22:43.214 11:46:13 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.214 11:46:13 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:43.214 11:46:13 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:43.214 11:46:13 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.214 11:46:13 -- common/autotest_common.sh@10 -- # set +x 00:22:43.214 11:46:13 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.214 11:46:13 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.214 11:46:13 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:43.214 11:46:13 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.214 11:46:13 -- common/autotest_common.sh@10 -- # set +x 00:22:43.214 11:46:13 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.214 11:46:13 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:43.214 11:46:13 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:22:43.214 11:46:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:43.214 11:46:13 -- host/auth.sh@44 -- # digest=sha512 00:22:43.214 11:46:13 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:43.214 11:46:13 -- host/auth.sh@44 -- # keyid=3 00:22:43.214 11:46:13 -- host/auth.sh@45 -- # key=DHHC-1:02:M2UxYWI5ZjhiZDgyMmFkYzA5MGRjNDg3MTBjNDA5NzgyODZmZDZhMzJkNDVjNDY3kKIcSw==: 00:22:43.214 11:46:13 -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: 00:22:43.214 11:46:13 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:43.214 11:46:13 -- host/auth.sh@49 -- # echo ffdhe4096 00:22:43.214 11:46:13 -- host/auth.sh@50 -- # echo DHHC-1:02:M2UxYWI5ZjhiZDgyMmFkYzA5MGRjNDg3MTBjNDA5NzgyODZmZDZhMzJkNDVjNDY3kKIcSw==: 00:22:43.214 11:46:13 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: ]] 00:22:43.214 11:46:13 -- host/auth.sh@51 -- # echo 
DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: 00:22:43.214 11:46:13 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 3 00:22:43.214 11:46:13 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:43.214 11:46:13 -- host/auth.sh@70 -- # digest=sha512 00:22:43.214 11:46:13 -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:22:43.214 11:46:13 -- host/auth.sh@70 -- # keyid=3 00:22:43.214 11:46:13 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:43.214 11:46:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:43.214 11:46:13 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.214 11:46:13 -- common/autotest_common.sh@10 -- # set +x 00:22:43.214 11:46:13 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.214 11:46:13 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:43.214 11:46:13 -- nvmf/common.sh@717 -- # local ip 00:22:43.214 11:46:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:43.214 11:46:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:43.214 11:46:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:43.214 11:46:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:43.214 11:46:13 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:43.214 11:46:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:43.214 11:46:13 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:43.214 11:46:13 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:43.214 11:46:13 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:43.214 11:46:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:43.214 11:46:13 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.214 11:46:13 -- common/autotest_common.sh@10 -- # set +x 00:22:43.474 nvme0n1 00:22:43.474 11:46:14 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.474 11:46:14 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:43.474 11:46:14 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:43.474 11:46:14 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.474 11:46:14 -- common/autotest_common.sh@10 -- # set +x 00:22:43.474 11:46:14 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.474 11:46:14 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.474 11:46:14 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:43.474 11:46:14 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.474 11:46:14 -- common/autotest_common.sh@10 -- # set +x 00:22:43.474 11:46:14 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.474 11:46:14 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:43.474 11:46:14 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:22:43.474 11:46:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:43.474 11:46:14 -- host/auth.sh@44 -- # digest=sha512 00:22:43.474 11:46:14 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:43.474 11:46:14 -- host/auth.sh@44 -- # keyid=4 00:22:43.474 11:46:14 -- host/auth.sh@45 -- # key=DHHC-1:03:YTA5MWQ1MmMwYjc1NzRhNTZmM2UyZGI5NmQwY2Q3ZGUyZmVhMjk5OTIzODMzMDhiODRiMjliN2Q0MTQyNDgxMegGgZI=: 00:22:43.474 11:46:14 -- host/auth.sh@46 -- # ckey= 00:22:43.474 11:46:14 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:43.474 11:46:14 
-- host/auth.sh@49 -- # echo ffdhe4096 00:22:43.474 11:46:14 -- host/auth.sh@50 -- # echo DHHC-1:03:YTA5MWQ1MmMwYjc1NzRhNTZmM2UyZGI5NmQwY2Q3ZGUyZmVhMjk5OTIzODMzMDhiODRiMjliN2Q0MTQyNDgxMegGgZI=: 00:22:43.474 11:46:14 -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:43.474 11:46:14 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 4 00:22:43.474 11:46:14 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:43.474 11:46:14 -- host/auth.sh@70 -- # digest=sha512 00:22:43.474 11:46:14 -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:22:43.474 11:46:14 -- host/auth.sh@70 -- # keyid=4 00:22:43.474 11:46:14 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:43.474 11:46:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:43.474 11:46:14 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.474 11:46:14 -- common/autotest_common.sh@10 -- # set +x 00:22:43.474 11:46:14 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.474 11:46:14 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:43.474 11:46:14 -- nvmf/common.sh@717 -- # local ip 00:22:43.474 11:46:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:43.474 11:46:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:43.474 11:46:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:43.474 11:46:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:43.474 11:46:14 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:43.474 11:46:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:43.474 11:46:14 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:43.474 11:46:14 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:43.474 11:46:14 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:43.474 11:46:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:43.474 11:46:14 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.474 11:46:14 -- common/autotest_common.sh@10 -- # set +x 00:22:43.733 nvme0n1 00:22:43.733 11:46:14 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.733 11:46:14 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:43.733 11:46:14 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:43.733 11:46:14 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.733 11:46:14 -- common/autotest_common.sh@10 -- # set +x 00:22:43.733 11:46:14 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.991 11:46:14 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.991 11:46:14 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:43.991 11:46:14 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.991 11:46:14 -- common/autotest_common.sh@10 -- # set +x 00:22:43.991 11:46:14 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.992 11:46:14 -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:22:43.992 11:46:14 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:43.992 11:46:14 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:22:43.992 11:46:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:43.992 11:46:14 -- host/auth.sh@44 -- # digest=sha512 00:22:43.992 11:46:14 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:43.992 11:46:14 -- host/auth.sh@44 -- # keyid=0 00:22:43.992 11:46:14 -- host/auth.sh@45 -- # 
key=DHHC-1:00:Mzc2NDAyYTQ1M2YzZmRkNGY3MWU3YTM5NTNiYmRiYmWu2X4P: 00:22:43.992 11:46:14 -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: 00:22:43.992 11:46:14 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:43.992 11:46:14 -- host/auth.sh@49 -- # echo ffdhe6144 00:22:43.992 11:46:14 -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc2NDAyYTQ1M2YzZmRkNGY3MWU3YTM5NTNiYmRiYmWu2X4P: 00:22:43.992 11:46:14 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: ]] 00:22:43.992 11:46:14 -- host/auth.sh@51 -- # echo DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: 00:22:43.992 11:46:14 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 0 00:22:43.992 11:46:14 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:43.992 11:46:14 -- host/auth.sh@70 -- # digest=sha512 00:22:43.992 11:46:14 -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:22:43.992 11:46:14 -- host/auth.sh@70 -- # keyid=0 00:22:43.992 11:46:14 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:43.992 11:46:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:43.992 11:46:14 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.992 11:46:14 -- common/autotest_common.sh@10 -- # set +x 00:22:43.992 11:46:14 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.992 11:46:14 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:43.992 11:46:14 -- nvmf/common.sh@717 -- # local ip 00:22:43.992 11:46:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:43.992 11:46:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:43.992 11:46:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:43.992 11:46:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:43.992 11:46:14 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:43.992 11:46:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:43.992 11:46:14 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:43.992 11:46:14 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:43.992 11:46:14 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:43.992 11:46:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:43.992 11:46:14 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.992 11:46:14 -- common/autotest_common.sh@10 -- # set +x 00:22:44.251 nvme0n1 00:22:44.251 11:46:14 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.251 11:46:14 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:44.251 11:46:14 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.251 11:46:14 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:44.251 11:46:14 -- common/autotest_common.sh@10 -- # set +x 00:22:44.251 11:46:14 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.252 11:46:14 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.252 11:46:14 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:44.252 11:46:14 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.252 11:46:14 -- common/autotest_common.sh@10 -- # set +x 00:22:44.252 11:46:14 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
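
[The trace above is one pass of the auth matrix: for every dhgroup/keyid pair, host/auth.sh programs the kernel nvmet target via nvmet_auth_set_key, points the SPDK initiator at the same digest and DH group, attaches over RDMA, checks the controller came up as nvme0, and detaches. A condensed sketch of that loop, reconstructed from the traced commands — the standalone framing and the keys/ckeys array contents are assumptions; the RPC invocations are exactly as logged:

    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        # program the kernel target side: digest, DH group, key, optional ctrlr key
        nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
        # mirror the same negotiation parameters on the SPDK host side
        rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
        # only pass a controller key when one is defined for this keyid (host/auth.sh@71)
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
          -a 192.168.100.8 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"
        # the connect only counts if the controller actually appeared
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0   # clean slate for the next key
      done
    done
]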
00:22:44.252 11:46:14 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:44.252 11:46:14 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:22:44.252 11:46:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:44.252 11:46:14 -- host/auth.sh@44 -- # digest=sha512 00:22:44.252 11:46:14 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:44.252 11:46:14 -- host/auth.sh@44 -- # keyid=1 00:22:44.252 11:46:14 -- host/auth.sh@45 -- # key=DHHC-1:00:OTI3OWEzMWMxYzJiMGQ4ZGRlMDc0MWY3YTIxNWYwOGY3MzI0OGNkOTJlNmIwMTFjb3wG0Q==: 00:22:44.252 11:46:14 -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: 00:22:44.252 11:46:14 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:44.252 11:46:14 -- host/auth.sh@49 -- # echo ffdhe6144 00:22:44.252 11:46:14 -- host/auth.sh@50 -- # echo DHHC-1:00:OTI3OWEzMWMxYzJiMGQ4ZGRlMDc0MWY3YTIxNWYwOGY3MzI0OGNkOTJlNmIwMTFjb3wG0Q==: 00:22:44.252 11:46:14 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: ]] 00:22:44.252 11:46:14 -- host/auth.sh@51 -- # echo DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: 00:22:44.252 11:46:14 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 1 00:22:44.252 11:46:14 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:44.252 11:46:14 -- host/auth.sh@70 -- # digest=sha512 00:22:44.252 11:46:14 -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:22:44.252 11:46:14 -- host/auth.sh@70 -- # keyid=1 00:22:44.252 11:46:14 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:44.252 11:46:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:44.252 11:46:14 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.252 11:46:14 -- common/autotest_common.sh@10 -- # set +x 00:22:44.252 11:46:14 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.252 11:46:14 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:44.252 11:46:14 -- nvmf/common.sh@717 -- # local ip 00:22:44.252 11:46:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:44.252 11:46:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:44.252 11:46:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:44.252 11:46:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:44.252 11:46:14 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:44.252 11:46:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:44.252 11:46:14 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:44.252 11:46:14 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:44.252 11:46:14 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:44.252 11:46:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:44.252 11:46:14 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.252 11:46:14 -- common/autotest_common.sh@10 -- # set +x 00:22:44.820 nvme0n1 00:22:44.820 11:46:15 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.820 11:46:15 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:44.820 11:46:15 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:44.820 11:46:15 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.820 11:46:15 -- common/autotest_common.sh@10 -- # set 
+x 00:22:44.820 11:46:15 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.820 11:46:15 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.820 11:46:15 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:44.820 11:46:15 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.820 11:46:15 -- common/autotest_common.sh@10 -- # set +x 00:22:44.820 11:46:15 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.820 11:46:15 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:44.820 11:46:15 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:22:44.820 11:46:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:44.820 11:46:15 -- host/auth.sh@44 -- # digest=sha512 00:22:44.820 11:46:15 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:44.820 11:46:15 -- host/auth.sh@44 -- # keyid=2 00:22:44.820 11:46:15 -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU1MTUxZDJiYzMxMzQwMWE5NzM0NDY0ZTAzNzQ5ZmLrpa4Q: 00:22:44.820 11:46:15 -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: 00:22:44.820 11:46:15 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:44.820 11:46:15 -- host/auth.sh@49 -- # echo ffdhe6144 00:22:44.820 11:46:15 -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU1MTUxZDJiYzMxMzQwMWE5NzM0NDY0ZTAzNzQ5ZmLrpa4Q: 00:22:44.820 11:46:15 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: ]] 00:22:44.820 11:46:15 -- host/auth.sh@51 -- # echo DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: 00:22:44.820 11:46:15 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 2 00:22:44.820 11:46:15 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:44.820 11:46:15 -- host/auth.sh@70 -- # digest=sha512 00:22:44.820 11:46:15 -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:22:44.820 11:46:15 -- host/auth.sh@70 -- # keyid=2 00:22:44.820 11:46:15 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:44.820 11:46:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:44.820 11:46:15 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.820 11:46:15 -- common/autotest_common.sh@10 -- # set +x 00:22:44.820 11:46:15 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.820 11:46:15 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:44.820 11:46:15 -- nvmf/common.sh@717 -- # local ip 00:22:44.820 11:46:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:44.820 11:46:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:44.820 11:46:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:44.820 11:46:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:44.820 11:46:15 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:44.820 11:46:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:44.820 11:46:15 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:44.820 11:46:15 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:44.820 11:46:15 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:44.820 11:46:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.820 11:46:15 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.820 11:46:15 -- common/autotest_common.sh@10 -- # set +x 00:22:45.078 nvme0n1 00:22:45.079 11:46:15 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.079 11:46:15 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:45.079 11:46:15 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:45.079 11:46:15 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.079 11:46:15 -- common/autotest_common.sh@10 -- # set +x 00:22:45.079 11:46:15 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.338 11:46:15 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.338 11:46:15 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:45.338 11:46:15 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.338 11:46:15 -- common/autotest_common.sh@10 -- # set +x 00:22:45.338 11:46:15 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.338 11:46:15 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:45.338 11:46:15 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:22:45.338 11:46:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:45.338 11:46:15 -- host/auth.sh@44 -- # digest=sha512 00:22:45.338 11:46:15 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:45.338 11:46:15 -- host/auth.sh@44 -- # keyid=3 00:22:45.338 11:46:15 -- host/auth.sh@45 -- # key=DHHC-1:02:M2UxYWI5ZjhiZDgyMmFkYzA5MGRjNDg3MTBjNDA5NzgyODZmZDZhMzJkNDVjNDY3kKIcSw==: 00:22:45.338 11:46:15 -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: 00:22:45.338 11:46:15 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:45.338 11:46:15 -- host/auth.sh@49 -- # echo ffdhe6144 00:22:45.338 11:46:15 -- host/auth.sh@50 -- # echo DHHC-1:02:M2UxYWI5ZjhiZDgyMmFkYzA5MGRjNDg3MTBjNDA5NzgyODZmZDZhMzJkNDVjNDY3kKIcSw==: 00:22:45.338 11:46:15 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: ]] 00:22:45.338 11:46:15 -- host/auth.sh@51 -- # echo DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: 00:22:45.338 11:46:15 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 3 00:22:45.338 11:46:15 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:45.338 11:46:15 -- host/auth.sh@70 -- # digest=sha512 00:22:45.338 11:46:15 -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:22:45.338 11:46:15 -- host/auth.sh@70 -- # keyid=3 00:22:45.338 11:46:15 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:45.338 11:46:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:45.338 11:46:15 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.338 11:46:15 -- common/autotest_common.sh@10 -- # set +x 00:22:45.338 11:46:15 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.338 11:46:15 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:45.338 11:46:15 -- nvmf/common.sh@717 -- # local ip 00:22:45.338 11:46:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:45.338 11:46:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:45.338 11:46:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:45.338 11:46:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:45.338 11:46:15 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:45.338 11:46:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:45.338 11:46:15 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:45.338 11:46:15 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:45.338 11:46:15 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:45.338 11:46:15 -- 
host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:45.338 11:46:15 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.338 11:46:15 -- common/autotest_common.sh@10 -- # set +x 00:22:45.597 nvme0n1 00:22:45.597 11:46:16 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.597 11:46:16 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:45.597 11:46:16 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:45.597 11:46:16 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.597 11:46:16 -- common/autotest_common.sh@10 -- # set +x 00:22:45.597 11:46:16 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.597 11:46:16 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.597 11:46:16 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:45.597 11:46:16 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.597 11:46:16 -- common/autotest_common.sh@10 -- # set +x 00:22:45.597 11:46:16 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.597 11:46:16 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:45.597 11:46:16 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:22:45.597 11:46:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:45.597 11:46:16 -- host/auth.sh@44 -- # digest=sha512 00:22:45.597 11:46:16 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:45.597 11:46:16 -- host/auth.sh@44 -- # keyid=4 00:22:45.597 11:46:16 -- host/auth.sh@45 -- # key=DHHC-1:03:YTA5MWQ1MmMwYjc1NzRhNTZmM2UyZGI5NmQwY2Q3ZGUyZmVhMjk5OTIzODMzMDhiODRiMjliN2Q0MTQyNDgxMegGgZI=: 00:22:45.597 11:46:16 -- host/auth.sh@46 -- # ckey= 00:22:45.597 11:46:16 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:45.597 11:46:16 -- host/auth.sh@49 -- # echo ffdhe6144 00:22:45.597 11:46:16 -- host/auth.sh@50 -- # echo DHHC-1:03:YTA5MWQ1MmMwYjc1NzRhNTZmM2UyZGI5NmQwY2Q3ZGUyZmVhMjk5OTIzODMzMDhiODRiMjliN2Q0MTQyNDgxMegGgZI=: 00:22:45.597 11:46:16 -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:45.597 11:46:16 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 4 00:22:45.597 11:46:16 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:45.597 11:46:16 -- host/auth.sh@70 -- # digest=sha512 00:22:45.597 11:46:16 -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:22:45.597 11:46:16 -- host/auth.sh@70 -- # keyid=4 00:22:45.597 11:46:16 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:45.597 11:46:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:45.597 11:46:16 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.597 11:46:16 -- common/autotest_common.sh@10 -- # set +x 00:22:45.597 11:46:16 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.597 11:46:16 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:45.597 11:46:16 -- nvmf/common.sh@717 -- # local ip 00:22:45.597 11:46:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:45.597 11:46:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:45.597 11:46:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:45.597 11:46:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:45.597 11:46:16 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:45.597 11:46:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:45.597 11:46:16 -- nvmf/common.sh@724 -- 
# ip=NVMF_FIRST_TARGET_IP 00:22:45.597 11:46:16 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:45.597 11:46:16 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:45.597 11:46:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:45.597 11:46:16 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.597 11:46:16 -- common/autotest_common.sh@10 -- # set +x 00:22:46.166 nvme0n1 00:22:46.166 11:46:16 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.166 11:46:16 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:46.166 11:46:16 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:46.166 11:46:16 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.166 11:46:16 -- common/autotest_common.sh@10 -- # set +x 00:22:46.166 11:46:16 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.166 11:46:16 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.166 11:46:16 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:46.166 11:46:16 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.166 11:46:16 -- common/autotest_common.sh@10 -- # set +x 00:22:46.166 11:46:16 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.166 11:46:16 -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:22:46.166 11:46:16 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:46.166 11:46:16 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:22:46.166 11:46:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:46.166 11:46:16 -- host/auth.sh@44 -- # digest=sha512 00:22:46.166 11:46:16 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:46.166 11:46:16 -- host/auth.sh@44 -- # keyid=0 00:22:46.166 11:46:16 -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc2NDAyYTQ1M2YzZmRkNGY3MWU3YTM5NTNiYmRiYmWu2X4P: 00:22:46.166 11:46:16 -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: 00:22:46.166 11:46:16 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:46.166 11:46:16 -- host/auth.sh@49 -- # echo ffdhe8192 00:22:46.166 11:46:16 -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc2NDAyYTQ1M2YzZmRkNGY3MWU3YTM5NTNiYmRiYmWu2X4P: 00:22:46.166 11:46:16 -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: ]] 00:22:46.166 11:46:16 -- host/auth.sh@51 -- # echo DHHC-1:03:ODRhYzI4NGU5ODlmZmYzMWE3ZDYwNmU2MWQ1M2UyOTRhMDgzM2M3NGRjYzNiZTBkODA1MmRiNWY2NTQ0YjgyYWFmfUs=: 00:22:46.166 11:46:16 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 0 00:22:46.166 11:46:16 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:46.166 11:46:16 -- host/auth.sh@70 -- # digest=sha512 00:22:46.166 11:46:16 -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:22:46.166 11:46:16 -- host/auth.sh@70 -- # keyid=0 00:22:46.166 11:46:16 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:46.166 11:46:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:46.166 11:46:16 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.166 11:46:16 -- common/autotest_common.sh@10 -- # set +x 00:22:46.166 11:46:16 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.166 11:46:16 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:46.166 11:46:16 -- nvmf/common.sh@717 -- # 
local ip 00:22:46.166 11:46:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:46.166 11:46:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:46.166 11:46:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:46.166 11:46:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:46.166 11:46:16 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:46.166 11:46:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:46.166 11:46:16 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:46.166 11:46:16 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:46.166 11:46:16 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:46.166 11:46:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:46.166 11:46:16 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.166 11:46:16 -- common/autotest_common.sh@10 -- # set +x 00:22:46.737 nvme0n1 00:22:46.737 11:46:17 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.737 11:46:17 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:46.737 11:46:17 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:46.737 11:46:17 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.737 11:46:17 -- common/autotest_common.sh@10 -- # set +x 00:22:46.737 11:46:17 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.737 11:46:17 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.737 11:46:17 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:46.737 11:46:17 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.737 11:46:17 -- common/autotest_common.sh@10 -- # set +x 00:22:46.737 11:46:17 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.737 11:46:17 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:46.737 11:46:17 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:22:46.737 11:46:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:46.737 11:46:17 -- host/auth.sh@44 -- # digest=sha512 00:22:46.737 11:46:17 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:46.737 11:46:17 -- host/auth.sh@44 -- # keyid=1 00:22:46.737 11:46:17 -- host/auth.sh@45 -- # key=DHHC-1:00:OTI3OWEzMWMxYzJiMGQ4ZGRlMDc0MWY3YTIxNWYwOGY3MzI0OGNkOTJlNmIwMTFjb3wG0Q==: 00:22:46.737 11:46:17 -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: 00:22:46.737 11:46:17 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:46.737 11:46:17 -- host/auth.sh@49 -- # echo ffdhe8192 00:22:46.737 11:46:17 -- host/auth.sh@50 -- # echo DHHC-1:00:OTI3OWEzMWMxYzJiMGQ4ZGRlMDc0MWY3YTIxNWYwOGY3MzI0OGNkOTJlNmIwMTFjb3wG0Q==: 00:22:46.737 11:46:17 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: ]] 00:22:46.737 11:46:17 -- host/auth.sh@51 -- # echo DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: 00:22:46.737 11:46:17 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 1 00:22:46.737 11:46:17 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:46.737 11:46:17 -- host/auth.sh@70 -- # digest=sha512 00:22:46.737 11:46:17 -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:22:46.738 11:46:17 -- host/auth.sh@70 -- # keyid=1 00:22:46.738 11:46:17 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
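
[Every connect in this trace is preceded by the same get_main_ns_ip expansion from nvmf/common.sh@717-731: it maps the transport to the name of the address variable, dereferences it, and echoes the target IP. A compact sketch of the behavior the trace shows; the transport variable name is a hypothetical stand-in, since the trace only shows its expanded value (rdma):

    get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA paths dial the first target IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP       # TCP paths dial the initiator IP
      ip=${ip_candidates[$TEST_TRANSPORT]}         # here: rdma -> NVMF_FIRST_TARGET_IP
      echo "${!ip}"                                # indirect expansion -> 192.168.100.8
    }
]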
00:22:46.738 11:46:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:46.738 11:46:17 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.738 11:46:17 -- common/autotest_common.sh@10 -- # set +x 00:22:46.738 11:46:17 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.738 11:46:17 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:46.738 11:46:17 -- nvmf/common.sh@717 -- # local ip 00:22:46.738 11:46:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:46.738 11:46:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:46.738 11:46:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:46.738 11:46:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:46.738 11:46:17 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:46.738 11:46:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:46.738 11:46:17 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:46.738 11:46:17 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:46.738 11:46:17 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:46.738 11:46:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.738 11:46:17 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.738 11:46:17 -- common/autotest_common.sh@10 -- # set +x 00:22:47.308 nvme0n1 00:22:47.308 11:46:17 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.308 11:46:17 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:47.308 11:46:17 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:47.308 11:46:17 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.308 11:46:17 -- common/autotest_common.sh@10 -- # set +x 00:22:47.308 11:46:17 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.308 11:46:18 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.308 11:46:18 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:47.308 11:46:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.308 11:46:18 -- common/autotest_common.sh@10 -- # set +x 00:22:47.308 11:46:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.308 11:46:18 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:47.308 11:46:18 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:22:47.308 11:46:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:47.308 11:46:18 -- host/auth.sh@44 -- # digest=sha512 00:22:47.308 11:46:18 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:47.308 11:46:18 -- host/auth.sh@44 -- # keyid=2 00:22:47.308 11:46:18 -- host/auth.sh@45 -- # key=DHHC-1:01:ZTU1MTUxZDJiYzMxMzQwMWE5NzM0NDY0ZTAzNzQ5ZmLrpa4Q: 00:22:47.308 11:46:18 -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: 00:22:47.308 11:46:18 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:47.308 11:46:18 -- host/auth.sh@49 -- # echo ffdhe8192 00:22:47.308 11:46:18 -- host/auth.sh@50 -- # echo DHHC-1:01:ZTU1MTUxZDJiYzMxMzQwMWE5NzM0NDY0ZTAzNzQ5ZmLrpa4Q: 00:22:47.308 11:46:18 -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: ]] 00:22:47.308 11:46:18 -- host/auth.sh@51 -- # echo DHHC-1:01:MWYwYTI3ODg0ZjU4ZDFjYjAxNDc2Y2YwZjFhZjRmNTW0J/8n: 00:22:47.308 11:46:18 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 2 00:22:47.308 11:46:18 -- 
host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:47.308 11:46:18 -- host/auth.sh@70 -- # digest=sha512 00:22:47.308 11:46:18 -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:22:47.308 11:46:18 -- host/auth.sh@70 -- # keyid=2 00:22:47.308 11:46:18 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:47.308 11:46:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:47.308 11:46:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.308 11:46:18 -- common/autotest_common.sh@10 -- # set +x 00:22:47.308 11:46:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.308 11:46:18 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:47.308 11:46:18 -- nvmf/common.sh@717 -- # local ip 00:22:47.308 11:46:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:47.308 11:46:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:47.308 11:46:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:47.308 11:46:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:47.308 11:46:18 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:47.308 11:46:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:47.308 11:46:18 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:47.308 11:46:18 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:47.308 11:46:18 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:47.308 11:46:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:47.309 11:46:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.309 11:46:18 -- common/autotest_common.sh@10 -- # set +x 00:22:47.876 nvme0n1 00:22:47.877 11:46:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.877 11:46:18 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:47.877 11:46:18 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:47.877 11:46:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.877 11:46:18 -- common/autotest_common.sh@10 -- # set +x 00:22:47.877 11:46:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.877 11:46:18 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.877 11:46:18 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:47.877 11:46:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.877 11:46:18 -- common/autotest_common.sh@10 -- # set +x 00:22:48.135 11:46:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.135 11:46:18 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:48.135 11:46:18 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:22:48.135 11:46:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:48.135 11:46:18 -- host/auth.sh@44 -- # digest=sha512 00:22:48.135 11:46:18 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:48.136 11:46:18 -- host/auth.sh@44 -- # keyid=3 00:22:48.136 11:46:18 -- host/auth.sh@45 -- # key=DHHC-1:02:M2UxYWI5ZjhiZDgyMmFkYzA5MGRjNDg3MTBjNDA5NzgyODZmZDZhMzJkNDVjNDY3kKIcSw==: 00:22:48.136 11:46:18 -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: 00:22:48.136 11:46:18 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:48.136 11:46:18 -- host/auth.sh@49 -- # echo ffdhe8192 00:22:48.136 11:46:18 -- host/auth.sh@50 -- # echo 
DHHC-1:02:M2UxYWI5ZjhiZDgyMmFkYzA5MGRjNDg3MTBjNDA5NzgyODZmZDZhMzJkNDVjNDY3kKIcSw==: 00:22:48.136 11:46:18 -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: ]] 00:22:48.136 11:46:18 -- host/auth.sh@51 -- # echo DHHC-1:00:ZGQzY2NkNDBlZTFkOTNhMTBjMTk3ZWMyMTkzNGU2ODYMoG71: 00:22:48.136 11:46:18 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 3 00:22:48.136 11:46:18 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:48.136 11:46:18 -- host/auth.sh@70 -- # digest=sha512 00:22:48.136 11:46:18 -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:22:48.136 11:46:18 -- host/auth.sh@70 -- # keyid=3 00:22:48.136 11:46:18 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:48.136 11:46:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:48.136 11:46:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.136 11:46:18 -- common/autotest_common.sh@10 -- # set +x 00:22:48.136 11:46:18 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.136 11:46:18 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:48.136 11:46:18 -- nvmf/common.sh@717 -- # local ip 00:22:48.136 11:46:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:48.136 11:46:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:48.136 11:46:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:48.136 11:46:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:48.136 11:46:18 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:48.136 11:46:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:48.136 11:46:18 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:48.136 11:46:18 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:48.136 11:46:18 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:48.136 11:46:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:48.136 11:46:18 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.136 11:46:18 -- common/autotest_common.sh@10 -- # set +x 00:22:48.705 nvme0n1 00:22:48.705 11:46:19 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.705 11:46:19 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:48.705 11:46:19 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:48.705 11:46:19 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.705 11:46:19 -- common/autotest_common.sh@10 -- # set +x 00:22:48.705 11:46:19 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.705 11:46:19 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.705 11:46:19 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:48.705 11:46:19 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.705 11:46:19 -- common/autotest_common.sh@10 -- # set +x 00:22:48.705 11:46:19 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.705 11:46:19 -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:48.705 11:46:19 -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:22:48.705 11:46:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:48.705 11:46:19 -- host/auth.sh@44 -- # digest=sha512 00:22:48.705 11:46:19 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:48.705 11:46:19 -- host/auth.sh@44 -- # keyid=4 00:22:48.705 11:46:19 -- host/auth.sh@45 -- 
# key=DHHC-1:03:YTA5MWQ1MmMwYjc1NzRhNTZmM2UyZGI5NmQwY2Q3ZGUyZmVhMjk5OTIzODMzMDhiODRiMjliN2Q0MTQyNDgxMegGgZI=: 00:22:48.705 11:46:19 -- host/auth.sh@46 -- # ckey= 00:22:48.705 11:46:19 -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:48.705 11:46:19 -- host/auth.sh@49 -- # echo ffdhe8192 00:22:48.705 11:46:19 -- host/auth.sh@50 -- # echo DHHC-1:03:YTA5MWQ1MmMwYjc1NzRhNTZmM2UyZGI5NmQwY2Q3ZGUyZmVhMjk5OTIzODMzMDhiODRiMjliN2Q0MTQyNDgxMegGgZI=: 00:22:48.705 11:46:19 -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:48.705 11:46:19 -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 4 00:22:48.705 11:46:19 -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:48.705 11:46:19 -- host/auth.sh@70 -- # digest=sha512 00:22:48.705 11:46:19 -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:22:48.705 11:46:19 -- host/auth.sh@70 -- # keyid=4 00:22:48.705 11:46:19 -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:48.705 11:46:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:48.705 11:46:19 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.705 11:46:19 -- common/autotest_common.sh@10 -- # set +x 00:22:48.705 11:46:19 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.705 11:46:19 -- host/auth.sh@74 -- # get_main_ns_ip 00:22:48.705 11:46:19 -- nvmf/common.sh@717 -- # local ip 00:22:48.705 11:46:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:48.705 11:46:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:48.705 11:46:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:48.705 11:46:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:48.705 11:46:19 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:48.705 11:46:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:48.705 11:46:19 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:48.705 11:46:19 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:48.705 11:46:19 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:48.705 11:46:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:48.705 11:46:19 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.705 11:46:19 -- common/autotest_common.sh@10 -- # set +x 00:22:49.274 nvme0n1 00:22:49.274 11:46:19 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.274 11:46:19 -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:49.274 11:46:19 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.274 11:46:19 -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:49.274 11:46:19 -- common/autotest_common.sh@10 -- # set +x 00:22:49.274 11:46:19 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.274 11:46:19 -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.274 11:46:19 -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:49.274 11:46:19 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.274 11:46:19 -- common/autotest_common.sh@10 -- # set +x 00:22:49.274 11:46:19 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.274 11:46:19 -- host/auth.sh@123 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:49.274 11:46:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:49.274 11:46:19 -- host/auth.sh@44 -- # digest=sha256 00:22:49.274 11:46:19 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
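
[From host/auth.sh@123 onward the target is reprogrammed for sha256/ffdhe2048 with key 1 only, setting up the failure cases that follow: connecting with no DH-HMAC-CHAP key, with the wrong key (key2), and with a mismatched bidirectional key (key1 plus ckey2) must all be refused, which the -32602 "Invalid parameters" JSON-RPC responses below confirm. NOT (from autotest_common.sh) inverts the wrapped command's exit status, so a rejected attach counts as a pass. A minimal sketch of that pattern, with the RPC arguments exactly as logged:

    # expected to fail: host offers no key although the target demands auth
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
    # expected to fail: key2 does not match the key1 provisioned on the target
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2
    # expected to fail: correct host key but mismatched controller key
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey2
    rpc_cmd bdev_nvme_get_controllers | jq length   # must stay 0 after every rejection
]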
00:22:49.274 11:46:19 -- host/auth.sh@44 -- # keyid=1 00:22:49.274 11:46:19 -- host/auth.sh@45 -- # key=DHHC-1:00:OTI3OWEzMWMxYzJiMGQ4ZGRlMDc0MWY3YTIxNWYwOGY3MzI0OGNkOTJlNmIwMTFjb3wG0Q==: 00:22:49.274 11:46:19 -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: 00:22:49.274 11:46:19 -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:49.274 11:46:19 -- host/auth.sh@49 -- # echo ffdhe2048 00:22:49.274 11:46:19 -- host/auth.sh@50 -- # echo DHHC-1:00:OTI3OWEzMWMxYzJiMGQ4ZGRlMDc0MWY3YTIxNWYwOGY3MzI0OGNkOTJlNmIwMTFjb3wG0Q==: 00:22:49.274 11:46:19 -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: ]] 00:22:49.274 11:46:19 -- host/auth.sh@51 -- # echo DHHC-1:02:OWE5MGZjM2ZkODY3NzAyZTliZDYwZTI2OGY1YTgwNmNkODlkNjFmZmUxY2U0NzA5xhwRhA==: 00:22:49.274 11:46:19 -- host/auth.sh@124 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:49.274 11:46:19 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.274 11:46:19 -- common/autotest_common.sh@10 -- # set +x 00:22:49.274 11:46:19 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.274 11:46:19 -- host/auth.sh@125 -- # get_main_ns_ip 00:22:49.274 11:46:19 -- nvmf/common.sh@717 -- # local ip 00:22:49.274 11:46:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:49.274 11:46:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:49.274 11:46:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:49.274 11:46:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:49.274 11:46:19 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:49.274 11:46:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:49.274 11:46:19 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:49.274 11:46:19 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:49.274 11:46:19 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:49.274 11:46:19 -- host/auth.sh@125 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:49.274 11:46:19 -- common/autotest_common.sh@648 -- # local es=0 00:22:49.274 11:46:19 -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:49.274 11:46:19 -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:49.274 11:46:19 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:49.274 11:46:19 -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:49.274 11:46:19 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:49.274 11:46:19 -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:49.274 11:46:19 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.274 11:46:19 -- common/autotest_common.sh@10 -- # set +x 00:22:49.274 request: 00:22:49.274 { 00:22:49.274 "name": "nvme0", 00:22:49.274 "trtype": "rdma", 00:22:49.274 "traddr": "192.168.100.8", 00:22:49.274 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:49.274 "adrfam": "ipv4", 00:22:49.274 "trsvcid": "4420", 00:22:49.274 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:49.274 "method": "bdev_nvme_attach_controller", 00:22:49.274 "req_id": 1 
00:22:49.274 } 00:22:49.274 Got JSON-RPC error response 00:22:49.274 response: 00:22:49.274 { 00:22:49.274 "code": -32602, 00:22:49.274 "message": "Invalid parameters" 00:22:49.274 } 00:22:49.274 11:46:20 -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:49.274 11:46:20 -- common/autotest_common.sh@651 -- # es=1 00:22:49.274 11:46:20 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:49.274 11:46:20 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:49.274 11:46:20 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:49.274 11:46:20 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:22:49.274 11:46:20 -- host/auth.sh@127 -- # jq length 00:22:49.274 11:46:20 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.274 11:46:20 -- common/autotest_common.sh@10 -- # set +x 00:22:49.533 11:46:20 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.533 11:46:20 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:22:49.533 11:46:20 -- host/auth.sh@130 -- # get_main_ns_ip 00:22:49.533 11:46:20 -- nvmf/common.sh@717 -- # local ip 00:22:49.533 11:46:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:49.533 11:46:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:49.533 11:46:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:49.533 11:46:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:49.533 11:46:20 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:49.533 11:46:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:49.533 11:46:20 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:49.533 11:46:20 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:49.533 11:46:20 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:49.533 11:46:20 -- host/auth.sh@130 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:49.533 11:46:20 -- common/autotest_common.sh@648 -- # local es=0 00:22:49.533 11:46:20 -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:49.533 11:46:20 -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:49.533 11:46:20 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:49.533 11:46:20 -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:49.533 11:46:20 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:49.533 11:46:20 -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:49.533 11:46:20 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.533 11:46:20 -- common/autotest_common.sh@10 -- # set +x 00:22:49.533 request: 00:22:49.533 { 00:22:49.533 "name": "nvme0", 00:22:49.533 "trtype": "rdma", 00:22:49.533 "traddr": "192.168.100.8", 00:22:49.533 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:49.533 "adrfam": "ipv4", 00:22:49.533 "trsvcid": "4420", 00:22:49.533 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:49.533 "dhchap_key": "key2", 00:22:49.533 "method": "bdev_nvme_attach_controller", 00:22:49.533 "req_id": 1 00:22:49.533 } 00:22:49.533 Got JSON-RPC error response 00:22:49.533 response: 00:22:49.533 { 00:22:49.533 "code": -32602, 00:22:49.533 "message": "Invalid parameters" 
00:22:49.533 } 00:22:49.533 11:46:20 -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:49.533 11:46:20 -- common/autotest_common.sh@651 -- # es=1 00:22:49.533 11:46:20 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:49.533 11:46:20 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:49.533 11:46:20 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:49.533 11:46:20 -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_get_controllers 00:22:49.533 11:46:20 -- host/auth.sh@133 -- # jq length 00:22:49.533 11:46:20 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.533 11:46:20 -- common/autotest_common.sh@10 -- # set +x 00:22:49.533 11:46:20 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.533 11:46:20 -- host/auth.sh@133 -- # (( 0 == 0 )) 00:22:49.533 11:46:20 -- host/auth.sh@136 -- # get_main_ns_ip 00:22:49.533 11:46:20 -- nvmf/common.sh@717 -- # local ip 00:22:49.533 11:46:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:49.533 11:46:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:49.533 11:46:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:49.533 11:46:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:49.533 11:46:20 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:49.533 11:46:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:49.533 11:46:20 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:49.533 11:46:20 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:49.533 11:46:20 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:49.533 11:46:20 -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:49.533 11:46:20 -- common/autotest_common.sh@648 -- # local es=0 00:22:49.533 11:46:20 -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:49.533 11:46:20 -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:49.533 11:46:20 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:49.533 11:46:20 -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:49.533 11:46:20 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:49.533 11:46:20 -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:49.533 11:46:20 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.533 11:46:20 -- common/autotest_common.sh@10 -- # set +x 00:22:49.793 request: 00:22:49.793 { 00:22:49.793 "name": "nvme0", 00:22:49.793 "trtype": "rdma", 00:22:49.793 "traddr": "192.168.100.8", 00:22:49.793 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:49.793 "adrfam": "ipv4", 00:22:49.793 "trsvcid": "4420", 00:22:49.793 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:49.793 "dhchap_key": "key1", 00:22:49.793 "dhchap_ctrlr_key": "ckey2", 00:22:49.793 "method": "bdev_nvme_attach_controller", 00:22:49.793 "req_id": 1 00:22:49.793 } 00:22:49.793 Got JSON-RPC error response 00:22:49.793 response: 00:22:49.793 { 00:22:49.793 "code": -32602, 00:22:49.793 "message": "Invalid parameters" 00:22:49.793 } 00:22:49.793 11:46:20 -- 
common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:49.793 11:46:20 -- common/autotest_common.sh@651 -- # es=1 00:22:49.793 11:46:20 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:49.793 11:46:20 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:49.793 11:46:20 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:49.793 11:46:20 -- host/auth.sh@140 -- # trap - SIGINT SIGTERM EXIT 00:22:49.793 11:46:20 -- host/auth.sh@141 -- # cleanup 00:22:49.793 11:46:20 -- host/auth.sh@24 -- # nvmftestfini 00:22:49.793 11:46:20 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:49.793 11:46:20 -- nvmf/common.sh@117 -- # sync 00:22:49.793 11:46:20 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:49.793 11:46:20 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:49.793 11:46:20 -- nvmf/common.sh@120 -- # set +e 00:22:49.793 11:46:20 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:49.793 11:46:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:49.793 rmmod nvme_rdma 00:22:49.793 rmmod nvme_fabrics 00:22:49.793 11:46:20 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:49.793 11:46:20 -- nvmf/common.sh@124 -- # set -e 00:22:49.793 11:46:20 -- nvmf/common.sh@125 -- # return 0 00:22:49.793 11:46:20 -- nvmf/common.sh@478 -- # '[' -n 3110314 ']' 00:22:49.793 11:46:20 -- nvmf/common.sh@479 -- # killprocess 3110314 00:22:49.793 11:46:20 -- common/autotest_common.sh@946 -- # '[' -z 3110314 ']' 00:22:49.793 11:46:20 -- common/autotest_common.sh@950 -- # kill -0 3110314 00:22:49.793 11:46:20 -- common/autotest_common.sh@951 -- # uname 00:22:49.793 11:46:20 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:49.793 11:46:20 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3110314 00:22:49.793 11:46:20 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:49.793 11:46:20 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:49.793 11:46:20 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3110314' 00:22:49.793 killing process with pid 3110314 00:22:49.793 11:46:20 -- common/autotest_common.sh@965 -- # kill 3110314 00:22:49.793 11:46:20 -- common/autotest_common.sh@970 -- # wait 3110314 00:22:50.052 11:46:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:50.052 11:46:20 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:22:50.052 11:46:20 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:22:50.052 11:46:20 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:50.052 11:46:20 -- host/auth.sh@27 -- # clean_kernel_target 00:22:50.052 11:46:20 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:22:50.052 11:46:20 -- nvmf/common.sh@675 -- # echo 0 00:22:50.052 11:46:20 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:50.052 11:46:20 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:50.052 11:46:20 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:50.052 11:46:20 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:50.052 11:46:20 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:22:50.052 11:46:20 -- nvmf/common.sh@684 -- # modprobe -r nvmet_rdma nvmet 00:22:50.052 11:46:20 -- nvmf/common.sh@687 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:22:53.343 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:22:53.343 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:22:53.343 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:22:53.343 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:22:53.343 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:22:53.343 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:22:53.343 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:22:53.343 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:22:53.343 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:22:53.343 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:22:53.343 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:22:53.343 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:22:53.343 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:22:53.343 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:22:53.343 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:22:53.343 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:22:58.664 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:22:58.664 11:46:28 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.x0v /tmp/spdk.key-null.Ydb /tmp/spdk.key-sha256.XNG /tmp/spdk.key-sha384.oMz /tmp/spdk.key-sha512.Rc5 /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:22:58.664 11:46:28 -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:23:01.202 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:23:01.202 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:23:01.202 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:23:01.202 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:23:01.202 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:23:01.202 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:23:01.202 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:23:01.202 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:23:01.202 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:23:01.202 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:23:01.202 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:23:01.202 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:23:01.202 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:23:01.202 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:23:01.202 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:23:01.202 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:23:01.202 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:23:01.202 00:23:01.202 real 0m55.599s 00:23:01.202 user 0m41.095s 00:23:01.202 sys 0m14.918s 00:23:01.202 11:46:31 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:01.202 11:46:31 -- common/autotest_common.sh@10 -- # set +x 00:23:01.202 ************************************ 00:23:01.202 END TEST nvmf_auth 00:23:01.202 ************************************ 00:23:01.460 11:46:31 -- nvmf/nvmf.sh@105 -- # [[ rdma == \t\c\p ]] 00:23:01.460 11:46:31 -- nvmf/nvmf.sh@109 -- # [[ 0 -eq 1 ]] 00:23:01.460 11:46:31 -- nvmf/nvmf.sh@114 -- # [[ 0 -eq 1 ]] 00:23:01.460 11:46:31 -- nvmf/nvmf.sh@119 -- # [[ phy == phy ]] 00:23:01.460 11:46:31 -- nvmf/nvmf.sh@120 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:23:01.460 11:46:31 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:01.460 11:46:31 -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:23:01.460 11:46:31 -- common/autotest_common.sh@10 -- # set +x 00:23:01.460 ************************************ 00:23:01.460 START TEST nvmf_bdevperf 00:23:01.460 ************************************ 00:23:01.460 11:46:32 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:23:01.460 * Looking for test storage... 00:23:01.460 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:01.460 11:46:32 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:01.460 11:46:32 -- nvmf/common.sh@7 -- # uname -s 00:23:01.460 11:46:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:01.460 11:46:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:01.460 11:46:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:01.460 11:46:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:01.460 11:46:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:01.460 11:46:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:01.461 11:46:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:01.461 11:46:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:01.461 11:46:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:01.461 11:46:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:01.461 11:46:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:23:01.461 11:46:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:23:01.461 11:46:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:01.461 11:46:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:01.461 11:46:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:01.461 11:46:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:01.461 11:46:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:01.461 11:46:32 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:01.461 11:46:32 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:01.461 11:46:32 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:01.461 11:46:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.461 11:46:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.461 11:46:32 -- paths/export.sh@4 -- 
# PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.461 11:46:32 -- paths/export.sh@5 -- # export PATH 00:23:01.461 11:46:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.461 11:46:32 -- nvmf/common.sh@47 -- # : 0 00:23:01.461 11:46:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:01.461 11:46:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:01.461 11:46:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:01.461 11:46:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:01.461 11:46:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:01.461 11:46:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:01.461 11:46:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:01.461 11:46:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:01.461 11:46:32 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:01.461 11:46:32 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:01.461 11:46:32 -- host/bdevperf.sh@24 -- # nvmftestinit 00:23:01.461 11:46:32 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:23:01.461 11:46:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:01.461 11:46:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:01.461 11:46:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:01.461 11:46:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:01.461 11:46:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:01.461 11:46:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:01.461 11:46:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:01.461 11:46:32 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:01.461 11:46:32 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:01.461 11:46:32 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:01.461 11:46:32 -- common/autotest_common.sh@10 -- # set +x 00:23:08.030 11:46:38 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:08.030 11:46:38 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:08.030 11:46:38 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:08.030 11:46:38 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:08.030 11:46:38 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:08.030 11:46:38 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:08.030 11:46:38 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:08.030 11:46:38 -- nvmf/common.sh@295 -- # net_devs=() 00:23:08.030 11:46:38 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:08.030 11:46:38 -- 
nvmf/common.sh@296 -- # e810=() 00:23:08.030 11:46:38 -- nvmf/common.sh@296 -- # local -ga e810 00:23:08.030 11:46:38 -- nvmf/common.sh@297 -- # x722=() 00:23:08.030 11:46:38 -- nvmf/common.sh@297 -- # local -ga x722 00:23:08.030 11:46:38 -- nvmf/common.sh@298 -- # mlx=() 00:23:08.030 11:46:38 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:08.030 11:46:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:08.030 11:46:38 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:08.030 11:46:38 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:08.030 11:46:38 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:08.030 11:46:38 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:08.030 11:46:38 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:08.030 11:46:38 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:08.030 11:46:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:08.030 11:46:38 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:08.030 11:46:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:08.030 11:46:38 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:08.030 11:46:38 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:08.030 11:46:38 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:08.030 11:46:38 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:08.030 11:46:38 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:23:08.030 11:46:38 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:23:08.030 11:46:38 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:23:08.030 11:46:38 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:08.030 11:46:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:08.031 11:46:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:23:08.031 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:23:08.031 11:46:38 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:08.031 11:46:38 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:08.031 11:46:38 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:08.031 11:46:38 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:08.031 11:46:38 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:08.031 11:46:38 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:08.031 11:46:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:08.031 11:46:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:23:08.031 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:23:08.031 11:46:38 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:08.031 11:46:38 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:08.031 11:46:38 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:08.031 11:46:38 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:08.031 11:46:38 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:08.031 11:46:38 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:08.031 11:46:38 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:08.031 11:46:38 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:23:08.031 11:46:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:08.031 11:46:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:08.031 11:46:38 -- nvmf/common.sh@384 -- # 
(( 1 == 0 )) 00:23:08.031 11:46:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:08.031 11:46:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:23:08.031 Found net devices under 0000:18:00.0: mlx_0_0 00:23:08.031 11:46:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:08.031 11:46:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:08.031 11:46:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:08.031 11:46:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:08.031 11:46:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:08.031 11:46:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:23:08.031 Found net devices under 0000:18:00.1: mlx_0_1 00:23:08.031 11:46:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:08.031 11:46:38 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:08.031 11:46:38 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:08.031 11:46:38 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:08.031 11:46:38 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:23:08.031 11:46:38 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:23:08.031 11:46:38 -- nvmf/common.sh@409 -- # rdma_device_init 00:23:08.031 11:46:38 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:23:08.031 11:46:38 -- nvmf/common.sh@58 -- # uname 00:23:08.031 11:46:38 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:23:08.031 11:46:38 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:23:08.031 11:46:38 -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:08.031 11:46:38 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:23:08.031 11:46:38 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:08.031 11:46:38 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:23:08.031 11:46:38 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:23:08.031 11:46:38 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:23:08.031 11:46:38 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:23:08.031 11:46:38 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:08.031 11:46:38 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:08.031 11:46:38 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:08.031 11:46:38 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:08.031 11:46:38 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:08.031 11:46:38 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:08.031 11:46:38 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:08.031 11:46:38 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:08.031 11:46:38 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:08.031 11:46:38 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:08.031 11:46:38 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:08.031 11:46:38 -- nvmf/common.sh@105 -- # continue 2 00:23:08.031 11:46:38 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:08.031 11:46:38 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:08.031 11:46:38 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:08.031 11:46:38 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:08.031 11:46:38 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:08.031 11:46:38 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:08.031 11:46:38 -- nvmf/common.sh@105 -- # continue 2 00:23:08.031 11:46:38 -- nvmf/common.sh@73 -- # for nic_name 
in $(get_rdma_if_list) 00:23:08.031 11:46:38 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:23:08.031 11:46:38 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:08.031 11:46:38 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:08.031 11:46:38 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:08.031 11:46:38 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:08.031 11:46:38 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:08.031 11:46:38 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:08.031 11:46:38 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:23:08.031 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:08.031 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:23:08.031 altname enp24s0f0np0 00:23:08.031 altname ens785f0np0 00:23:08.031 inet 192.168.100.8/24 scope global mlx_0_0 00:23:08.031 valid_lft forever preferred_lft forever 00:23:08.031 11:46:38 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:08.031 11:46:38 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:23:08.031 11:46:38 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:08.031 11:46:38 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:08.031 11:46:38 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:08.031 11:46:38 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:08.031 11:46:38 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:08.031 11:46:38 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:23:08.031 11:46:38 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:23:08.031 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:08.031 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:23:08.031 altname enp24s0f1np1 00:23:08.031 altname ens785f1np1 00:23:08.031 inet 192.168.100.9/24 scope global mlx_0_1 00:23:08.031 valid_lft forever preferred_lft forever 00:23:08.031 11:46:38 -- nvmf/common.sh@411 -- # return 0 00:23:08.031 11:46:38 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:08.031 11:46:38 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:08.031 11:46:38 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:23:08.031 11:46:38 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:23:08.031 11:46:38 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:08.031 11:46:38 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:08.031 11:46:38 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:08.031 11:46:38 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:08.031 11:46:38 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:08.031 11:46:38 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:08.031 11:46:38 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:08.031 11:46:38 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:08.031 11:46:38 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:08.031 11:46:38 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:08.031 11:46:38 -- nvmf/common.sh@105 -- # continue 2 00:23:08.031 11:46:38 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:08.031 11:46:38 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:08.031 11:46:38 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:08.031 11:46:38 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:08.031 11:46:38 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:08.031 11:46:38 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:08.031 11:46:38 -- 
nvmf/common.sh@105 -- # continue 2 00:23:08.031 11:46:38 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:08.031 11:46:38 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:23:08.031 11:46:38 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:08.031 11:46:38 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:08.031 11:46:38 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:08.031 11:46:38 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:08.031 11:46:38 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:08.031 11:46:38 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:23:08.031 11:46:38 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:08.031 11:46:38 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:08.031 11:46:38 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:08.031 11:46:38 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:08.031 11:46:38 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:23:08.031 192.168.100.9' 00:23:08.031 11:46:38 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:08.031 192.168.100.9' 00:23:08.031 11:46:38 -- nvmf/common.sh@446 -- # head -n 1 00:23:08.031 11:46:38 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:08.031 11:46:38 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:23:08.031 192.168.100.9' 00:23:08.031 11:46:38 -- nvmf/common.sh@447 -- # tail -n +2 00:23:08.031 11:46:38 -- nvmf/common.sh@447 -- # head -n 1 00:23:08.031 11:46:38 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:08.031 11:46:38 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:23:08.031 11:46:38 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:08.031 11:46:38 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:23:08.031 11:46:38 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:23:08.031 11:46:38 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:23:08.031 11:46:38 -- host/bdevperf.sh@25 -- # tgt_init 00:23:08.031 11:46:38 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:23:08.031 11:46:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:08.031 11:46:38 -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:08.031 11:46:38 -- common/autotest_common.sh@10 -- # set +x 00:23:08.031 11:46:38 -- nvmf/common.sh@470 -- # nvmfpid=3121706 00:23:08.032 11:46:38 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:08.032 11:46:38 -- nvmf/common.sh@471 -- # waitforlisten 3121706 00:23:08.032 11:46:38 -- common/autotest_common.sh@827 -- # '[' -z 3121706 ']' 00:23:08.032 11:46:38 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.032 11:46:38 -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:08.032 11:46:38 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:08.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:08.032 11:46:38 -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:08.032 11:46:38 -- common/autotest_common.sh@10 -- # set +x 00:23:08.032 [2024-05-15 11:46:38.580731] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
00:23:08.032 [2024-05-15 11:46:38.580798] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:08.032 EAL: No free 2048 kB hugepages reported on node 1 00:23:08.032 [2024-05-15 11:46:38.651964] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:08.032 [2024-05-15 11:46:38.739851] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:08.032 [2024-05-15 11:46:38.739894] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:08.032 [2024-05-15 11:46:38.739904] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:08.032 [2024-05-15 11:46:38.739913] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:08.032 [2024-05-15 11:46:38.739920] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:08.032 [2024-05-15 11:46:38.739970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:08.032 [2024-05-15 11:46:38.740031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:08.032 [2024-05-15 11:46:38.740033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:08.968 11:46:39 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:08.968 11:46:39 -- common/autotest_common.sh@860 -- # return 0 00:23:08.968 11:46:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:08.968 11:46:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:08.968 11:46:39 -- common/autotest_common.sh@10 -- # set +x 00:23:08.968 11:46:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:08.968 11:46:39 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:08.968 11:46:39 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.968 11:46:39 -- common/autotest_common.sh@10 -- # set +x 00:23:08.968 [2024-05-15 11:46:39.465757] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13af700/0x13b3bf0) succeed. 00:23:08.968 [2024-05-15 11:46:39.476396] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x13b0ca0/0x13f5280) succeed. 
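For reference, the rpc_cmd calls in this phase (nvmf_create_transport just above, the bdev and subsystem setup just below) are ordinary SPDK JSON-RPC invocations. A minimal standalone sketch of the same target bring-up, assuming an SPDK checkout with nvmf_tgt already running and scripts/rpc.py talking to its default /var/tmp/spdk.sock socket:

#!/usr/bin/env bash
set -e
RPC=./scripts/rpc.py   # assumed location inside an SPDK checkout

# RDMA transport with the same sizing as host/bdevperf.sh@17 above
$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

# 64 MiB malloc bdev with 512-byte blocks, exported through cnode1
# (mirrors the host/bdevperf.sh@18..@21 steps that follow)
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The 192.168.100.8 listen address is the one allocate_nic_ips resolved from mlx_0_0 earlier in this log (ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1).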
00:23:08.968 11:46:39 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.968 11:46:39 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:08.968 11:46:39 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.968 11:46:39 -- common/autotest_common.sh@10 -- # set +x 00:23:08.968 Malloc0 00:23:08.968 11:46:39 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.968 11:46:39 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:08.968 11:46:39 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.968 11:46:39 -- common/autotest_common.sh@10 -- # set +x 00:23:08.968 11:46:39 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.968 11:46:39 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:08.968 11:46:39 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.969 11:46:39 -- common/autotest_common.sh@10 -- # set +x 00:23:08.969 11:46:39 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.969 11:46:39 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:08.969 11:46:39 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.969 11:46:39 -- common/autotest_common.sh@10 -- # set +x 00:23:08.969 [2024-05-15 11:46:39.635980] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:08.969 [2024-05-15 11:46:39.636348] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:08.969 11:46:39 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.969 11:46:39 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:23:08.969 11:46:39 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:23:08.969 11:46:39 -- nvmf/common.sh@521 -- # config=() 00:23:08.969 11:46:39 -- nvmf/common.sh@521 -- # local subsystem config 00:23:08.969 11:46:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:08.969 11:46:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:08.969 { 00:23:08.969 "params": { 00:23:08.969 "name": "Nvme$subsystem", 00:23:08.969 "trtype": "$TEST_TRANSPORT", 00:23:08.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:08.969 "adrfam": "ipv4", 00:23:08.969 "trsvcid": "$NVMF_PORT", 00:23:08.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:08.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:08.969 "hdgst": ${hdgst:-false}, 00:23:08.969 "ddgst": ${ddgst:-false} 00:23:08.969 }, 00:23:08.969 "method": "bdev_nvme_attach_controller" 00:23:08.969 } 00:23:08.969 EOF 00:23:08.969 )") 00:23:08.969 11:46:39 -- nvmf/common.sh@543 -- # cat 00:23:08.969 11:46:39 -- nvmf/common.sh@545 -- # jq . 
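gen_nvmf_target_json, traced just above, assembles the config document that bdevperf reads through --json /dev/fd/62; the trace below only echoes the inner bdev_nvme_attach_controller object. A sketch of the complete document under that assumption (the outer "subsystems"/"config" wrapper is inferred, not shown by the trace):

#!/usr/bin/env bash
# Assumed shape of the config handed to bdevperf; only the "params"/"method"
# object is confirmed by the printf output below.
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF

# First pass: 1-second verify workload, queue depth 128, 4 KiB I/O
./build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 1

The run below reports ~18.3K IOPS (71.35 MiB/s) at a 6963.98 us average latency, which is self-consistent for queue depth 128 (128 / 0.00696 s ≈ 18.4K).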
00:23:08.969 11:46:39 -- nvmf/common.sh@546 -- # IFS=, 00:23:08.969 11:46:39 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:08.969 "params": { 00:23:08.969 "name": "Nvme1", 00:23:08.969 "trtype": "rdma", 00:23:08.969 "traddr": "192.168.100.8", 00:23:08.969 "adrfam": "ipv4", 00:23:08.969 "trsvcid": "4420", 00:23:08.969 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.969 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:08.969 "hdgst": false, 00:23:08.969 "ddgst": false 00:23:08.969 }, 00:23:08.969 "method": "bdev_nvme_attach_controller" 00:23:08.969 }' 00:23:08.969 [2024-05-15 11:46:39.689093] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:23:08.969 [2024-05-15 11:46:39.689157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3121870 ] 00:23:08.969 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.324 [2024-05-15 11:46:39.762548] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.324 [2024-05-15 11:46:39.848394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:09.324 Running I/O for 1 seconds... 00:23:10.702 00:23:10.702 Latency(us) 00:23:10.702 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.702 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:10.702 Verification LBA range: start 0x0 length 0x4000 00:23:10.702 Nvme1n1 : 1.00 18265.15 71.35 0.00 0.00 6963.98 983.04 10485.76 00:23:10.702 =================================================================================================================== 00:23:10.702 Total : 18265.15 71.35 0.00 0.00 6963.98 983.04 10485.76 00:23:10.702 11:46:41 -- host/bdevperf.sh@30 -- # bdevperfpid=3122086 00:23:10.702 11:46:41 -- host/bdevperf.sh@32 -- # sleep 3 00:23:10.702 11:46:41 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:23:10.702 11:46:41 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:23:10.702 11:46:41 -- nvmf/common.sh@521 -- # config=() 00:23:10.702 11:46:41 -- nvmf/common.sh@521 -- # local subsystem config 00:23:10.702 11:46:41 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:10.702 11:46:41 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:10.702 { 00:23:10.702 "params": { 00:23:10.702 "name": "Nvme$subsystem", 00:23:10.702 "trtype": "$TEST_TRANSPORT", 00:23:10.702 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.702 "adrfam": "ipv4", 00:23:10.702 "trsvcid": "$NVMF_PORT", 00:23:10.702 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.702 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.702 "hdgst": ${hdgst:-false}, 00:23:10.702 "ddgst": ${ddgst:-false} 00:23:10.702 }, 00:23:10.702 "method": "bdev_nvme_attach_controller" 00:23:10.702 } 00:23:10.702 EOF 00:23:10.702 )") 00:23:10.702 11:46:41 -- nvmf/common.sh@543 -- # cat 00:23:10.702 11:46:41 -- nvmf/common.sh@545 -- # jq . 
00:23:10.702 11:46:41 -- nvmf/common.sh@546 -- # IFS=, 00:23:10.702 11:46:41 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:10.702 "params": { 00:23:10.702 "name": "Nvme1", 00:23:10.702 "trtype": "rdma", 00:23:10.702 "traddr": "192.168.100.8", 00:23:10.702 "adrfam": "ipv4", 00:23:10.702 "trsvcid": "4420", 00:23:10.702 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.702 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:10.702 "hdgst": false, 00:23:10.702 "ddgst": false 00:23:10.702 }, 00:23:10.702 "method": "bdev_nvme_attach_controller" 00:23:10.702 }' 00:23:10.702 [2024-05-15 11:46:41.350161] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:23:10.702 [2024-05-15 11:46:41.350228] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3122086 ] 00:23:10.702 EAL: No free 2048 kB hugepages reported on node 1 00:23:10.702 [2024-05-15 11:46:41.425312] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.961 [2024-05-15 11:46:41.508462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.961 Running I/O for 15 seconds... 00:23:14.248 11:46:44 -- host/bdevperf.sh@33 -- # kill -9 3121706 00:23:14.248 11:46:44 -- host/bdevperf.sh@35 -- # sleep 3 00:23:14.818 [2024-05-15 11:46:45.339906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:119824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.818 [2024-05-15 11:46:45.339949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.818 [2024-05-15 11:46:45.339987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:119832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.818 [2024-05-15 11:46:45.339997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.818 [2024-05-15 11:46:45.340009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:119840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.818 [2024-05-15 11:46:45.340019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.818 [2024-05-15 11:46:45.340031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:119848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.818 [2024-05-15 11:46:45.340040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.818 [2024-05-15 11:46:45.340051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:119856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.818 [2024-05-15 11:46:45.340063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.818 [2024-05-15 11:46:45.340074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:119864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.818 [2024-05-15 11:46:45.340083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.818 [2024-05-15 
11:46:45.340099] nvme_qpair.c: (repeated entries collapsed: the same nvme_io_qpair_print_command WRITE / spdk_nvme_print_completion 'ABORTED - SQ DELETION (00/08)' pair recurs for every outstanding command, lba 119872 through 120504 and beyond, as the host fails all in-flight I/O once the target is killed)
11:46:45.341687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:120512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.820 [2024-05-15 11:46:45.341698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.820 [2024-05-15 11:46:45.341709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:120520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.820 [2024-05-15 11:46:45.341718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.820 [2024-05-15 11:46:45.341728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:120528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.820 [2024-05-15 11:46:45.341737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.820 [2024-05-15 11:46:45.341748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:120536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.820 [2024-05-15 11:46:45.341756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.820 [2024-05-15 11:46:45.341767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:120544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.820 [2024-05-15 11:46:45.341776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.820 [2024-05-15 11:46:45.341786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.820 [2024-05-15 11:46:45.341795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.820 [2024-05-15 11:46:45.341806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:120560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.820 [2024-05-15 11:46:45.341815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.820 [2024-05-15 11:46:45.341825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:120568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.820 [2024-05-15 11:46:45.341835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.820 [2024-05-15 11:46:45.341846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:120576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.820 [2024-05-15 11:46:45.341855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.820 [2024-05-15 11:46:45.341865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:120584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.820 [2024-05-15 11:46:45.341874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.820 [2024-05-15 
11:46:45.341885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:120592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.820 [2024-05-15 11:46:45.341894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.821 [2024-05-15 11:46:45.341905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:120600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.821 [2024-05-15 11:46:45.341915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.821 [2024-05-15 11:46:45.341925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:120608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.821 [2024-05-15 11:46:45.341934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.821 [2024-05-15 11:46:45.341945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.821 [2024-05-15 11:46:45.341953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.821 [2024-05-15 11:46:45.341964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.821 [2024-05-15 11:46:45.341973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.821 [2024-05-15 11:46:45.341983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:120632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.821 [2024-05-15 11:46:45.341992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.821 [2024-05-15 11:46:45.342003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:120640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.821 [2024-05-15 11:46:45.342013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.821 [2024-05-15 11:46:45.342023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:120648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.821 [2024-05-15 11:46:45.342032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.821 [2024-05-15 11:46:45.342043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:120656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.821 [2024-05-15 11:46:45.342052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.821 [2024-05-15 11:46:45.342068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:120664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.821 [2024-05-15 11:46:45.342077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.821 [2024-05-15 
11:46:45.342089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:120672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.821 [2024-05-15 11:46:45.342098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.821 [2024-05-15 11:46:45.342109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:120680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.821 [2024-05-15 11:46:45.342118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.821 [2024-05-15 11:46:45.342128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:120688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.821 [2024-05-15 11:46:45.342137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.821 [2024-05-15 11:46:45.342148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:120696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.821 [2024-05-15 11:46:45.342157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.821 [2024-05-15 11:46:45.342167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:120704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.821 [2024-05-15 11:46:45.342176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.821 [2024-05-15 11:46:45.342187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:120712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.821 [2024-05-15 11:46:45.342195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.821 [2024-05-15 11:46:45.342206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:120720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.821 [2024-05-15 11:46:45.342214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.821 [2024-05-15 11:46:45.342227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:120728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.821 [2024-05-15 11:46:45.342236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.821 [2024-05-15 11:46:45.342247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:120736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.821 [2024-05-15 11:46:45.342255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.821 [2024-05-15 11:46:45.342266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:120744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.821 [2024-05-15 11:46:45.342275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.821 [2024-05-15 
11:46:45.342285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:120752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.821 [2024-05-15 11:46:45.342294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.821 [2024-05-15 11:46:45.342305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:120760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.821 [2024-05-15 11:46:45.342313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.821 [2024-05-15 11:46:45.342324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:120768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.821 [2024-05-15 11:46:45.342335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.821 [2024-05-15 11:46:45.342346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:120776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.821 [2024-05-15 11:46:45.342355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.821 [2024-05-15 11:46:45.342366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:120784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.821 [2024-05-15 11:46:45.342374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.821 [2024-05-15 11:46:45.342385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:120792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.821 [2024-05-15 11:46:45.342394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.821 [2024-05-15 11:46:45.342404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:120800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.821 [2024-05-15 11:46:45.342413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.821 [2024-05-15 11:46:45.342424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:120808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.821 [2024-05-15 11:46:45.342433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.821 [2024-05-15 11:46:45.342443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.821 [2024-05-15 11:46:45.342452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.821 [2024-05-15 11:46:45.342463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:120824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.821 [2024-05-15 11:46:45.342472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.821 [2024-05-15 
11:46:45.342482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:119808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x182700 00:23:14.821 [2024-05-15 11:46:45.342492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:14.821 [2024-05-15 11:46:45.344248] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:14.821 [2024-05-15 11:46:45.344261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:14.821 [2024-05-15 11:46:45.344270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119816 len:8 PRP1 0x0 PRP2 0x0 00:23:14.821 [2024-05-15 11:46:45.344279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.821 [2024-05-15 11:46:45.344323] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 00:23:14.821 [2024-05-15 11:46:45.347025] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:14.821 [2024-05-15 11:46:45.361240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:14.821 [2024-05-15 11:46:45.364793] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:14.821 [2024-05-15 11:46:45.364814] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:14.821 [2024-05-15 11:46:45.364836] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:23:15.758 [2024-05-15 11:46:46.368955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:15.758 [2024-05-15 11:46:46.369026] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:15.758 [2024-05-15 11:46:46.369633] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:15.758 [2024-05-15 11:46:46.369669] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:15.758 [2024-05-15 11:46:46.369701] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:23:15.758 [2024-05-15 11:46:46.371265] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:15.758 [2024-05-15 11:46:46.372649] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
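While a reset/reconnect loop like the one above is spinning, the controller state can be inspected out of band; a minimal sketch, assuming the bdevperf app was started with an RPC socket (the socket path and the controller name Nvme1, inferred from the Nvme1n1 bdev in this run, are assumptions):
# query the attached NVMe-oF controller from another shell
sudo ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n Nvme1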
00:23:15.758 [2024-05-15 11:46:46.384068] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:15.758 [2024-05-15 11:46:46.386902] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:15.758 [2024-05-15 11:46:46.386929] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:15.758 [2024-05-15 11:46:46.386941] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:23:16.694 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3121706 Killed "${NVMF_APP[@]}" "$@" 00:23:16.694 11:46:47 -- host/bdevperf.sh@36 -- # tgt_init 00:23:16.694 11:46:47 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:23:16.694 11:46:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:16.694 11:46:47 -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:16.694 11:46:47 -- common/autotest_common.sh@10 -- # set +x 00:23:16.694 11:46:47 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:16.694 11:46:47 -- nvmf/common.sh@470 -- # nvmfpid=3122962 00:23:16.694 11:46:47 -- nvmf/common.sh@471 -- # waitforlisten 3122962 00:23:16.695 11:46:47 -- common/autotest_common.sh@827 -- # '[' -z 3122962 ']' 00:23:16.695 11:46:47 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:16.695 11:46:47 -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:16.695 11:46:47 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:16.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:16.695 11:46:47 -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:16.695 11:46:47 -- common/autotest_common.sh@10 -- # set +x 00:23:16.695 [2024-05-15 11:46:47.356065] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:23:16.695 [2024-05-15 11:46:47.356120] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:16.695 [2024-05-15 11:46:47.390739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:16.695 [2024-05-15 11:46:47.390765] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:16.695 [2024-05-15 11:46:47.390944] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:16.695 [2024-05-15 11:46:47.390955] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:16.695 [2024-05-15 11:46:47.390965] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:23:16.695 EAL: No free 2048 kB hugepages reported on node 1 00:23:16.695 [2024-05-15 11:46:47.393734] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
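The nvmfappstart traced just above amounts to launching the target binary directly and waiting for its RPC socket; a minimal sketch, assuming the same build tree (the rpc.py readiness probe is an addition here, not part of the harness):
# restart the NVMe-oF target: shm id 0, tracepoint mask 0xFFFF, core mask 0xE (cores 1-3)
sudo ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
# block until the app answers RPCs on the default /var/tmp/spdk.sock before configuring it
sudo ./scripts/rpc.py -t 30 rpc_get_methods > /dev/null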
00:23:16.695 [2024-05-15 11:46:47.397337] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:16.695 [2024-05-15 11:46:47.399831] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:16.695 [2024-05-15 11:46:47.399852] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:16.695 [2024-05-15 11:46:47.399861] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:23:16.695 [2024-05-15 11:46:47.430152] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:16.954 [2024-05-15 11:46:47.511457] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:16.954 [2024-05-15 11:46:47.511497] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:16.954 [2024-05-15 11:46:47.511507] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:16.954 [2024-05-15 11:46:47.511516] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:16.954 [2024-05-15 11:46:47.511523] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:16.954 [2024-05-15 11:46:47.511563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:16.954 [2024-05-15 11:46:47.511638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:16.954 [2024-05-15 11:46:47.511640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.522 11:46:48 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:17.522 11:46:48 -- common/autotest_common.sh@860 -- # return 0 00:23:17.522 11:46:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:17.522 11:46:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:17.522 11:46:48 -- common/autotest_common.sh@10 -- # set +x 00:23:17.522 11:46:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:17.522 11:46:48 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:17.522 11:46:48 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.522 11:46:48 -- common/autotest_common.sh@10 -- # set +x 00:23:17.522 [2024-05-15 11:46:48.261325] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x820700/0x824bf0) succeed. 00:23:17.522 [2024-05-15 11:46:48.272077] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x821ca0/0x866280) succeed. 
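The rpc_cmd wrappers here and in the lines that follow map one-to-one onto plain rpc.py calls; a minimal sketch of the full subsystem bring-up recorded in this stretch of the log, assuming the target from the previous step is listening on the default RPC socket (scripts/rpc.py path assumed):
# RDMA transport with 1024 shared buffers and 8 KiB in-capsule data, as above
sudo ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
# 64 MiB malloc bdev with 512-byte blocks, exported through cnode1 on 192.168.100.8:4420
sudo ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
sudo ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
sudo ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
sudo ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420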
00:23:17.782 11:46:48 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.782 11:46:48 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:17.782 11:46:48 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.782 11:46:48 -- common/autotest_common.sh@10 -- # set +x 00:23:17.782 Malloc0 00:23:17.782 11:46:48 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.782 11:46:48 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:17.782 [2024-05-15 11:46:48.403878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:17.782 [2024-05-15 11:46:48.403917] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:17.782 11:46:48 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.782 [2024-05-15 11:46:48.404103] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:17.782 [2024-05-15 11:46:48.404115] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:17.782 [2024-05-15 11:46:48.404126] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:23:17.782 11:46:48 -- common/autotest_common.sh@10 -- # set +x 00:23:17.782 [2024-05-15 11:46:48.405126] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:17.782 [2024-05-15 11:46:48.406913] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:17.782 11:46:48 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.782 11:46:48 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:17.782 11:46:48 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.782 11:46:48 -- common/autotest_common.sh@10 -- # set +x 00:23:17.782 [2024-05-15 11:46:48.418202] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:17.782 [2024-05-15 11:46:48.420707] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:17.782 [2024-05-15 11:46:48.420728] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:17.782 [2024-05-15 11:46:48.420739] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:23:17.782 11:46:48 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.782 11:46:48 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:17.782 11:46:48 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.782 11:46:48 -- common/autotest_common.sh@10 -- # set +x 00:23:17.782 [2024-05-15 11:46:48.426539] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:17.782 [2024-05-15 11:46:48.426852] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:17.782 11:46:48 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.782 11:46:48 -- host/bdevperf.sh@38 -- # wait 3122086 00:23:18.719 [2024-05-15 
11:46:49.424656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:18.719 [2024-05-15 11:46:49.424678] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:18.719 [2024-05-15 11:46:49.424856] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:18.719 [2024-05-15 11:46:49.424867] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:18.719 [2024-05-15 11:46:49.424876] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:23:18.719 [2024-05-15 11:46:49.424890] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:18.719 [2024-05-15 11:46:49.427649] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:18.719 [2024-05-15 11:46:49.437906] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:18.977 [2024-05-15 11:46:49.483512] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:23:27.104
00:23:27.104                                                           Latency(us)
00:23:27.104 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:27.104 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:27.104 Verification LBA range: start 0x0 length 0x4000
00:23:27.104 	 Nvme1n1             :      15.01   12092.89      47.24   13553.12       0.00    4971.03     363.30 1035810.73
00:23:27.104 ===================================================================================================================
00:23:27.104 Total                       :              12092.89      47.24   13553.12       0.00    4971.03     363.30 1035810.73
00:23:27.104 11:46:56 -- host/bdevperf.sh@39 -- # sync 00:23:27.104 11:46:56 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:27.104 11:46:56 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.104 11:46:56 -- common/autotest_common.sh@10 -- # set +x 00:23:27.104 11:46:56 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.104 11:46:56 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:23:27.104 11:46:56 -- host/bdevperf.sh@44 -- # nvmftestfini 00:23:27.104 11:46:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:27.104 11:46:56 -- nvmf/common.sh@117 -- # sync 00:23:27.104 11:46:56 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:23:27.104 11:46:56 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:23:27.104 11:46:56 -- nvmf/common.sh@120 -- # set +e 00:23:27.104 11:46:56 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:27.104 11:46:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:23:27.104 rmmod nvme_rdma 00:23:27.104 rmmod nvme_fabrics 00:23:27.104 11:46:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:27.104 11:46:57 -- nvmf/common.sh@124 -- # set -e 00:23:27.104 11:46:57 -- nvmf/common.sh@125 -- # return 0 00:23:27.104 11:46:57 -- nvmf/common.sh@478 -- # '[' -n 3122962 ']' 00:23:27.104 11:46:57 -- nvmf/common.sh@479 -- # killprocess 3122962 00:23:27.104 11:46:57 -- common/autotest_common.sh@946 -- # '[' -z 3122962 ']' 00:23:27.104 11:46:57 -- common/autotest_common.sh@950 -- # kill -0 3122962 00:23:27.104 11:46:57 -- common/autotest_common.sh@951 -- # uname 00:23:27.104 11:46:57 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:27.104 11:46:57 --
common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3122962 00:23:27.104 11:46:57 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:27.104 11:46:57 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:27.104 11:46:57 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3122962' 00:23:27.104 killing process with pid 3122962 00:23:27.104 11:46:57 -- common/autotest_common.sh@965 -- # kill 3122962 00:23:27.104 [2024-05-15 11:46:57.100603] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:27.104 11:46:57 -- common/autotest_common.sh@970 -- # wait 3122962 00:23:27.104 [2024-05-15 11:46:57.172908] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:23:27.104 11:46:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:27.104 11:46:57 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:23:27.104 00:23:27.104 real 0m25.370s 00:23:27.104 user 1m5.018s 00:23:27.104 sys 0m6.170s 00:23:27.104 11:46:57 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:27.104 11:46:57 -- common/autotest_common.sh@10 -- # set +x 00:23:27.104 ************************************ 00:23:27.104 END TEST nvmf_bdevperf 00:23:27.104 ************************************ 00:23:27.104 11:46:57 -- nvmf/nvmf.sh@121 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:23:27.104 11:46:57 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:27.104 11:46:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:27.104 11:46:57 -- common/autotest_common.sh@10 -- # set +x 00:23:27.104 ************************************ 00:23:27.104 START TEST nvmf_target_disconnect 00:23:27.104 ************************************ 00:23:27.104 11:46:57 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:23:27.104 * Looking for test storage... 
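The run_test line above shows how the next suite is launched; a minimal sketch for invoking it by hand, assuming a host with the same RDMA test bed already configured (sudo assumed):
cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
sudo ./test/nvmf/host/target_disconnect.sh --transport=rdma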
00:23:27.104 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:27.104 11:46:57 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:27.104 11:46:57 -- nvmf/common.sh@7 -- # uname -s 00:23:27.104 11:46:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:27.104 11:46:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:27.104 11:46:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:27.104 11:46:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:27.104 11:46:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:27.104 11:46:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:27.104 11:46:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:27.104 11:46:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:27.104 11:46:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:27.104 11:46:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:27.104 11:46:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:23:27.104 11:46:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:23:27.104 11:46:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:27.104 11:46:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:27.104 11:46:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:27.104 11:46:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:27.104 11:46:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:27.104 11:46:57 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:27.104 11:46:57 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:27.105 11:46:57 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:27.105 11:46:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.105 11:46:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.105 11:46:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.105 11:46:57 -- paths/export.sh@5 -- # export PATH 00:23:27.105 11:46:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.105 11:46:57 -- nvmf/common.sh@47 -- # : 0 00:23:27.105 11:46:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:27.105 11:46:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:27.105 11:46:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:27.105 11:46:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:27.105 11:46:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:27.105 11:46:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:27.105 11:46:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:27.105 11:46:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:27.105 11:46:57 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:23:27.105 11:46:57 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:23:27.105 11:46:57 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:23:27.105 11:46:57 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:23:27.105 11:46:57 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:23:27.105 11:46:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:27.105 11:46:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:27.105 11:46:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:27.105 11:46:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:27.105 11:46:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.105 11:46:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:27.105 11:46:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.105 11:46:57 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:27.105 11:46:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:27.105 11:46:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:27.105 11:46:57 -- common/autotest_common.sh@10 -- # set +x 00:23:32.379 11:47:02 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:32.379 11:47:02 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:32.379 11:47:02 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:32.379 11:47:02 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:32.379 11:47:02 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:32.379 11:47:02 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:32.379 11:47:02 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:32.379 
11:47:02 -- nvmf/common.sh@295 -- # net_devs=() 00:23:32.379 11:47:02 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:32.379 11:47:02 -- nvmf/common.sh@296 -- # e810=() 00:23:32.379 11:47:02 -- nvmf/common.sh@296 -- # local -ga e810 00:23:32.379 11:47:02 -- nvmf/common.sh@297 -- # x722=() 00:23:32.379 11:47:02 -- nvmf/common.sh@297 -- # local -ga x722 00:23:32.379 11:47:02 -- nvmf/common.sh@298 -- # mlx=() 00:23:32.379 11:47:02 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:32.379 11:47:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:32.379 11:47:02 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:32.379 11:47:02 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:32.379 11:47:02 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:32.379 11:47:02 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:32.379 11:47:02 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:32.379 11:47:02 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:32.379 11:47:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:32.379 11:47:02 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:32.379 11:47:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:32.379 11:47:02 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:32.379 11:47:02 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:32.379 11:47:02 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:32.379 11:47:02 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:32.379 11:47:02 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:23:32.379 11:47:02 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:23:32.379 11:47:02 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:23:32.379 11:47:02 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:32.379 11:47:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:32.379 11:47:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:23:32.379 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:23:32.379 11:47:02 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:32.379 11:47:02 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:32.379 11:47:02 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:32.379 11:47:02 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:32.379 11:47:02 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:32.379 11:47:02 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:32.379 11:47:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:32.379 11:47:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:23:32.379 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:23:32.379 11:47:02 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:32.380 11:47:02 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:32.380 11:47:02 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:32.380 11:47:02 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:32.380 11:47:02 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:32.380 11:47:02 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:32.380 11:47:02 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:32.380 11:47:02 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:23:32.380 11:47:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
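The device scan above matches on PCI vendor/device IDs; the same two Mellanox ports can be confirmed outside the harness with an lspci filter (IDs taken from the run; lspci availability assumed):
lspci -nn -d 15b3:1015    # vendor 0x15b3 (Mellanox), device 0x1015, as found at 0000:18:00.0 and 0000:18:00.1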
00:23:32.380 11:47:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.380 11:47:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:32.380 11:47:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.380 11:47:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:23:32.380 Found net devices under 0000:18:00.0: mlx_0_0 00:23:32.380 11:47:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.380 11:47:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:32.380 11:47:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.380 11:47:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:32.380 11:47:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.380 11:47:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:23:32.380 Found net devices under 0000:18:00.1: mlx_0_1 00:23:32.380 11:47:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.380 11:47:02 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:32.380 11:47:02 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:32.380 11:47:02 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:32.380 11:47:02 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:23:32.380 11:47:02 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:23:32.380 11:47:02 -- nvmf/common.sh@409 -- # rdma_device_init 00:23:32.380 11:47:02 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:23:32.380 11:47:02 -- nvmf/common.sh@58 -- # uname 00:23:32.380 11:47:02 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:23:32.380 11:47:02 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:23:32.380 11:47:02 -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:32.380 11:47:02 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:23:32.380 11:47:02 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:32.380 11:47:02 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:23:32.380 11:47:02 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:23:32.380 11:47:02 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:23:32.380 11:47:02 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:23:32.380 11:47:02 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:32.380 11:47:02 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:32.380 11:47:02 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:32.380 11:47:02 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:32.380 11:47:02 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:32.380 11:47:02 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:32.380 11:47:02 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:32.380 11:47:02 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:32.380 11:47:02 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:32.380 11:47:02 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:32.380 11:47:02 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:32.380 11:47:02 -- nvmf/common.sh@105 -- # continue 2 00:23:32.380 11:47:02 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:32.380 11:47:02 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:32.380 11:47:02 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:32.380 11:47:02 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:32.380 11:47:02 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:32.380 11:47:02 -- 
nvmf/common.sh@104 -- # echo mlx_0_1 00:23:32.380 11:47:02 -- nvmf/common.sh@105 -- # continue 2 00:23:32.380 11:47:02 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:32.380 11:47:02 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:23:32.380 11:47:02 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:32.380 11:47:02 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:32.380 11:47:02 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:32.380 11:47:02 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:32.380 11:47:03 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:32.380 11:47:03 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:32.380 11:47:03 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:23:32.380 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:32.380 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:23:32.380 altname enp24s0f0np0 00:23:32.380 altname ens785f0np0 00:23:32.380 inet 192.168.100.8/24 scope global mlx_0_0 00:23:32.380 valid_lft forever preferred_lft forever 00:23:32.380 11:47:03 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:32.380 11:47:03 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:23:32.380 11:47:03 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:32.380 11:47:03 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:32.380 11:47:03 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:32.380 11:47:03 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:32.380 11:47:03 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:32.380 11:47:03 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:23:32.380 11:47:03 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:23:32.380 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:32.380 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:23:32.380 altname enp24s0f1np1 00:23:32.380 altname ens785f1np1 00:23:32.380 inet 192.168.100.9/24 scope global mlx_0_1 00:23:32.380 valid_lft forever preferred_lft forever 00:23:32.380 11:47:03 -- nvmf/common.sh@411 -- # return 0 00:23:32.380 11:47:03 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:32.380 11:47:03 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:32.380 11:47:03 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:23:32.380 11:47:03 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:23:32.380 11:47:03 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:32.380 11:47:03 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:32.380 11:47:03 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:32.380 11:47:03 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:32.380 11:47:03 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:32.380 11:47:03 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:32.380 11:47:03 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:32.380 11:47:03 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:32.380 11:47:03 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:32.380 11:47:03 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:32.380 11:47:03 -- nvmf/common.sh@105 -- # continue 2 00:23:32.380 11:47:03 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:32.380 11:47:03 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:32.380 11:47:03 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:32.380 11:47:03 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:32.380 11:47:03 
-- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:32.380 11:47:03 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:32.380 11:47:03 -- nvmf/common.sh@105 -- # continue 2 00:23:32.380 11:47:03 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:32.380 11:47:03 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:23:32.380 11:47:03 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:32.380 11:47:03 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:32.380 11:47:03 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:32.380 11:47:03 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:32.380 11:47:03 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:32.380 11:47:03 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:23:32.380 11:47:03 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:32.380 11:47:03 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:32.380 11:47:03 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:32.380 11:47:03 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:32.380 11:47:03 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:23:32.380 192.168.100.9' 00:23:32.380 11:47:03 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:32.380 192.168.100.9' 00:23:32.380 11:47:03 -- nvmf/common.sh@446 -- # head -n 1 00:23:32.380 11:47:03 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:32.380 11:47:03 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:23:32.380 192.168.100.9' 00:23:32.380 11:47:03 -- nvmf/common.sh@447 -- # tail -n +2 00:23:32.380 11:47:03 -- nvmf/common.sh@447 -- # head -n 1 00:23:32.380 11:47:03 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:32.380 11:47:03 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:23:32.380 11:47:03 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:32.380 11:47:03 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:23:32.380 11:47:03 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:23:32.380 11:47:03 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:23:32.380 11:47:03 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:23:32.380 11:47:03 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:32.380 11:47:03 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:32.380 11:47:03 -- common/autotest_common.sh@10 -- # set +x 00:23:32.639 ************************************ 00:23:32.639 START TEST nvmf_target_disconnect_tc1 00:23:32.639 ************************************ 00:23:32.639 11:47:03 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:23:32.639 11:47:03 -- host/target_disconnect.sh@32 -- # set +e 00:23:32.639 11:47:03 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:23:32.639 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.639 [2024-05-15 11:47:03.286181] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:32.639 [2024-05-15 11:47:03.286240] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:32.639 [2024-05-15 11:47:03.286254] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7040 00:23:33.574 [2024-05-15 11:47:04.290126] nvme_qpair.c: 
00:23:32.380 11:47:03 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1
00:23:32.380 11:47:03 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:23:32.380 11:47:03 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:23:32.380 11:47:03 -- common/autotest_common.sh@10 -- # set +x
00:23:32.639 ************************************
00:23:32.639 START TEST nvmf_target_disconnect_tc1
00:23:32.639 ************************************
00:23:32.639 11:47:03 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1
00:23:32.639 11:47:03 -- host/target_disconnect.sh@32 -- # set +e
00:23:32.639 11:47:03 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:23:32.639 EAL: No free 2048 kB hugepages reported on node 1
00:23:32.639 [2024-05-15 11:47:03.286181] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:23:32.639 [2024-05-15 11:47:03.286240] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:23:32.639 [2024-05-15 11:47:03.286254] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7040
00:23:33.574 [2024-05-15 11:47:04.290126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:23:33.574 [2024-05-15 11:47:04.290191] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state.
00:23:33.574 [2024-05-15 11:47:04.290226] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state
00:23:33.574 [2024-05-15 11:47:04.290302] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:23:33.574 [2024-05-15 11:47:04.290331] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed
00:23:33.574 spdk_nvme_probe() failed for transport address '192.168.100.8'
00:23:33.574 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred
00:23:33.574 Initializing NVMe Controllers
00:23:33.574 11:47:04 -- host/target_disconnect.sh@33 -- # trap - ERR
00:23:33.574 11:47:04 -- host/target_disconnect.sh@33 -- # print_backtrace
00:23:33.574 11:47:04 -- common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]]
00:23:33.574 11:47:04 -- common/autotest_common.sh@1149 -- # return 0
00:23:33.574 11:47:04 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']'
00:23:33.574 11:47:04 -- host/target_disconnect.sh@41 -- # set -e
00:23:33.574
00:23:33.574 real 0m1.132s
00:23:33.574 user 0m0.870s
00:23:33.574 sys 0m0.249s
00:23:33.574 11:47:04 -- common/autotest_common.sh@1122 -- # xtrace_disable
00:23:33.574 11:47:04 -- common/autotest_common.sh@10 -- # set +x
00:23:33.574 ************************************
00:23:33.574 END TEST nvmf_target_disconnect_tc1
00:23:33.574 ************************************
00:23:33.833 11:47:04 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2
00:23:33.833 11:47:04 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:23:33.833 11:47:04 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:23:33.833 11:47:04 -- common/autotest_common.sh@10 -- # set +x
00:23:33.833 ************************************
00:23:33.833 START TEST nvmf_target_disconnect_tc2
00:23:33.833 ************************************
00:23:33.833 11:47:04 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2
00:23:33.833 11:47:04 -- host/target_disconnect.sh@45 -- # disconnect_init 192.168.100.8
00:23:33.833 11:47:04 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:23:33.833 11:47:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:23:33.833 11:47:04 -- common/autotest_common.sh@720 -- # xtrace_disable
00:23:33.833 11:47:04 -- common/autotest_common.sh@10 -- # set +x
00:23:33.833 11:47:04 -- nvmf/common.sh@470 -- # nvmfpid=3127171
00:23:33.833 11:47:04 -- nvmf/common.sh@471 -- # waitforlisten 3127171
00:23:33.833 11:47:04 -- common/autotest_common.sh@827 -- # '[' -z 3127171 ']'
00:23:33.833 11:47:04 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:23:33.833 11:47:04 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:33.833 11:47:04 -- common/autotest_common.sh@832 -- # local max_retries=100
00:23:33.833 11:47:04 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
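tc1 above is a deliberate negative test: no subsystem is listening yet, so the reconnect example must fail (RDMA_CM_EVENT_REJECTED, then the spdk_nvme_probe() failure), and the script passes only because it ran the initiator under set +e and then asserted a non-zero exit (the '[' 1 '!=' 1 ']' check in the trace). A sketch of that run-and-expect-failure pattern; the reconnect path and flags are copied from the trace, the wrapper itself is illustrative:

  # Sketch: run an initiator that is EXPECTED to fail, as tc1 does above.
  set +e   # do not abort the script on the expected failure
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect \
      -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
  rc=$?
  set -e
  # The test case passes only if the connect attempt was rejected.
  [ "$rc" -ne 0 ] || { echo "expected reconnect to fail"; exit 1; }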
00:23:33.833 11:47:04 -- common/autotest_common.sh@836 -- # xtrace_disable
00:23:33.833 11:47:04 -- common/autotest_common.sh@10 -- # set +x
00:23:33.833 [2024-05-15 11:47:04.411568] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization...
00:23:33.833 [2024-05-15 11:47:04.411617] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:33.833 EAL: No free 2048 kB hugepages reported on node 1
00:23:33.833 [2024-05-15 11:47:04.497427] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:23:33.833 [2024-05-15 11:47:04.584224] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:33.833 [2024-05-15 11:47:04.584264] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:33.833 [2024-05-15 11:47:04.584274] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:33.833 [2024-05-15 11:47:04.584283] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:33.833 [2024-05-15 11:47:04.584290] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:33.833 [2024-05-15 11:47:04.584406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:23:33.833 [2024-05-15 11:47:04.584969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:23:33.833 [2024-05-15 11:47:04.584996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:23:33.833 [2024-05-15 11:47:04.584998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:23:34.769 11:47:05 -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:23:34.769 11:47:05 -- common/autotest_common.sh@860 -- # return 0
00:23:34.769 11:47:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:23:34.769 11:47:05 -- common/autotest_common.sh@726 -- # xtrace_disable
00:23:34.769 11:47:05 -- common/autotest_common.sh@10 -- # set +x
00:23:34.769 11:47:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:34.769 11:47:05 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:23:34.769 11:47:05 -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:34.769 11:47:05 -- common/autotest_common.sh@10 -- # set +x
00:23:34.769 Malloc0
00:23:34.769 11:47:05 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:34.769 11:47:05 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024
00:23:34.769 11:47:05 -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:34.769 11:47:05 -- common/autotest_common.sh@10 -- # set +x
00:23:34.769 [2024-05-15 11:47:05.328516] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x10caff0/0x10d6c80) succeed.
00:23:34.769 [2024-05-15 11:47:05.339407] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x10cc630/0x1118310) succeed.
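disconnect_init above configures the freshly started target over its RPC socket: a 64 MiB malloc bdev, then an RDMA transport with the --num-shared-buffers 1024 derived earlier; the subsystem, namespace and listener RPCs follow in the next lines of the log. A sketch of the same sequence issued directly with scripts/rpc.py (rpc_cmd in the trace is the autotest wrapper around it; all commands and arguments here are taken from the trace):

  # Sketch: the RPC sequence rpc_cmd drives above, via scripts/rpc.py.
  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  $SPDK/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
  $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t rdma -a 192.168.100.8 -s 4420
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener discovery \
      -t rdma -a 192.168.100.8 -s 4420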
00:23:34.769 11:47:05 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.769 11:47:05 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:34.769 11:47:05 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.769 11:47:05 -- common/autotest_common.sh@10 -- # set +x 00:23:34.769 11:47:05 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.769 11:47:05 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:34.769 11:47:05 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.769 11:47:05 -- common/autotest_common.sh@10 -- # set +x 00:23:34.769 11:47:05 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.769 11:47:05 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:34.769 11:47:05 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.769 11:47:05 -- common/autotest_common.sh@10 -- # set +x 00:23:34.769 [2024-05-15 11:47:05.495491] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:34.769 [2024-05-15 11:47:05.495854] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:34.769 11:47:05 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.769 11:47:05 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:23:34.769 11:47:05 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.769 11:47:05 -- common/autotest_common.sh@10 -- # set +x 00:23:34.769 11:47:05 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.769 11:47:05 -- host/target_disconnect.sh@50 -- # reconnectpid=3127368 00:23:34.769 11:47:05 -- host/target_disconnect.sh@52 -- # sleep 2 00:23:34.769 11:47:05 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:23:35.026 EAL: No free 2048 kB hugepages reported on node 1 00:23:36.927 11:47:07 -- host/target_disconnect.sh@53 -- # kill -9 3127171 00:23:36.927 11:47:07 -- host/target_disconnect.sh@55 -- # sleep 2 00:23:38.305 Read completed with error (sct=0, sc=8) 00:23:38.305 starting I/O failed 00:23:38.305 Read completed with error (sct=0, sc=8) 00:23:38.305 starting I/O failed 00:23:38.305 Write completed with error (sct=0, sc=8) 00:23:38.305 starting I/O failed 00:23:38.305 Write completed with error (sct=0, sc=8) 00:23:38.305 starting I/O failed 00:23:38.305 Read completed with error (sct=0, sc=8) 00:23:38.305 starting I/O failed 00:23:38.305 Write completed with error (sct=0, sc=8) 00:23:38.305 starting I/O failed 00:23:38.305 Read completed with error (sct=0, sc=8) 00:23:38.305 starting I/O failed 00:23:38.305 Write completed with error (sct=0, sc=8) 00:23:38.305 starting I/O failed 00:23:38.305 Write completed with error (sct=0, sc=8) 00:23:38.305 starting I/O failed 00:23:38.305 Write completed with error (sct=0, sc=8) 00:23:38.305 starting I/O failed 00:23:38.305 Read completed with error (sct=0, sc=8) 00:23:38.305 starting I/O failed 00:23:38.305 Write completed with error (sct=0, sc=8) 00:23:38.305 starting I/O failed 00:23:38.305 Read completed with error (sct=0, sc=8) 00:23:38.305 
starting I/O failed 00:23:38.305 Write completed with error (sct=0, sc=8) 00:23:38.305 starting I/O failed 00:23:38.305 Write completed with error (sct=0, sc=8) 00:23:38.305 starting I/O failed 00:23:38.305 Read completed with error (sct=0, sc=8) 00:23:38.305 starting I/O failed 00:23:38.305 Read completed with error (sct=0, sc=8) 00:23:38.305 starting I/O failed 00:23:38.305 Write completed with error (sct=0, sc=8) 00:23:38.305 starting I/O failed 00:23:38.305 Write completed with error (sct=0, sc=8) 00:23:38.305 starting I/O failed 00:23:38.305 Read completed with error (sct=0, sc=8) 00:23:38.305 starting I/O failed 00:23:38.305 Write completed with error (sct=0, sc=8) 00:23:38.305 starting I/O failed 00:23:38.306 Read completed with error (sct=0, sc=8) 00:23:38.306 starting I/O failed 00:23:38.306 Write completed with error (sct=0, sc=8) 00:23:38.306 starting I/O failed 00:23:38.306 Write completed with error (sct=0, sc=8) 00:23:38.306 starting I/O failed 00:23:38.306 Write completed with error (sct=0, sc=8) 00:23:38.306 starting I/O failed 00:23:38.306 Write completed with error (sct=0, sc=8) 00:23:38.306 starting I/O failed 00:23:38.306 Write completed with error (sct=0, sc=8) 00:23:38.306 starting I/O failed 00:23:38.306 Read completed with error (sct=0, sc=8) 00:23:38.306 starting I/O failed 00:23:38.306 Read completed with error (sct=0, sc=8) 00:23:38.306 starting I/O failed 00:23:38.306 Read completed with error (sct=0, sc=8) 00:23:38.306 starting I/O failed 00:23:38.306 Write completed with error (sct=0, sc=8) 00:23:38.306 starting I/O failed 00:23:38.306 Read completed with error (sct=0, sc=8) 00:23:38.306 starting I/O failed 00:23:38.306 [2024-05-15 11:47:08.692127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:38.873 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 3127171 Killed "${NVMF_APP[@]}" "$@" 00:23:38.873 11:47:09 -- host/target_disconnect.sh@56 -- # disconnect_init 192.168.100.8 00:23:38.873 11:47:09 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:23:38.873 11:47:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:38.873 11:47:09 -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:38.873 11:47:09 -- common/autotest_common.sh@10 -- # set +x 00:23:38.873 11:47:09 -- nvmf/common.sh@470 -- # nvmfpid=3127894 00:23:38.873 11:47:09 -- nvmf/common.sh@471 -- # waitforlisten 3127894 00:23:38.873 11:47:09 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:23:38.873 11:47:09 -- common/autotest_common.sh@827 -- # '[' -z 3127894 ']' 00:23:38.873 11:47:09 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.873 11:47:09 -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:38.873 11:47:09 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:38.873 11:47:09 -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:38.873 11:47:09 -- common/autotest_common.sh@10 -- # set +x 00:23:38.873 [2024-05-15 11:47:09.574607] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
00:23:38.873 [2024-05-15 11:47:09.574665] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:38.873 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.132 [2024-05-15 11:47:09.666447] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:39.132 Read completed with error (sct=0, sc=8) 00:23:39.132 starting I/O failed 00:23:39.132 Write completed with error (sct=0, sc=8) 00:23:39.132 starting I/O failed 00:23:39.132 Write completed with error (sct=0, sc=8) 00:23:39.132 starting I/O failed 00:23:39.132 Write completed with error (sct=0, sc=8) 00:23:39.132 starting I/O failed 00:23:39.132 Read completed with error (sct=0, sc=8) 00:23:39.132 starting I/O failed 00:23:39.132 Write completed with error (sct=0, sc=8) 00:23:39.132 starting I/O failed 00:23:39.132 Read completed with error (sct=0, sc=8) 00:23:39.132 starting I/O failed 00:23:39.132 Read completed with error (sct=0, sc=8) 00:23:39.132 starting I/O failed 00:23:39.132 Write completed with error (sct=0, sc=8) 00:23:39.132 starting I/O failed 00:23:39.132 Write completed with error (sct=0, sc=8) 00:23:39.132 starting I/O failed 00:23:39.132 Write completed with error (sct=0, sc=8) 00:23:39.132 starting I/O failed 00:23:39.132 Write completed with error (sct=0, sc=8) 00:23:39.132 starting I/O failed 00:23:39.132 Write completed with error (sct=0, sc=8) 00:23:39.132 starting I/O failed 00:23:39.132 Write completed with error (sct=0, sc=8) 00:23:39.132 starting I/O failed 00:23:39.132 Write completed with error (sct=0, sc=8) 00:23:39.132 starting I/O failed 00:23:39.132 Read completed with error (sct=0, sc=8) 00:23:39.132 starting I/O failed 00:23:39.132 Read completed with error (sct=0, sc=8) 00:23:39.132 starting I/O failed 00:23:39.132 Write completed with error (sct=0, sc=8) 00:23:39.132 starting I/O failed 00:23:39.132 Read completed with error (sct=0, sc=8) 00:23:39.132 starting I/O failed 00:23:39.132 Read completed with error (sct=0, sc=8) 00:23:39.132 starting I/O failed 00:23:39.132 Write completed with error (sct=0, sc=8) 00:23:39.132 starting I/O failed 00:23:39.132 Read completed with error (sct=0, sc=8) 00:23:39.132 starting I/O failed 00:23:39.132 Read completed with error (sct=0, sc=8) 00:23:39.132 starting I/O failed 00:23:39.132 Read completed with error (sct=0, sc=8) 00:23:39.132 starting I/O failed 00:23:39.132 Read completed with error (sct=0, sc=8) 00:23:39.132 starting I/O failed 00:23:39.132 Write completed with error (sct=0, sc=8) 00:23:39.132 starting I/O failed 00:23:39.132 Write completed with error (sct=0, sc=8) 00:23:39.132 starting I/O failed 00:23:39.132 Read completed with error (sct=0, sc=8) 00:23:39.132 starting I/O failed 00:23:39.132 Read completed with error (sct=0, sc=8) 00:23:39.132 starting I/O failed 00:23:39.132 Read completed with error (sct=0, sc=8) 00:23:39.132 starting I/O failed 00:23:39.132 Write completed with error (sct=0, sc=8) 00:23:39.132 starting I/O failed 00:23:39.132 Read completed with error (sct=0, sc=8) 00:23:39.132 starting I/O failed 00:23:39.132 [2024-05-15 11:47:09.697185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:39.132 [2024-05-15 11:47:09.755467] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
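Each burst of 32 "completed with error (sct=0, sc=8)" lines above is the reconnect example draining one 32-deep queue (-q 32) after the target was killed out from under it. SCT 0 is the NVMe generic command status type, and SC 0x08 in that table is "Command Aborted due to SQ Deletion" per the NVMe base specification; that decoding comes from the spec, not from anything the tool prints. A quick way to tally these aborts when reading a saved copy of this console output (grep -o rather than grep -c, since the capture wraps several records onto one physical line; console.log is an assumed file name):

  # Sketch: count aborted reads/writes in a saved console dump.
  grep -o 'Read completed with error (sct=0, sc=8)' console.log | wc -l
  grep -o 'Write completed with error (sct=0, sc=8)' console.log | wc -l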
00:23:39.132 [2024-05-15 11:47:09.755508] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.132 [2024-05-15 11:47:09.755518] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.132 [2024-05-15 11:47:09.755542] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.132 [2024-05-15 11:47:09.755549] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:39.132 [2024-05-15 11:47:09.755669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:23:39.132 [2024-05-15 11:47:09.755771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:23:39.132 [2024-05-15 11:47:09.755789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:23:39.132 [2024-05-15 11:47:09.755794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:39.700 11:47:10 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:39.700 11:47:10 -- common/autotest_common.sh@860 -- # return 0 00:23:39.700 11:47:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:39.700 11:47:10 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:39.700 11:47:10 -- common/autotest_common.sh@10 -- # set +x 00:23:39.700 11:47:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.700 11:47:10 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:39.700 11:47:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.700 11:47:10 -- common/autotest_common.sh@10 -- # set +x 00:23:39.700 Malloc0 00:23:39.700 11:47:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.700 11:47:10 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:23:39.700 11:47:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.700 11:47:10 -- common/autotest_common.sh@10 -- # set +x 00:23:39.959 [2024-05-15 11:47:10.486160] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x15beff0/0x15cac80) succeed. 00:23:39.959 [2024-05-15 11:47:10.497131] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15c0630/0x160c310) succeed. 
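The restarted target above (nvmfpid 3127894, replacing the killed 3127171) has to be configured from scratch: nvmf_tgt keeps its configuration only in memory, which is why the same Malloc0/transport/"create_ib_device ... succeed" sequence repeats here. A sketch of that kill-and-replay cycle under stated assumptions: nvmfpid is the shell variable named in the trace, configure_target is a hypothetical helper standing in for the rpc.py sequence sketched earlier, and the polling loop is only a rough stand-in for the autotest waitforlisten helper:

  # Sketch: hard-kill the target mid-I/O and bring up a reconfigured one,
  # as target_disconnect.sh@53/@56 does above.
  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  kill -9 "$nvmfpid"                      # hard stop: no NVMe-oF teardown
  "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  # Poll until the new process answers on /var/tmp/spdk.sock.
  until "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  configure_target                        # hypothetical replay helper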
00:23:39.959 11:47:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.959 11:47:10 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:39.959 11:47:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.959 11:47:10 -- common/autotest_common.sh@10 -- # set +x 00:23:39.959 11:47:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.959 11:47:10 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:39.959 11:47:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.959 11:47:10 -- common/autotest_common.sh@10 -- # set +x 00:23:39.959 11:47:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.959 11:47:10 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:39.959 11:47:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.959 11:47:10 -- common/autotest_common.sh@10 -- # set +x 00:23:39.959 [2024-05-15 11:47:10.650184] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:39.959 [2024-05-15 11:47:10.650515] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:39.959 11:47:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.959 11:47:10 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:23:39.959 11:47:10 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.959 11:47:10 -- common/autotest_common.sh@10 -- # set +x 00:23:39.959 11:47:10 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.959 11:47:10 -- host/target_disconnect.sh@58 -- # wait 3127368 00:23:39.959 Write completed with error (sct=0, sc=8) 00:23:39.959 starting I/O failed 00:23:39.959 Read completed with error (sct=0, sc=8) 00:23:39.959 starting I/O failed 00:23:39.959 Write completed with error (sct=0, sc=8) 00:23:39.959 starting I/O failed 00:23:39.959 Read completed with error (sct=0, sc=8) 00:23:39.959 starting I/O failed 00:23:39.959 Read completed with error (sct=0, sc=8) 00:23:39.959 starting I/O failed 00:23:39.959 Read completed with error (sct=0, sc=8) 00:23:39.959 starting I/O failed 00:23:39.959 Read completed with error (sct=0, sc=8) 00:23:39.959 starting I/O failed 00:23:39.959 Write completed with error (sct=0, sc=8) 00:23:39.959 starting I/O failed 00:23:39.959 Write completed with error (sct=0, sc=8) 00:23:39.959 starting I/O failed 00:23:39.959 Read completed with error (sct=0, sc=8) 00:23:39.959 starting I/O failed 00:23:39.959 Read completed with error (sct=0, sc=8) 00:23:39.959 starting I/O failed 00:23:39.959 Write completed with error (sct=0, sc=8) 00:23:39.959 starting I/O failed 00:23:39.959 Write completed with error (sct=0, sc=8) 00:23:39.959 starting I/O failed 00:23:39.959 Write completed with error (sct=0, sc=8) 00:23:39.959 starting I/O failed 00:23:39.959 Read completed with error (sct=0, sc=8) 00:23:39.959 starting I/O failed 00:23:39.959 Write completed with error (sct=0, sc=8) 00:23:39.959 starting I/O failed 00:23:39.959 Write completed with error (sct=0, sc=8) 00:23:39.959 starting I/O failed 00:23:39.959 Read completed with error (sct=0, sc=8) 00:23:39.959 starting I/O failed 00:23:39.959 Read completed with error (sct=0, sc=8) 00:23:39.959 
starting I/O failed 00:23:39.959 Read completed with error (sct=0, sc=8) 00:23:39.959 starting I/O failed 00:23:39.959 Write completed with error (sct=0, sc=8) 00:23:39.959 starting I/O failed 00:23:39.959 Write completed with error (sct=0, sc=8) 00:23:39.959 starting I/O failed 00:23:39.959 Write completed with error (sct=0, sc=8) 00:23:39.959 starting I/O failed 00:23:39.959 Read completed with error (sct=0, sc=8) 00:23:39.959 starting I/O failed 00:23:39.959 Read completed with error (sct=0, sc=8) 00:23:39.959 starting I/O failed 00:23:39.959 Read completed with error (sct=0, sc=8) 00:23:39.959 starting I/O failed 00:23:39.959 Write completed with error (sct=0, sc=8) 00:23:39.959 starting I/O failed 00:23:39.959 Read completed with error (sct=0, sc=8) 00:23:39.959 starting I/O failed 00:23:39.959 Read completed with error (sct=0, sc=8) 00:23:39.959 starting I/O failed 00:23:39.959 Read completed with error (sct=0, sc=8) 00:23:39.959 starting I/O failed 00:23:39.959 Write completed with error (sct=0, sc=8) 00:23:39.959 starting I/O failed 00:23:39.959 Read completed with error (sct=0, sc=8) 00:23:39.959 starting I/O failed 00:23:39.959 [2024-05-15 11:47:10.702250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:39.959 [2024-05-15 11:47:10.714520] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:39.959 [2024-05-15 11:47:10.714576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:39.959 [2024-05-15 11:47:10.714597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:39.959 [2024-05-15 11:47:10.714607] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:39.959 [2024-05-15 11:47:10.714617] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:40.219 [2024-05-15 11:47:10.724737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:40.219 qpair failed and we were unable to recover it. 00:23:40.219 [2024-05-15 11:47:10.734381] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:40.219 [2024-05-15 11:47:10.734430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:40.219 [2024-05-15 11:47:10.734448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:40.219 [2024-05-15 11:47:10.734458] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:40.219 [2024-05-15 11:47:10.734467] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:40.219 [2024-05-15 11:47:10.744639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:40.219 qpair failed and we were unable to recover it. 
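The failure loop that begins here is the expected aftermath of the restart: the reconnect example still holds controller ID 0x1 issued by the killed target and tries to re-attach its I/O qpairs to the new instance, whose _nvmf_ctrlr_add_io_qpair has never handed out that ID. The Fabrics CONNECT therefore completes with sct 1, sc 130; decimal 130 is 0x82, which in the command-specific status table of the NVMe-oF specification reads as "Connect Invalid Parameters" (spec-based decoding; the log itself only prints the raw numbers). To pull every failed CONNECT status out of a saved copy of this log (console.log is an assumed file name):

  # Sketch: summarize the status of every failed Fabrics CONNECT.
  grep -o 'Connect command completed with error: sct [0-9]*, sc [0-9]*' console.log \
      | sort | uniq -c
  printf 'sc 130 = 0x%x\n' 130   # prints 0x82, "Connect Invalid Parameters"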
00:23:40.219 [2024-05-15 11:47:10.754530] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:40.219 [2024-05-15 11:47:10.754566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:40.219 [2024-05-15 11:47:10.754584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:40.219 [2024-05-15 11:47:10.754593] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:40.219 [2024-05-15 11:47:10.754602] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:40.219 [2024-05-15 11:47:10.764758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:40.219 qpair failed and we were unable to recover it. 00:23:40.219 [2024-05-15 11:47:10.774553] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:40.219 [2024-05-15 11:47:10.774596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:40.219 [2024-05-15 11:47:10.774614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:40.219 [2024-05-15 11:47:10.774623] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:40.219 [2024-05-15 11:47:10.774632] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:40.219 [2024-05-15 11:47:10.784884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:40.219 qpair failed and we were unable to recover it. 00:23:40.219 [2024-05-15 11:47:10.794646] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:40.219 [2024-05-15 11:47:10.794693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:40.220 [2024-05-15 11:47:10.794710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:40.220 [2024-05-15 11:47:10.794720] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:40.220 [2024-05-15 11:47:10.794729] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:40.220 [2024-05-15 11:47:10.805027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:40.220 qpair failed and we were unable to recover it. 
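One way to confirm that diagnosis from the target side is to ask the new instance which controllers actually exist on cnode1; immediately after a restart the list is empty, so a CONNECT naming CNTLID 0x1 can only be rejected. A sketch using the nvmf_subsystem_get_controllers RPC, which I believe is present in an SPDK tree of this vintage but is worth verifying against your checkout:

  # Sketch: list live controllers on the restarted target; right after the
  # restart no controller with cntlid 1 should appear.
  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  $SPDK/scripts/rpc.py nvmf_subsystem_get_controllers nqn.2016-06.io.spdk:cnode1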
00:23:40.220 [2024-05-15 11:47:10.814586] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:40.220 [2024-05-15 11:47:10.814623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:40.220 [2024-05-15 11:47:10.814643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:40.220 [2024-05-15 11:47:10.814653] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:40.220 [2024-05-15 11:47:10.814661] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:40.220 [2024-05-15 11:47:10.824984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:40.220 qpair failed and we were unable to recover it. 00:23:40.220 [2024-05-15 11:47:10.834694] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:40.220 [2024-05-15 11:47:10.834732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:40.220 [2024-05-15 11:47:10.834750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:40.220 [2024-05-15 11:47:10.834760] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:40.220 [2024-05-15 11:47:10.834768] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:40.220 [2024-05-15 11:47:10.845078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:40.220 qpair failed and we were unable to recover it. 00:23:40.220 [2024-05-15 11:47:10.854711] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:40.220 [2024-05-15 11:47:10.854752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:40.220 [2024-05-15 11:47:10.854768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:40.220 [2024-05-15 11:47:10.854778] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:40.220 [2024-05-15 11:47:10.854786] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:40.220 [2024-05-15 11:47:10.864996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:40.220 qpair failed and we were unable to recover it. 
00:23:40.220 [2024-05-15 11:47:10.874828] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:40.220 [2024-05-15 11:47:10.874866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:40.220 [2024-05-15 11:47:10.874883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:40.220 [2024-05-15 11:47:10.874893] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:40.220 [2024-05-15 11:47:10.874901] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:40.220 [2024-05-15 11:47:10.885077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:40.220 qpair failed and we were unable to recover it. 00:23:40.220 [2024-05-15 11:47:10.894937] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:40.220 [2024-05-15 11:47:10.894976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:40.220 [2024-05-15 11:47:10.894992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:40.220 [2024-05-15 11:47:10.895002] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:40.220 [2024-05-15 11:47:10.895014] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:40.220 [2024-05-15 11:47:10.905245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:40.220 qpair failed and we were unable to recover it. 00:23:40.220 [2024-05-15 11:47:10.914886] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:40.220 [2024-05-15 11:47:10.914922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:40.220 [2024-05-15 11:47:10.914938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:40.220 [2024-05-15 11:47:10.914947] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:40.220 [2024-05-15 11:47:10.914956] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:40.220 [2024-05-15 11:47:10.925336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:40.220 qpair failed and we were unable to recover it. 
00:23:40.220 [2024-05-15 11:47:10.935007] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:40.220 [2024-05-15 11:47:10.935047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:40.220 [2024-05-15 11:47:10.935069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:40.220 [2024-05-15 11:47:10.935079] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:40.220 [2024-05-15 11:47:10.935088] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:40.220 [2024-05-15 11:47:10.945363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:40.220 qpair failed and we were unable to recover it. 00:23:40.220 [2024-05-15 11:47:10.955019] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:40.220 [2024-05-15 11:47:10.955064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:40.220 [2024-05-15 11:47:10.955081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:40.220 [2024-05-15 11:47:10.955091] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:40.220 [2024-05-15 11:47:10.955100] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:40.220 [2024-05-15 11:47:10.965407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:40.220 qpair failed and we were unable to recover it. 00:23:40.220 [2024-05-15 11:47:10.975089] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:40.220 [2024-05-15 11:47:10.975128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:40.220 [2024-05-15 11:47:10.975145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:40.220 [2024-05-15 11:47:10.975155] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:40.220 [2024-05-15 11:47:10.975163] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:40.480 [2024-05-15 11:47:10.985371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:40.480 qpair failed and we were unable to recover it. 
00:23:40.480 [2024-05-15 11:47:10.995210] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:40.480 [2024-05-15 11:47:10.995251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:40.480 [2024-05-15 11:47:10.995267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:40.480 [2024-05-15 11:47:10.995277] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:40.480 [2024-05-15 11:47:10.995285] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:40.480 [2024-05-15 11:47:11.005544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:40.480 qpair failed and we were unable to recover it. 00:23:40.480 [2024-05-15 11:47:11.015277] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:40.480 [2024-05-15 11:47:11.015316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:40.480 [2024-05-15 11:47:11.015333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:40.480 [2024-05-15 11:47:11.015343] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:40.480 [2024-05-15 11:47:11.015351] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:40.480 [2024-05-15 11:47:11.025617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:40.480 qpair failed and we were unable to recover it. 00:23:40.480 [2024-05-15 11:47:11.035233] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:40.480 [2024-05-15 11:47:11.035274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:40.480 [2024-05-15 11:47:11.035291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:40.480 [2024-05-15 11:47:11.035300] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:40.480 [2024-05-15 11:47:11.035309] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:40.480 [2024-05-15 11:47:11.045628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:40.480 qpair failed and we were unable to recover it. 
00:23:40.480 [2024-05-15 11:47:11.055210] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:40.480 [2024-05-15 11:47:11.055247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:40.480 [2024-05-15 11:47:11.055264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:40.480 [2024-05-15 11:47:11.055273] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:40.480 [2024-05-15 11:47:11.055282] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:40.480 [2024-05-15 11:47:11.065628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:40.480 qpair failed and we were unable to recover it. 00:23:40.480 [2024-05-15 11:47:11.075392] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:40.480 [2024-05-15 11:47:11.075434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:40.480 [2024-05-15 11:47:11.075451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:40.480 [2024-05-15 11:47:11.075464] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:40.480 [2024-05-15 11:47:11.075473] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:40.480 [2024-05-15 11:47:11.085778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:40.480 qpair failed and we were unable to recover it. 00:23:40.480 [2024-05-15 11:47:11.095467] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:40.480 [2024-05-15 11:47:11.095505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:40.480 [2024-05-15 11:47:11.095521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:40.480 [2024-05-15 11:47:11.095530] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:40.480 [2024-05-15 11:47:11.095539] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:40.480 [2024-05-15 11:47:11.105756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:40.480 qpair failed and we were unable to recover it. 
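The same block keeps repeating because the host keeps re-dialing the same qpair (rqpair=0x2000003d3000 every time) at a roughly 20 ms cadence, as the bracketed wall-clock timestamps show (10.754 -> 10.774 -> 10.794 and so on). Two one-liners to see that at a glance from a saved copy of this log (console.log is an assumed file name):

  # Sketch: wall-clock times of each failed rqpair connect, to eyeball the
  # ~20 ms retry cadence.
  grep -o '\[2024-05-15 [0-9:.]*\] nvme_rdma.c:2743' console.log \
      | awk '{print $2}' | tr -d ']'
  # Confirm it is the same qpair handle failing every time.
  grep -o 'Failed to connect rqpair=0x[0-9a-f]*' console.log | sort | uniq -c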
00:23:40.480 [2024-05-15 11:47:11.115530] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:40.480 [2024-05-15 11:47:11.115570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:40.480 [2024-05-15 11:47:11.115586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:40.480 [2024-05-15 11:47:11.115596] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:40.480 [2024-05-15 11:47:11.115604] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:40.480 [2024-05-15 11:47:11.125801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:40.480 qpair failed and we were unable to recover it. 00:23:40.480 [2024-05-15 11:47:11.135559] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:40.480 [2024-05-15 11:47:11.135597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:40.480 [2024-05-15 11:47:11.135614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:40.480 [2024-05-15 11:47:11.135623] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:40.480 [2024-05-15 11:47:11.135632] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:40.481 [2024-05-15 11:47:11.145907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:40.481 qpair failed and we were unable to recover it. 00:23:40.481 [2024-05-15 11:47:11.155585] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:40.481 [2024-05-15 11:47:11.155627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:40.481 [2024-05-15 11:47:11.155644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:40.481 [2024-05-15 11:47:11.155654] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:40.481 [2024-05-15 11:47:11.155663] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:40.481 [2024-05-15 11:47:11.165929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:40.481 qpair failed and we were unable to recover it. 
00:23:40.481 [2024-05-15 11:47:11.175591] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:40.481 [2024-05-15 11:47:11.175634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:40.481 [2024-05-15 11:47:11.175650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:40.481 [2024-05-15 11:47:11.175660] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:40.481 [2024-05-15 11:47:11.175669] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:40.481 [2024-05-15 11:47:11.185852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:40.481 qpair failed and we were unable to recover it. 00:23:40.481 [2024-05-15 11:47:11.195700] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:40.481 [2024-05-15 11:47:11.195747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:40.481 [2024-05-15 11:47:11.195762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:40.481 [2024-05-15 11:47:11.195772] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:40.481 [2024-05-15 11:47:11.195780] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:40.481 [2024-05-15 11:47:11.205958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:40.481 qpair failed and we were unable to recover it. 00:23:40.481 [2024-05-15 11:47:11.215771] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:40.481 [2024-05-15 11:47:11.215812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:40.481 [2024-05-15 11:47:11.215828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:40.481 [2024-05-15 11:47:11.215837] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:40.481 [2024-05-15 11:47:11.215846] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:40.481 [2024-05-15 11:47:11.226065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:40.481 qpair failed and we were unable to recover it. 
00:23:40.481 [2024-05-15 11:47:11.235761] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:40.481 [2024-05-15 11:47:11.235804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:40.481 [2024-05-15 11:47:11.235820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:40.481 [2024-05-15 11:47:11.235829] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:40.481 [2024-05-15 11:47:11.235838] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:40.741 [2024-05-15 11:47:11.246068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:40.741 qpair failed and we were unable to recover it. 00:23:40.741 [2024-05-15 11:47:11.255802] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:40.741 [2024-05-15 11:47:11.255839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:40.741 [2024-05-15 11:47:11.255867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:40.741 [2024-05-15 11:47:11.255876] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:40.741 [2024-05-15 11:47:11.255885] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:40.741 [2024-05-15 11:47:11.266292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:40.741 qpair failed and we were unable to recover it. 00:23:40.741 [2024-05-15 11:47:11.275985] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:40.741 [2024-05-15 11:47:11.276031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:40.741 [2024-05-15 11:47:11.276048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:40.741 [2024-05-15 11:47:11.276063] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:40.741 [2024-05-15 11:47:11.276072] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:40.741 [2024-05-15 11:47:11.286284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:40.741 qpair failed and we were unable to recover it. 
00:23:40.741 [2024-05-15 11:47:11.296043] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:40.741 [2024-05-15 11:47:11.296086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:40.741 [2024-05-15 11:47:11.296103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:40.741 [2024-05-15 11:47:11.296112] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:40.741 [2024-05-15 11:47:11.296121] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:40.741 [2024-05-15 11:47:11.306351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:40.741 qpair failed and we were unable to recover it. 00:23:40.741 [2024-05-15 11:47:11.316016] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:40.741 [2024-05-15 11:47:11.316050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:40.741 [2024-05-15 11:47:11.316076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:40.741 [2024-05-15 11:47:11.316086] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:40.741 [2024-05-15 11:47:11.316095] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:40.741 [2024-05-15 11:47:11.326510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:40.741 qpair failed and we were unable to recover it. 00:23:40.741 [2024-05-15 11:47:11.336130] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:40.741 [2024-05-15 11:47:11.336170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:40.741 [2024-05-15 11:47:11.336186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:40.741 [2024-05-15 11:47:11.336195] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:40.741 [2024-05-15 11:47:11.336207] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:40.741 [2024-05-15 11:47:11.346352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:40.741 qpair failed and we were unable to recover it. 
00:23:40.741 [2024-05-15 11:47:11.356218] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:40.741 [2024-05-15 11:47:11.356260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:40.741 [2024-05-15 11:47:11.356277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:40.741 [2024-05-15 11:47:11.356286] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:40.741 [2024-05-15 11:47:11.356295] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:40.741 [2024-05-15 11:47:11.366378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:40.741 qpair failed and we were unable to recover it.
00:23:40.741 [2024-05-15 11:47:11.376212] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:40.741 [2024-05-15 11:47:11.376250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:40.741 [2024-05-15 11:47:11.376267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:40.741 [2024-05-15 11:47:11.376276] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:40.741 [2024-05-15 11:47:11.376285] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:40.741 [2024-05-15 11:47:11.386678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:40.741 qpair failed and we were unable to recover it.
00:23:40.741 [2024-05-15 11:47:11.396378] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:40.741 [2024-05-15 11:47:11.396413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:40.741 [2024-05-15 11:47:11.396429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:40.741 [2024-05-15 11:47:11.396438] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:40.741 [2024-05-15 11:47:11.396447] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:40.741 [2024-05-15 11:47:11.406789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:40.741 qpair failed and we were unable to recover it.
00:23:40.741 [2024-05-15 11:47:11.416299] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:40.741 [2024-05-15 11:47:11.416339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:40.741 [2024-05-15 11:47:11.416355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:40.741 [2024-05-15 11:47:11.416364] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:40.741 [2024-05-15 11:47:11.416373] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:40.741 [2024-05-15 11:47:11.426769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:40.741 qpair failed and we were unable to recover it.
00:23:40.741 [2024-05-15 11:47:11.436472] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:40.741 [2024-05-15 11:47:11.436510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:40.741 [2024-05-15 11:47:11.436527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:40.741 [2024-05-15 11:47:11.436536] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:40.741 [2024-05-15 11:47:11.436545] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:40.741 [2024-05-15 11:47:11.446793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:40.741 qpair failed and we were unable to recover it.
00:23:40.741 [2024-05-15 11:47:11.456484] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:40.741 [2024-05-15 11:47:11.456518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:40.741 [2024-05-15 11:47:11.456534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:40.741 [2024-05-15 11:47:11.456544] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:40.741 [2024-05-15 11:47:11.456552] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:40.741 [2024-05-15 11:47:11.466861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:40.741 qpair failed and we were unable to recover it.
00:23:40.741 [2024-05-15 11:47:11.476545] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:40.741 [2024-05-15 11:47:11.476585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:40.741 [2024-05-15 11:47:11.476602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:40.741 [2024-05-15 11:47:11.476611] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:40.741 [2024-05-15 11:47:11.476620] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:40.742 [2024-05-15 11:47:11.486956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:40.742 qpair failed and we were unable to recover it.
00:23:40.742 [2024-05-15 11:47:11.496587] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:40.742 [2024-05-15 11:47:11.496628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:40.742 [2024-05-15 11:47:11.496645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:40.742 [2024-05-15 11:47:11.496654] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:40.742 [2024-05-15 11:47:11.496663] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.001 [2024-05-15 11:47:11.506776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.001 qpair failed and we were unable to recover it.
00:23:41.001 [2024-05-15 11:47:11.516577] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.001 [2024-05-15 11:47:11.516616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.001 [2024-05-15 11:47:11.516632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.001 [2024-05-15 11:47:11.516646] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.001 [2024-05-15 11:47:11.516655] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.001 [2024-05-15 11:47:11.527091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.001 qpair failed and we were unable to recover it.
00:23:41.001 [2024-05-15 11:47:11.536679] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.001 [2024-05-15 11:47:11.536721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.001 [2024-05-15 11:47:11.536737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.001 [2024-05-15 11:47:11.536746] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.001 [2024-05-15 11:47:11.536755] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.001 [2024-05-15 11:47:11.547196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.001 qpair failed and we were unable to recover it.
00:23:41.001 [2024-05-15 11:47:11.556636] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.001 [2024-05-15 11:47:11.556671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.001 [2024-05-15 11:47:11.556687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.001 [2024-05-15 11:47:11.556697] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.001 [2024-05-15 11:47:11.556705] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.001 [2024-05-15 11:47:11.567209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.001 qpair failed and we were unable to recover it.
00:23:41.001 [2024-05-15 11:47:11.576813] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.001 [2024-05-15 11:47:11.576852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.001 [2024-05-15 11:47:11.576869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.001 [2024-05-15 11:47:11.576878] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.001 [2024-05-15 11:47:11.576887] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.001 [2024-05-15 11:47:11.587137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.001 qpair failed and we were unable to recover it.
00:23:41.001 [2024-05-15 11:47:11.596895] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.001 [2024-05-15 11:47:11.596940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.001 [2024-05-15 11:47:11.596956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.001 [2024-05-15 11:47:11.596965] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.001 [2024-05-15 11:47:11.596974] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.001 [2024-05-15 11:47:11.607204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.001 qpair failed and we were unable to recover it.
00:23:41.001 [2024-05-15 11:47:11.616909] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.001 [2024-05-15 11:47:11.616948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.001 [2024-05-15 11:47:11.616964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.001 [2024-05-15 11:47:11.616974] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.001 [2024-05-15 11:47:11.616982] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.001 [2024-05-15 11:47:11.627335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.001 qpair failed and we were unable to recover it.
00:23:41.001 [2024-05-15 11:47:11.637026] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.001 [2024-05-15 11:47:11.637071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.001 [2024-05-15 11:47:11.637087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.001 [2024-05-15 11:47:11.637096] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.002 [2024-05-15 11:47:11.637105] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.002 [2024-05-15 11:47:11.647354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.002 qpair failed and we were unable to recover it.
00:23:41.002 [2024-05-15 11:47:11.657066] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.002 [2024-05-15 11:47:11.657105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.002 [2024-05-15 11:47:11.657121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.002 [2024-05-15 11:47:11.657131] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.002 [2024-05-15 11:47:11.657140] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.002 [2024-05-15 11:47:11.667522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.002 qpair failed and we were unable to recover it.
00:23:41.002 [2024-05-15 11:47:11.677240] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.002 [2024-05-15 11:47:11.677280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.002 [2024-05-15 11:47:11.677297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.002 [2024-05-15 11:47:11.677306] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.002 [2024-05-15 11:47:11.677315] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.002 [2024-05-15 11:47:11.687586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.002 qpair failed and we were unable to recover it.
00:23:41.002 [2024-05-15 11:47:11.697186] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.002 [2024-05-15 11:47:11.697226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.002 [2024-05-15 11:47:11.697246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.002 [2024-05-15 11:47:11.697255] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.002 [2024-05-15 11:47:11.697264] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.002 [2024-05-15 11:47:11.707484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.002 qpair failed and we were unable to recover it.
00:23:41.002 [2024-05-15 11:47:11.717295] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.002 [2024-05-15 11:47:11.717336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.002 [2024-05-15 11:47:11.717351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.002 [2024-05-15 11:47:11.717361] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.002 [2024-05-15 11:47:11.717370] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.002 [2024-05-15 11:47:11.727726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.002 qpair failed and we were unable to recover it.
00:23:41.002 [2024-05-15 11:47:11.737272] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.002 [2024-05-15 11:47:11.737310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.002 [2024-05-15 11:47:11.737326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.002 [2024-05-15 11:47:11.737335] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.002 [2024-05-15 11:47:11.737343] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.002 [2024-05-15 11:47:11.747725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.002 qpair failed and we were unable to recover it.
00:23:41.002 [2024-05-15 11:47:11.757487] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.002 [2024-05-15 11:47:11.757532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.002 [2024-05-15 11:47:11.757548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.002 [2024-05-15 11:47:11.757558] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.002 [2024-05-15 11:47:11.757566] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.261 [2024-05-15 11:47:11.767701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.261 qpair failed and we were unable to recover it.
00:23:41.261 [2024-05-15 11:47:11.777449] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.261 [2024-05-15 11:47:11.777488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.261 [2024-05-15 11:47:11.777505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.261 [2024-05-15 11:47:11.777514] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.261 [2024-05-15 11:47:11.777526] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.261 [2024-05-15 11:47:11.787794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.261 qpair failed and we were unable to recover it.
00:23:41.261 [2024-05-15 11:47:11.797390] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.261 [2024-05-15 11:47:11.797424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.261 [2024-05-15 11:47:11.797440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.261 [2024-05-15 11:47:11.797450] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.261 [2024-05-15 11:47:11.797458] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.261 [2024-05-15 11:47:11.807820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.261 qpair failed and we were unable to recover it.
00:23:41.261 [2024-05-15 11:47:11.817634] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.261 [2024-05-15 11:47:11.817673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.261 [2024-05-15 11:47:11.817689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.261 [2024-05-15 11:47:11.817698] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.261 [2024-05-15 11:47:11.817707] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.261 [2024-05-15 11:47:11.827899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.261 qpair failed and we were unable to recover it.
00:23:41.261 [2024-05-15 11:47:11.837633] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.262 [2024-05-15 11:47:11.837671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.262 [2024-05-15 11:47:11.837687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.262 [2024-05-15 11:47:11.837697] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.262 [2024-05-15 11:47:11.837705] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.262 [2024-05-15 11:47:11.847937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.262 qpair failed and we were unable to recover it.
00:23:41.262 [2024-05-15 11:47:11.857660] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.262 [2024-05-15 11:47:11.857701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.262 [2024-05-15 11:47:11.857718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.262 [2024-05-15 11:47:11.857727] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.262 [2024-05-15 11:47:11.857736] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.262 [2024-05-15 11:47:11.868183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.262 qpair failed and we were unable to recover it.
00:23:41.262 [2024-05-15 11:47:11.877766] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.262 [2024-05-15 11:47:11.877801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.262 [2024-05-15 11:47:11.877817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.262 [2024-05-15 11:47:11.877827] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.262 [2024-05-15 11:47:11.877835] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.262 [2024-05-15 11:47:11.888107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.262 qpair failed and we were unable to recover it.
00:23:41.262 [2024-05-15 11:47:11.897848] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.262 [2024-05-15 11:47:11.897890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.262 [2024-05-15 11:47:11.897906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.262 [2024-05-15 11:47:11.897915] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.262 [2024-05-15 11:47:11.897924] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.262 [2024-05-15 11:47:11.908135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.262 qpair failed and we were unable to recover it.
00:23:41.262 [2024-05-15 11:47:11.917890] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.262 [2024-05-15 11:47:11.917934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.262 [2024-05-15 11:47:11.917950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.262 [2024-05-15 11:47:11.917959] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.262 [2024-05-15 11:47:11.917968] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.262 [2024-05-15 11:47:11.928086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.262 qpair failed and we were unable to recover it.
00:23:41.262 [2024-05-15 11:47:11.937958] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.262 [2024-05-15 11:47:11.937998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.262 [2024-05-15 11:47:11.938014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.262 [2024-05-15 11:47:11.938023] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.262 [2024-05-15 11:47:11.938033] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.262 [2024-05-15 11:47:11.948200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.262 qpair failed and we were unable to recover it.
00:23:41.262 [2024-05-15 11:47:11.958014] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.262 [2024-05-15 11:47:11.958048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.262 [2024-05-15 11:47:11.958069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.262 [2024-05-15 11:47:11.958082] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.262 [2024-05-15 11:47:11.958091] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.262 [2024-05-15 11:47:11.968142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.262 qpair failed and we were unable to recover it.
00:23:41.262 [2024-05-15 11:47:11.978107] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.262 [2024-05-15 11:47:11.978149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.262 [2024-05-15 11:47:11.978165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.262 [2024-05-15 11:47:11.978174] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.262 [2024-05-15 11:47:11.978183] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.262 [2024-05-15 11:47:11.988543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.262 qpair failed and we were unable to recover it.
00:23:41.262 [2024-05-15 11:47:11.998143] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.262 [2024-05-15 11:47:11.998189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.262 [2024-05-15 11:47:11.998206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.262 [2024-05-15 11:47:11.998215] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.262 [2024-05-15 11:47:11.998224] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.262 [2024-05-15 11:47:12.008698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.262 qpair failed and we were unable to recover it.
00:23:41.262 [2024-05-15 11:47:12.018240] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.262 [2024-05-15 11:47:12.018282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.262 [2024-05-15 11:47:12.018299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.262 [2024-05-15 11:47:12.018309] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.262 [2024-05-15 11:47:12.018318] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.522 [2024-05-15 11:47:12.028339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.522 qpair failed and we were unable to recover it.
00:23:41.522 [2024-05-15 11:47:12.038173] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.522 [2024-05-15 11:47:12.038213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.522 [2024-05-15 11:47:12.038229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.522 [2024-05-15 11:47:12.038239] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.522 [2024-05-15 11:47:12.038248] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.522 [2024-05-15 11:47:12.048565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.522 qpair failed and we were unable to recover it.
00:23:41.522 [2024-05-15 11:47:12.058294] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.522 [2024-05-15 11:47:12.058336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.522 [2024-05-15 11:47:12.058353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.522 [2024-05-15 11:47:12.058363] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.522 [2024-05-15 11:47:12.058371] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.522 [2024-05-15 11:47:12.068598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.522 qpair failed and we were unable to recover it.
00:23:41.522 [2024-05-15 11:47:12.078333] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.522 [2024-05-15 11:47:12.078374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.522 [2024-05-15 11:47:12.078392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.522 [2024-05-15 11:47:12.078402] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.522 [2024-05-15 11:47:12.078411] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.522 [2024-05-15 11:47:12.088714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.522 qpair failed and we were unable to recover it.
00:23:41.522 [2024-05-15 11:47:12.098346] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.522 [2024-05-15 11:47:12.098383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.522 [2024-05-15 11:47:12.098400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.522 [2024-05-15 11:47:12.098409] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.522 [2024-05-15 11:47:12.098418] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.522 [2024-05-15 11:47:12.108936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.522 qpair failed and we were unable to recover it.
00:23:41.522 [2024-05-15 11:47:12.118532] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.522 [2024-05-15 11:47:12.118571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.522 [2024-05-15 11:47:12.118587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.522 [2024-05-15 11:47:12.118597] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.522 [2024-05-15 11:47:12.118606] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.522 [2024-05-15 11:47:12.128796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.522 qpair failed and we were unable to recover it.
00:23:41.522 [2024-05-15 11:47:12.138520] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.522 [2024-05-15 11:47:12.138561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.522 [2024-05-15 11:47:12.138581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.522 [2024-05-15 11:47:12.138590] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.522 [2024-05-15 11:47:12.138599] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.522 [2024-05-15 11:47:12.148906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.522 qpair failed and we were unable to recover it.
00:23:41.522 [2024-05-15 11:47:12.158471] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.522 [2024-05-15 11:47:12.158510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.522 [2024-05-15 11:47:12.158527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.522 [2024-05-15 11:47:12.158537] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.522 [2024-05-15 11:47:12.158546] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.522 [2024-05-15 11:47:12.168881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.522 qpair failed and we were unable to recover it.
00:23:41.522 [2024-05-15 11:47:12.178554] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.522 [2024-05-15 11:47:12.178592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.522 [2024-05-15 11:47:12.178608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.522 [2024-05-15 11:47:12.178618] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.522 [2024-05-15 11:47:12.178626] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.522 [2024-05-15 11:47:12.189007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.522 qpair failed and we were unable to recover it.
00:23:41.522 [2024-05-15 11:47:12.198578] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.522 [2024-05-15 11:47:12.198617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.522 [2024-05-15 11:47:12.198633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.522 [2024-05-15 11:47:12.198643] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.523 [2024-05-15 11:47:12.198651] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.523 [2024-05-15 11:47:12.209089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.523 qpair failed and we were unable to recover it.
00:23:41.523 [2024-05-15 11:47:12.218649] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.523 [2024-05-15 11:47:12.218688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.523 [2024-05-15 11:47:12.218704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.523 [2024-05-15 11:47:12.218713] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.523 [2024-05-15 11:47:12.218725] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.523 [2024-05-15 11:47:12.229219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.523 qpair failed and we were unable to recover it.
00:23:41.523 [2024-05-15 11:47:12.238813] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.523 [2024-05-15 11:47:12.238858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.523 [2024-05-15 11:47:12.238875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.523 [2024-05-15 11:47:12.238884] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.523 [2024-05-15 11:47:12.238893] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.523 [2024-05-15 11:47:12.248985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.523 qpair failed and we were unable to recover it.
00:23:41.523 [2024-05-15 11:47:12.258816] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.523 [2024-05-15 11:47:12.258856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.523 [2024-05-15 11:47:12.258873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.523 [2024-05-15 11:47:12.258882] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.523 [2024-05-15 11:47:12.258890] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.523 [2024-05-15 11:47:12.269319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.523 qpair failed and we were unable to recover it.
00:23:41.523 [2024-05-15 11:47:12.278827] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.523 [2024-05-15 11:47:12.278868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.523 [2024-05-15 11:47:12.278885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.523 [2024-05-15 11:47:12.278894] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.523 [2024-05-15 11:47:12.278902] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.783 [2024-05-15 11:47:12.289273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.783 qpair failed and we were unable to recover it.
00:23:41.783 [2024-05-15 11:47:12.298951] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.783 [2024-05-15 11:47:12.298994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.783 [2024-05-15 11:47:12.299010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.783 [2024-05-15 11:47:12.299020] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.783 [2024-05-15 11:47:12.299029] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.783 [2024-05-15 11:47:12.309369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.783 qpair failed and we were unable to recover it.
00:23:41.783 [2024-05-15 11:47:12.318952] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.783 [2024-05-15 11:47:12.318991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.783 [2024-05-15 11:47:12.319007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.783 [2024-05-15 11:47:12.319016] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.783 [2024-05-15 11:47:12.319025] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.783 [2024-05-15 11:47:12.329343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.783 qpair failed and we were unable to recover it.
00:23:41.783 [2024-05-15 11:47:12.339108] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.783 [2024-05-15 11:47:12.339144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.783 [2024-05-15 11:47:12.339159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.783 [2024-05-15 11:47:12.339169] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.783 [2024-05-15 11:47:12.339178] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.783 [2024-05-15 11:47:12.349497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.783 qpair failed and we were unable to recover it.
00:23:41.783 [2024-05-15 11:47:12.359074] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.783 [2024-05-15 11:47:12.359112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.783 [2024-05-15 11:47:12.359130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.783 [2024-05-15 11:47:12.359139] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.783 [2024-05-15 11:47:12.359148] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.783 [2024-05-15 11:47:12.369569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.783 qpair failed and we were unable to recover it.
00:23:41.783 [2024-05-15 11:47:12.379096] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.783 [2024-05-15 11:47:12.379138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.783 [2024-05-15 11:47:12.379155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.783 [2024-05-15 11:47:12.379165] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.783 [2024-05-15 11:47:12.379174] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.783 [2024-05-15 11:47:12.389818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.783 qpair failed and we were unable to recover it.
00:23:41.783 [2024-05-15 11:47:12.399207] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.783 [2024-05-15 11:47:12.399244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.783 [2024-05-15 11:47:12.399260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.783 [2024-05-15 11:47:12.399273] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.783 [2024-05-15 11:47:12.399282] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.783 [2024-05-15 11:47:12.409683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.783 qpair failed and we were unable to recover it.
00:23:41.783 [2024-05-15 11:47:12.419350] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.784 [2024-05-15 11:47:12.419394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.784 [2024-05-15 11:47:12.419410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.784 [2024-05-15 11:47:12.419419] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.784 [2024-05-15 11:47:12.419428] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.784 [2024-05-15 11:47:12.429790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.784 qpair failed and we were unable to recover it.
00:23:41.784 [2024-05-15 11:47:12.439380] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.784 [2024-05-15 11:47:12.439415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.784 [2024-05-15 11:47:12.439430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.784 [2024-05-15 11:47:12.439440] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.784 [2024-05-15 11:47:12.439448] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.784 [2024-05-15 11:47:12.449705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.784 qpair failed and we were unable to recover it.
00:23:41.784 [2024-05-15 11:47:12.459406] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.784 [2024-05-15 11:47:12.459446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.784 [2024-05-15 11:47:12.459463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.784 [2024-05-15 11:47:12.459472] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.784 [2024-05-15 11:47:12.459481] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.784 [2024-05-15 11:47:12.469729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.784 qpair failed and we were unable to recover it.
00:23:41.784 [2024-05-15 11:47:12.479491] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:41.784 [2024-05-15 11:47:12.479535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:41.784 [2024-05-15 11:47:12.479552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:41.784 [2024-05-15 11:47:12.479561] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:41.784 [2024-05-15 11:47:12.479570] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:41.784 [2024-05-15 11:47:12.489961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:41.784 qpair failed and we were unable to recover it.
00:23:41.784 [2024-05-15 11:47:12.499544] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:41.784 [2024-05-15 11:47:12.499585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:41.784 [2024-05-15 11:47:12.499601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:41.784 [2024-05-15 11:47:12.499611] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:41.784 [2024-05-15 11:47:12.499620] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:41.784 [2024-05-15 11:47:12.509848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:41.784 qpair failed and we were unable to recover it. 00:23:41.784 [2024-05-15 11:47:12.519588] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:41.784 [2024-05-15 11:47:12.519630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:41.784 [2024-05-15 11:47:12.519648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:41.784 [2024-05-15 11:47:12.519658] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:41.784 [2024-05-15 11:47:12.519666] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:41.784 [2024-05-15 11:47:12.530001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:41.784 qpair failed and we were unable to recover it. 00:23:41.784 [2024-05-15 11:47:12.539534] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:41.784 [2024-05-15 11:47:12.539573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:41.784 [2024-05-15 11:47:12.539589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:41.784 [2024-05-15 11:47:12.539598] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:41.784 [2024-05-15 11:47:12.539607] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.043 [2024-05-15 11:47:12.550101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.043 qpair failed and we were unable to recover it. 
00:23:42.043 [2024-05-15 11:47:12.559587] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.043 [2024-05-15 11:47:12.559624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.043 [2024-05-15 11:47:12.559641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.043 [2024-05-15 11:47:12.559651] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.043 [2024-05-15 11:47:12.559660] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.044 [2024-05-15 11:47:12.569993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.044 qpair failed and we were unable to recover it. 00:23:42.044 [2024-05-15 11:47:12.579750] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.044 [2024-05-15 11:47:12.579792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.044 [2024-05-15 11:47:12.579811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.044 [2024-05-15 11:47:12.579821] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.044 [2024-05-15 11:47:12.579829] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.044 [2024-05-15 11:47:12.590135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.044 qpair failed and we were unable to recover it. 00:23:42.044 [2024-05-15 11:47:12.599767] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.044 [2024-05-15 11:47:12.599809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.044 [2024-05-15 11:47:12.599826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.044 [2024-05-15 11:47:12.599835] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.044 [2024-05-15 11:47:12.599843] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.044 [2024-05-15 11:47:12.610121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.044 qpair failed and we were unable to recover it. 
00:23:42.044 [2024-05-15 11:47:12.619805] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.044 [2024-05-15 11:47:12.619844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.044 [2024-05-15 11:47:12.619860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.044 [2024-05-15 11:47:12.619869] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.044 [2024-05-15 11:47:12.619878] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.044 [2024-05-15 11:47:12.630283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.044 qpair failed and we were unable to recover it. 00:23:42.044 [2024-05-15 11:47:12.639847] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.044 [2024-05-15 11:47:12.639896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.044 [2024-05-15 11:47:12.639912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.044 [2024-05-15 11:47:12.639921] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.044 [2024-05-15 11:47:12.639930] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.044 [2024-05-15 11:47:12.650275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.044 qpair failed and we were unable to recover it. 00:23:42.044 [2024-05-15 11:47:12.659893] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.044 [2024-05-15 11:47:12.659932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.044 [2024-05-15 11:47:12.659949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.044 [2024-05-15 11:47:12.659959] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.044 [2024-05-15 11:47:12.659970] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.044 [2024-05-15 11:47:12.670354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.044 qpair failed and we were unable to recover it. 
00:23:42.044 [2024-05-15 11:47:12.680021] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.044 [2024-05-15 11:47:12.680071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.044 [2024-05-15 11:47:12.680088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.044 [2024-05-15 11:47:12.680097] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.044 [2024-05-15 11:47:12.680106] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.044 [2024-05-15 11:47:12.690338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.044 qpair failed and we were unable to recover it. 00:23:42.044 [2024-05-15 11:47:12.700110] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.044 [2024-05-15 11:47:12.700149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.044 [2024-05-15 11:47:12.700165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.044 [2024-05-15 11:47:12.700174] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.044 [2024-05-15 11:47:12.700183] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.044 [2024-05-15 11:47:12.710437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.044 qpair failed and we were unable to recover it. 00:23:42.044 [2024-05-15 11:47:12.720185] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.044 [2024-05-15 11:47:12.720229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.044 [2024-05-15 11:47:12.720245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.044 [2024-05-15 11:47:12.720255] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.044 [2024-05-15 11:47:12.720263] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.044 [2024-05-15 11:47:12.730575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.044 qpair failed and we were unable to recover it. 
00:23:42.044 [2024-05-15 11:47:12.740272] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.044 [2024-05-15 11:47:12.740308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.044 [2024-05-15 11:47:12.740324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.044 [2024-05-15 11:47:12.740334] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.044 [2024-05-15 11:47:12.740342] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.044 [2024-05-15 11:47:12.750628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.044 qpair failed and we were unable to recover it. 00:23:42.044 [2024-05-15 11:47:12.760279] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.044 [2024-05-15 11:47:12.760316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.044 [2024-05-15 11:47:12.760333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.044 [2024-05-15 11:47:12.760343] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.044 [2024-05-15 11:47:12.760352] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.044 [2024-05-15 11:47:12.770667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.044 qpair failed and we were unable to recover it. 00:23:42.044 [2024-05-15 11:47:12.780308] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.044 [2024-05-15 11:47:12.780346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.044 [2024-05-15 11:47:12.780362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.044 [2024-05-15 11:47:12.780371] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.044 [2024-05-15 11:47:12.780379] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.044 [2024-05-15 11:47:12.790707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.044 qpair failed and we were unable to recover it. 
00:23:42.044 [2024-05-15 11:47:12.800432] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.044 [2024-05-15 11:47:12.800472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.044 [2024-05-15 11:47:12.800488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.044 [2024-05-15 11:47:12.800497] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.044 [2024-05-15 11:47:12.800505] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.305 [2024-05-15 11:47:12.810760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.305 qpair failed and we were unable to recover it. 00:23:42.305 [2024-05-15 11:47:12.820555] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.305 [2024-05-15 11:47:12.820588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.305 [2024-05-15 11:47:12.820604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.305 [2024-05-15 11:47:12.820613] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.305 [2024-05-15 11:47:12.820622] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.305 [2024-05-15 11:47:12.830915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.305 qpair failed and we were unable to recover it. 00:23:42.305 [2024-05-15 11:47:12.840509] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.305 [2024-05-15 11:47:12.840548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.305 [2024-05-15 11:47:12.840564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.305 [2024-05-15 11:47:12.840576] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.305 [2024-05-15 11:47:12.840585] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.305 [2024-05-15 11:47:12.850795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.305 qpair failed and we were unable to recover it. 
00:23:42.305 [2024-05-15 11:47:12.860623] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.305 [2024-05-15 11:47:12.860665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.305 [2024-05-15 11:47:12.860681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.305 [2024-05-15 11:47:12.860690] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.305 [2024-05-15 11:47:12.860699] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.305 [2024-05-15 11:47:12.870887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.305 qpair failed and we were unable to recover it. 00:23:42.305 [2024-05-15 11:47:12.880671] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.305 [2024-05-15 11:47:12.880713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.305 [2024-05-15 11:47:12.880729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.305 [2024-05-15 11:47:12.880738] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.305 [2024-05-15 11:47:12.880747] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.305 [2024-05-15 11:47:12.890876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.305 qpair failed and we were unable to recover it. 00:23:42.305 [2024-05-15 11:47:12.900739] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.305 [2024-05-15 11:47:12.900774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.305 [2024-05-15 11:47:12.900790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.305 [2024-05-15 11:47:12.900799] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.305 [2024-05-15 11:47:12.900808] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.305 [2024-05-15 11:47:12.911023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.305 qpair failed and we were unable to recover it. 
00:23:42.305 [2024-05-15 11:47:12.920852] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.305 [2024-05-15 11:47:12.920890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.305 [2024-05-15 11:47:12.920906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.305 [2024-05-15 11:47:12.920915] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.305 [2024-05-15 11:47:12.920924] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.305 [2024-05-15 11:47:12.931039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.305 qpair failed and we were unable to recover it. 00:23:42.305 [2024-05-15 11:47:12.940905] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.305 [2024-05-15 11:47:12.940945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.305 [2024-05-15 11:47:12.940962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.305 [2024-05-15 11:47:12.940971] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.305 [2024-05-15 11:47:12.940980] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.306 [2024-05-15 11:47:12.951163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.306 qpair failed and we were unable to recover it. 00:23:42.306 [2024-05-15 11:47:12.960924] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.306 [2024-05-15 11:47:12.960968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.306 [2024-05-15 11:47:12.960984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.306 [2024-05-15 11:47:12.960994] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.306 [2024-05-15 11:47:12.961003] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.306 [2024-05-15 11:47:12.971311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.306 qpair failed and we were unable to recover it. 
00:23:42.306 [2024-05-15 11:47:12.980948] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.306 [2024-05-15 11:47:12.980982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.306 [2024-05-15 11:47:12.980998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.306 [2024-05-15 11:47:12.981007] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.306 [2024-05-15 11:47:12.981016] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.306 [2024-05-15 11:47:12.991260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.306 qpair failed and we were unable to recover it. 00:23:42.306 [2024-05-15 11:47:13.001008] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.306 [2024-05-15 11:47:13.001044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.306 [2024-05-15 11:47:13.001066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.306 [2024-05-15 11:47:13.001076] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.306 [2024-05-15 11:47:13.001085] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.306 [2024-05-15 11:47:13.011214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.306 qpair failed and we were unable to recover it. 00:23:42.306 [2024-05-15 11:47:13.021137] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.306 [2024-05-15 11:47:13.021178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.306 [2024-05-15 11:47:13.021197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.306 [2024-05-15 11:47:13.021206] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.306 [2024-05-15 11:47:13.021215] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.306 [2024-05-15 11:47:13.031434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.306 qpair failed and we were unable to recover it. 
00:23:42.306 [2024-05-15 11:47:13.041127] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.306 [2024-05-15 11:47:13.041165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.306 [2024-05-15 11:47:13.041181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.306 [2024-05-15 11:47:13.041191] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.306 [2024-05-15 11:47:13.041199] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.306 [2024-05-15 11:47:13.051416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.306 qpair failed and we were unable to recover it. 00:23:42.306 [2024-05-15 11:47:13.061110] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.306 [2024-05-15 11:47:13.061149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.306 [2024-05-15 11:47:13.061165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.306 [2024-05-15 11:47:13.061175] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.306 [2024-05-15 11:47:13.061183] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.576 [2024-05-15 11:47:13.071516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.576 qpair failed and we were unable to recover it. 00:23:42.576 [2024-05-15 11:47:13.081228] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.577 [2024-05-15 11:47:13.081266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.577 [2024-05-15 11:47:13.081282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.577 [2024-05-15 11:47:13.081291] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.577 [2024-05-15 11:47:13.081300] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.577 [2024-05-15 11:47:13.091624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.577 qpair failed and we were unable to recover it. 
00:23:42.577 [2024-05-15 11:47:13.101319] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.577 [2024-05-15 11:47:13.101357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.577 [2024-05-15 11:47:13.101373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.577 [2024-05-15 11:47:13.101382] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.577 [2024-05-15 11:47:13.101395] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.577 [2024-05-15 11:47:13.111648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.577 qpair failed and we were unable to recover it. 00:23:42.577 [2024-05-15 11:47:13.121419] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.577 [2024-05-15 11:47:13.121458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.577 [2024-05-15 11:47:13.121474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.577 [2024-05-15 11:47:13.121483] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.577 [2024-05-15 11:47:13.121492] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.577 [2024-05-15 11:47:13.131652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.577 qpair failed and we were unable to recover it. 00:23:42.577 [2024-05-15 11:47:13.141480] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.577 [2024-05-15 11:47:13.141513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.577 [2024-05-15 11:47:13.141529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.577 [2024-05-15 11:47:13.141538] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.577 [2024-05-15 11:47:13.141547] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.577 [2024-05-15 11:47:13.151795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.577 qpair failed and we were unable to recover it. 
00:23:42.577 [2024-05-15 11:47:13.161562] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.577 [2024-05-15 11:47:13.161599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.577 [2024-05-15 11:47:13.161615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.577 [2024-05-15 11:47:13.161625] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.577 [2024-05-15 11:47:13.161634] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.577 [2024-05-15 11:47:13.171823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.577 qpair failed and we were unable to recover it. 00:23:42.577 [2024-05-15 11:47:13.181602] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.577 [2024-05-15 11:47:13.181641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.577 [2024-05-15 11:47:13.181657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.577 [2024-05-15 11:47:13.181666] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.577 [2024-05-15 11:47:13.181675] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.577 [2024-05-15 11:47:13.191831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.577 qpair failed and we were unable to recover it. 00:23:42.577 [2024-05-15 11:47:13.201586] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.577 [2024-05-15 11:47:13.201627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.577 [2024-05-15 11:47:13.201643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.577 [2024-05-15 11:47:13.201653] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.577 [2024-05-15 11:47:13.201662] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.577 [2024-05-15 11:47:13.211931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.577 qpair failed and we were unable to recover it. 
00:23:42.577 [2024-05-15 11:47:13.221670] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.577 [2024-05-15 11:47:13.221711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.577 [2024-05-15 11:47:13.221727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.577 [2024-05-15 11:47:13.221736] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.577 [2024-05-15 11:47:13.221745] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.577 [2024-05-15 11:47:13.231963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.577 qpair failed and we were unable to recover it. 00:23:42.577 [2024-05-15 11:47:13.241810] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.577 [2024-05-15 11:47:13.241846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.577 [2024-05-15 11:47:13.241862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.577 [2024-05-15 11:47:13.241871] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.577 [2024-05-15 11:47:13.241880] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.577 [2024-05-15 11:47:13.252150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.577 qpair failed and we were unable to recover it. 00:23:42.577 [2024-05-15 11:47:13.261822] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.577 [2024-05-15 11:47:13.261861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.577 [2024-05-15 11:47:13.261878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.577 [2024-05-15 11:47:13.261887] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.577 [2024-05-15 11:47:13.261895] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.577 [2024-05-15 11:47:13.272102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.577 qpair failed and we were unable to recover it. 
00:23:42.577 [2024-05-15 11:47:13.281886] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.577 [2024-05-15 11:47:13.281929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.577 [2024-05-15 11:47:13.281945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.577 [2024-05-15 11:47:13.281958] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.577 [2024-05-15 11:47:13.281967] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.577 [2024-05-15 11:47:13.292106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.577 qpair failed and we were unable to recover it. 00:23:42.577 [2024-05-15 11:47:13.301868] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.577 [2024-05-15 11:47:13.301903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.577 [2024-05-15 11:47:13.301919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.577 [2024-05-15 11:47:13.301928] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.577 [2024-05-15 11:47:13.301937] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.577 [2024-05-15 11:47:13.312309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.577 qpair failed and we were unable to recover it. 00:23:42.577 [2024-05-15 11:47:13.321960] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.577 [2024-05-15 11:47:13.322002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.577 [2024-05-15 11:47:13.322018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.577 [2024-05-15 11:47:13.322027] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.577 [2024-05-15 11:47:13.322037] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.577 [2024-05-15 11:47:13.332343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.577 qpair failed and we were unable to recover it. 
00:23:42.841 [2024-05-15 11:47:13.341980] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.841 [2024-05-15 11:47:13.342021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.841 [2024-05-15 11:47:13.342037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.841 [2024-05-15 11:47:13.342046] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.841 [2024-05-15 11:47:13.342066] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.841 [2024-05-15 11:47:13.352447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.841 qpair failed and we were unable to recover it. 00:23:42.841 [2024-05-15 11:47:13.362187] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.841 [2024-05-15 11:47:13.362224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.841 [2024-05-15 11:47:13.362241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.841 [2024-05-15 11:47:13.362250] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.841 [2024-05-15 11:47:13.362259] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.841 [2024-05-15 11:47:13.372518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.841 qpair failed and we were unable to recover it. 00:23:42.841 [2024-05-15 11:47:13.382187] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.841 [2024-05-15 11:47:13.382225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.841 [2024-05-15 11:47:13.382241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.841 [2024-05-15 11:47:13.382251] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.841 [2024-05-15 11:47:13.382260] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.841 [2024-05-15 11:47:13.392453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.841 qpair failed and we were unable to recover it. 
00:23:42.841 [2024-05-15 11:47:13.402292] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.841 [2024-05-15 11:47:13.402330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.841 [2024-05-15 11:47:13.402346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.841 [2024-05-15 11:47:13.402355] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.841 [2024-05-15 11:47:13.402364] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.841 [2024-05-15 11:47:13.412657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.841 qpair failed and we were unable to recover it. 00:23:42.841 [2024-05-15 11:47:13.422293] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.842 [2024-05-15 11:47:13.422333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.842 [2024-05-15 11:47:13.422349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.842 [2024-05-15 11:47:13.422359] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.842 [2024-05-15 11:47:13.422367] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.842 [2024-05-15 11:47:13.432503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.842 qpair failed and we were unable to recover it. 00:23:42.842 [2024-05-15 11:47:13.442484] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.842 [2024-05-15 11:47:13.442528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.842 [2024-05-15 11:47:13.442544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.842 [2024-05-15 11:47:13.442554] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.842 [2024-05-15 11:47:13.442563] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.842 [2024-05-15 11:47:13.452621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.842 qpair failed and we were unable to recover it. 
00:23:42.842 [2024-05-15 11:47:13.462509] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.842 [2024-05-15 11:47:13.462547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.842 [2024-05-15 11:47:13.462568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.842 [2024-05-15 11:47:13.462578] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.842 [2024-05-15 11:47:13.462587] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.842 [2024-05-15 11:47:13.472671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.842 qpair failed and we were unable to recover it. 00:23:42.842 [2024-05-15 11:47:13.482534] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.842 [2024-05-15 11:47:13.482570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.842 [2024-05-15 11:47:13.482586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.842 [2024-05-15 11:47:13.482596] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.842 [2024-05-15 11:47:13.482604] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.842 [2024-05-15 11:47:13.492675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.842 qpair failed and we were unable to recover it. 00:23:42.842 [2024-05-15 11:47:13.502649] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.842 [2024-05-15 11:47:13.502693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.842 [2024-05-15 11:47:13.502709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.842 [2024-05-15 11:47:13.502718] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.842 [2024-05-15 11:47:13.502727] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.842 [2024-05-15 11:47:13.512933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.842 qpair failed and we were unable to recover it. 
00:23:42.842 [2024-05-15 11:47:13.522663] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.842 [2024-05-15 11:47:13.522701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.842 [2024-05-15 11:47:13.522717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.842 [2024-05-15 11:47:13.522726] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.842 [2024-05-15 11:47:13.522734] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.842 [2024-05-15 11:47:13.532790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.842 qpair failed and we were unable to recover it. 00:23:42.842 [2024-05-15 11:47:13.542717] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.842 [2024-05-15 11:47:13.542759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.842 [2024-05-15 11:47:13.542776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.842 [2024-05-15 11:47:13.542785] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.842 [2024-05-15 11:47:13.542797] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.842 [2024-05-15 11:47:13.552992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.842 qpair failed and we were unable to recover it. 00:23:42.842 [2024-05-15 11:47:13.562768] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.842 [2024-05-15 11:47:13.562806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.842 [2024-05-15 11:47:13.562822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.842 [2024-05-15 11:47:13.562831] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.842 [2024-05-15 11:47:13.562840] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.842 [2024-05-15 11:47:13.573082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.842 qpair failed and we were unable to recover it. 
00:23:42.842 [2024-05-15 11:47:13.582779] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.842 [2024-05-15 11:47:13.582818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.842 [2024-05-15 11:47:13.582834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.842 [2024-05-15 11:47:13.582843] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.842 [2024-05-15 11:47:13.582852] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:42.842 [2024-05-15 11:47:13.593108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.842 qpair failed and we were unable to recover it. 00:23:42.842 [2024-05-15 11:47:13.602885] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:42.842 [2024-05-15 11:47:13.602926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:42.842 [2024-05-15 11:47:13.602942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:42.842 [2024-05-15 11:47:13.602951] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:42.842 [2024-05-15 11:47:13.602961] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:43.102 [2024-05-15 11:47:13.613196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:43.102 qpair failed and we were unable to recover it. 00:23:43.102 [2024-05-15 11:47:13.623007] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:43.102 [2024-05-15 11:47:13.623050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:43.102 [2024-05-15 11:47:13.623071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:43.102 [2024-05-15 11:47:13.623081] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:43.102 [2024-05-15 11:47:13.623090] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:43.102 [2024-05-15 11:47:13.633428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:43.102 qpair failed and we were unable to recover it. 
00:23:43.102 [2024-05-15 11:47:13.643020] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.102 [2024-05-15 11:47:13.643069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.102 [2024-05-15 11:47:13.643086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.102 [2024-05-15 11:47:13.643095] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.102 [2024-05-15 11:47:13.643104] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.102 [2024-05-15 11:47:13.653410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.102 qpair failed and we were unable to recover it.
00:23:43.102 [2024-05-15 11:47:13.663083] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.102 [2024-05-15 11:47:13.663121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.102 [2024-05-15 11:47:13.663138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.102 [2024-05-15 11:47:13.663147] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.102 [2024-05-15 11:47:13.663156] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.102 [2024-05-15 11:47:13.673367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.102 qpair failed and we were unable to recover it.
00:23:43.102 [2024-05-15 11:47:13.683087] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.102 [2024-05-15 11:47:13.683127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.102 [2024-05-15 11:47:13.683144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.102 [2024-05-15 11:47:13.683153] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.102 [2024-05-15 11:47:13.683162] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.102 [2024-05-15 11:47:13.693475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.102 qpair failed and we were unable to recover it.
00:23:43.102 [2024-05-15 11:47:13.703249] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.102 [2024-05-15 11:47:13.703286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.102 [2024-05-15 11:47:13.703301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.102 [2024-05-15 11:47:13.703311] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.102 [2024-05-15 11:47:13.703319] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.102 [2024-05-15 11:47:13.713616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.102 qpair failed and we were unable to recover it.
00:23:43.102 [2024-05-15 11:47:13.723255] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.102 [2024-05-15 11:47:13.723292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.102 [2024-05-15 11:47:13.723308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.102 [2024-05-15 11:47:13.723321] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.102 [2024-05-15 11:47:13.723330] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.102 [2024-05-15 11:47:13.733701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.102 qpair failed and we were unable to recover it.
00:23:43.102 [2024-05-15 11:47:13.743453] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.102 [2024-05-15 11:47:13.743492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.102 [2024-05-15 11:47:13.743510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.102 [2024-05-15 11:47:13.743520] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.102 [2024-05-15 11:47:13.743529] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.102 [2024-05-15 11:47:13.753663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.102 qpair failed and we were unable to recover it.
00:23:43.102 [2024-05-15 11:47:13.763325] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.102 [2024-05-15 11:47:13.763364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.102 [2024-05-15 11:47:13.763382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.102 [2024-05-15 11:47:13.763392] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.102 [2024-05-15 11:47:13.763401] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.102 [2024-05-15 11:47:13.773737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.102 qpair failed and we were unable to recover it.
00:23:43.102 [2024-05-15 11:47:13.783405] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.102 [2024-05-15 11:47:13.783444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.102 [2024-05-15 11:47:13.783459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.102 [2024-05-15 11:47:13.783469] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.102 [2024-05-15 11:47:13.783478] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.102 [2024-05-15 11:47:13.793785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.102 qpair failed and we were unable to recover it.
00:23:43.102 [2024-05-15 11:47:13.803420] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.103 [2024-05-15 11:47:13.803459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.103 [2024-05-15 11:47:13.803475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.103 [2024-05-15 11:47:13.803484] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.103 [2024-05-15 11:47:13.803492] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.103 [2024-05-15 11:47:13.813830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.103 qpair failed and we were unable to recover it.
00:23:43.103 [2024-05-15 11:47:13.823527] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.103 [2024-05-15 11:47:13.823568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.103 [2024-05-15 11:47:13.823585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.103 [2024-05-15 11:47:13.823594] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.103 [2024-05-15 11:47:13.823603] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.103 [2024-05-15 11:47:13.833808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.103 qpair failed and we were unable to recover it.
00:23:43.103 [2024-05-15 11:47:13.843578] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.103 [2024-05-15 11:47:13.843615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.103 [2024-05-15 11:47:13.843631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.103 [2024-05-15 11:47:13.843640] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.103 [2024-05-15 11:47:13.843649] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.103 [2024-05-15 11:47:13.853990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.103 qpair failed and we were unable to recover it.
00:23:43.103 [2024-05-15 11:47:13.863660] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.103 [2024-05-15 11:47:13.863697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.103 [2024-05-15 11:47:13.863713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.103 [2024-05-15 11:47:13.863722] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.103 [2024-05-15 11:47:13.863731] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.364 [2024-05-15 11:47:13.873935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.364 qpair failed and we were unable to recover it.
00:23:43.364 [2024-05-15 11:47:13.883672] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.364 [2024-05-15 11:47:13.883710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.364 [2024-05-15 11:47:13.883726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.364 [2024-05-15 11:47:13.883735] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.364 [2024-05-15 11:47:13.883744] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.364 [2024-05-15 11:47:13.894168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.364 qpair failed and we were unable to recover it.
00:23:43.364 [2024-05-15 11:47:13.903769] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.364 [2024-05-15 11:47:13.903809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.364 [2024-05-15 11:47:13.903828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.364 [2024-05-15 11:47:13.903838] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.364 [2024-05-15 11:47:13.903846] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.364 [2024-05-15 11:47:13.914074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.364 qpair failed and we were unable to recover it.
00:23:43.364 [2024-05-15 11:47:13.923841] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.364 [2024-05-15 11:47:13.923879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.364 [2024-05-15 11:47:13.923895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.364 [2024-05-15 11:47:13.923904] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.364 [2024-05-15 11:47:13.923913] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.364 [2024-05-15 11:47:13.934323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.364 qpair failed and we were unable to recover it.
00:23:43.364 [2024-05-15 11:47:13.943836] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.365 [2024-05-15 11:47:13.943870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.365 [2024-05-15 11:47:13.943885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.365 [2024-05-15 11:47:13.943894] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.365 [2024-05-15 11:47:13.943903] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.365 [2024-05-15 11:47:13.954274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.365 qpair failed and we were unable to recover it.
00:23:43.365 [2024-05-15 11:47:13.963876] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.365 [2024-05-15 11:47:13.963916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.365 [2024-05-15 11:47:13.963932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.365 [2024-05-15 11:47:13.963942] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.365 [2024-05-15 11:47:13.963950] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.365 [2024-05-15 11:47:13.974231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.365 qpair failed and we were unable to recover it.
00:23:43.365 [2024-05-15 11:47:13.983863] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.365 [2024-05-15 11:47:13.983902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.365 [2024-05-15 11:47:13.983918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.365 [2024-05-15 11:47:13.983927] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.365 [2024-05-15 11:47:13.983939] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.365 [2024-05-15 11:47:13.994334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.365 qpair failed and we were unable to recover it.
00:23:43.365 [2024-05-15 11:47:14.004025] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.365 [2024-05-15 11:47:14.004077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.365 [2024-05-15 11:47:14.004093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.365 [2024-05-15 11:47:14.004102] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.365 [2024-05-15 11:47:14.004111] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.365 [2024-05-15 11:47:14.014469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.365 qpair failed and we were unable to recover it.
00:23:43.365 [2024-05-15 11:47:14.023981] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.365 [2024-05-15 11:47:14.024022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.365 [2024-05-15 11:47:14.024038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.365 [2024-05-15 11:47:14.024047] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.365 [2024-05-15 11:47:14.024060] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.365 [2024-05-15 11:47:14.034411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.365 qpair failed and we were unable to recover it.
00:23:43.365 [2024-05-15 11:47:14.044134] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.365 [2024-05-15 11:47:14.044174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.365 [2024-05-15 11:47:14.044190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.365 [2024-05-15 11:47:14.044199] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.365 [2024-05-15 11:47:14.044208] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.365 [2024-05-15 11:47:14.054431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.365 qpair failed and we were unable to recover it.
00:23:43.365 [2024-05-15 11:47:14.064199] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.365 [2024-05-15 11:47:14.064236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.365 [2024-05-15 11:47:14.064253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.365 [2024-05-15 11:47:14.064262] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.365 [2024-05-15 11:47:14.064270] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.365 [2024-05-15 11:47:14.074638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.365 qpair failed and we were unable to recover it.
00:23:43.365 [2024-05-15 11:47:14.084271] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.365 [2024-05-15 11:47:14.084318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.365 [2024-05-15 11:47:14.084335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.365 [2024-05-15 11:47:14.084344] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.365 [2024-05-15 11:47:14.084353] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.365 [2024-05-15 11:47:14.094790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.365 qpair failed and we were unable to recover it.
00:23:43.365 [2024-05-15 11:47:14.104296] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.365 [2024-05-15 11:47:14.104338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.365 [2024-05-15 11:47:14.104354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.365 [2024-05-15 11:47:14.104364] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.365 [2024-05-15 11:47:14.104372] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.365 [2024-05-15 11:47:14.114605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.365 qpair failed and we were unable to recover it.
00:23:43.365 [2024-05-15 11:47:14.124407] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.365 [2024-05-15 11:47:14.124446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.365 [2024-05-15 11:47:14.124462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.365 [2024-05-15 11:47:14.124471] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.365 [2024-05-15 11:47:14.124480] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.626 [2024-05-15 11:47:14.134767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.626 qpair failed and we were unable to recover it.
00:23:43.626 [2024-05-15 11:47:14.144410] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.626 [2024-05-15 11:47:14.144452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.626 [2024-05-15 11:47:14.144468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.626 [2024-05-15 11:47:14.144478] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.626 [2024-05-15 11:47:14.144487] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.626 [2024-05-15 11:47:14.154808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.626 qpair failed and we were unable to recover it.
00:23:43.626 [2024-05-15 11:47:14.164524] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.626 [2024-05-15 11:47:14.164563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.626 [2024-05-15 11:47:14.164580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.626 [2024-05-15 11:47:14.164593] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.626 [2024-05-15 11:47:14.164602] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.626 [2024-05-15 11:47:14.174641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.626 qpair failed and we were unable to recover it.
00:23:43.626 [2024-05-15 11:47:14.184579] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.626 [2024-05-15 11:47:14.184616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.626 [2024-05-15 11:47:14.184632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.626 [2024-05-15 11:47:14.184641] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.626 [2024-05-15 11:47:14.184650] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.626 [2024-05-15 11:47:14.194916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.626 qpair failed and we were unable to recover it.
00:23:43.626 [2024-05-15 11:47:14.204605] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.626 [2024-05-15 11:47:14.204638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.626 [2024-05-15 11:47:14.204654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.626 [2024-05-15 11:47:14.204664] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.626 [2024-05-15 11:47:14.204672] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.626 [2024-05-15 11:47:14.215003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.626 qpair failed and we were unable to recover it.
00:23:43.626 [2024-05-15 11:47:14.224633] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.626 [2024-05-15 11:47:14.224670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.626 [2024-05-15 11:47:14.224686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.626 [2024-05-15 11:47:14.224696] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.626 [2024-05-15 11:47:14.224704] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.626 [2024-05-15 11:47:14.235091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.626 qpair failed and we were unable to recover it.
00:23:43.626 [2024-05-15 11:47:14.244773] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.626 [2024-05-15 11:47:14.244811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.626 [2024-05-15 11:47:14.244826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.626 [2024-05-15 11:47:14.244835] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.626 [2024-05-15 11:47:14.244844] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.626 [2024-05-15 11:47:14.255088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.626 qpair failed and we were unable to recover it.
00:23:43.626 [2024-05-15 11:47:14.264830] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.626 [2024-05-15 11:47:14.264871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.626 [2024-05-15 11:47:14.264887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.626 [2024-05-15 11:47:14.264896] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.626 [2024-05-15 11:47:14.264905] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.626 [2024-05-15 11:47:14.275106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.626 qpair failed and we were unable to recover it.
00:23:43.626 [2024-05-15 11:47:14.284813] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.626 [2024-05-15 11:47:14.284852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.626 [2024-05-15 11:47:14.284868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.626 [2024-05-15 11:47:14.284877] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.626 [2024-05-15 11:47:14.284886] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.626 [2024-05-15 11:47:14.295175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.626 qpair failed and we were unable to recover it.
00:23:43.626 [2024-05-15 11:47:14.304945] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.626 [2024-05-15 11:47:14.304983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.626 [2024-05-15 11:47:14.304999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.626 [2024-05-15 11:47:14.305008] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.626 [2024-05-15 11:47:14.305017] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.627 [2024-05-15 11:47:14.315128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.627 qpair failed and we were unable to recover it.
00:23:43.627 [2024-05-15 11:47:14.325041] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.627 [2024-05-15 11:47:14.325079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.627 [2024-05-15 11:47:14.325095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.627 [2024-05-15 11:47:14.325104] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.627 [2024-05-15 11:47:14.325113] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.627 [2024-05-15 11:47:14.335398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.627 qpair failed and we were unable to recover it.
00:23:43.627 [2024-05-15 11:47:14.345065] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.627 [2024-05-15 11:47:14.345102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.627 [2024-05-15 11:47:14.345121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.627 [2024-05-15 11:47:14.345130] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.627 [2024-05-15 11:47:14.345139] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.627 [2024-05-15 11:47:14.355407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.627 qpair failed and we were unable to recover it.
00:23:43.627 [2024-05-15 11:47:14.365134] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.627 [2024-05-15 11:47:14.365172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.627 [2024-05-15 11:47:14.365188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.627 [2024-05-15 11:47:14.365198] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.627 [2024-05-15 11:47:14.365207] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.627 [2024-05-15 11:47:14.375609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.627 qpair failed and we were unable to recover it.
00:23:43.627 [2024-05-15 11:47:14.385151] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.627 [2024-05-15 11:47:14.385191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.627 [2024-05-15 11:47:14.385206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.627 [2024-05-15 11:47:14.385216] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.627 [2024-05-15 11:47:14.385224] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.888 [2024-05-15 11:47:14.395629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.888 qpair failed and we were unable to recover it.
00:23:43.888 [2024-05-15 11:47:14.405235] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.888 [2024-05-15 11:47:14.405276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.888 [2024-05-15 11:47:14.405292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.888 [2024-05-15 11:47:14.405302] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.888 [2024-05-15 11:47:14.405311] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.888 [2024-05-15 11:47:14.415637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.888 qpair failed and we were unable to recover it.
00:23:43.888 [2024-05-15 11:47:14.425174] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.888 [2024-05-15 11:47:14.425209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.888 [2024-05-15 11:47:14.425224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.888 [2024-05-15 11:47:14.425234] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.888 [2024-05-15 11:47:14.425245] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.888 [2024-05-15 11:47:14.435712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.888 qpair failed and we were unable to recover it.
00:23:43.888 [2024-05-15 11:47:14.445276] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.888 [2024-05-15 11:47:14.445315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.888 [2024-05-15 11:47:14.445331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.888 [2024-05-15 11:47:14.445340] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.888 [2024-05-15 11:47:14.445349] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.888 [2024-05-15 11:47:14.455713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.888 qpair failed and we were unable to recover it.
00:23:43.888 [2024-05-15 11:47:14.465414] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.888 [2024-05-15 11:47:14.465455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.888 [2024-05-15 11:47:14.465471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.888 [2024-05-15 11:47:14.465481] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.888 [2024-05-15 11:47:14.465490] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.888 [2024-05-15 11:47:14.475803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.888 qpair failed and we were unable to recover it.
00:23:43.888 [2024-05-15 11:47:14.485448] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.888 [2024-05-15 11:47:14.485488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.888 [2024-05-15 11:47:14.485504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.888 [2024-05-15 11:47:14.485513] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.888 [2024-05-15 11:47:14.485522] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.888 [2024-05-15 11:47:14.495666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.888 qpair failed and we were unable to recover it.
00:23:43.888 [2024-05-15 11:47:14.505498] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.888 [2024-05-15 11:47:14.505540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.888 [2024-05-15 11:47:14.505556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.888 [2024-05-15 11:47:14.505566] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.888 [2024-05-15 11:47:14.505574] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.888 [2024-05-15 11:47:14.515827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.888 qpair failed and we were unable to recover it.
00:23:43.888 [2024-05-15 11:47:14.525564] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.888 [2024-05-15 11:47:14.525599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.888 [2024-05-15 11:47:14.525616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.888 [2024-05-15 11:47:14.525625] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.888 [2024-05-15 11:47:14.525634] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.888 [2024-05-15 11:47:14.535947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.888 qpair failed and we were unable to recover it.
00:23:43.888 [2024-05-15 11:47:14.545679] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.888 [2024-05-15 11:47:14.545720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.888 [2024-05-15 11:47:14.545737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.888 [2024-05-15 11:47:14.545746] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.888 [2024-05-15 11:47:14.545755] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.888 [2024-05-15 11:47:14.555802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.888 qpair failed and we were unable to recover it.
00:23:43.888 [2024-05-15 11:47:14.565676] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.888 [2024-05-15 11:47:14.565716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.888 [2024-05-15 11:47:14.565732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.888 [2024-05-15 11:47:14.565741] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.888 [2024-05-15 11:47:14.565750] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.888 [2024-05-15 11:47:14.575938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.888 qpair failed and we were unable to recover it.
00:23:43.888 [2024-05-15 11:47:14.585695] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.888 [2024-05-15 11:47:14.585738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.888 [2024-05-15 11:47:14.585754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.888 [2024-05-15 11:47:14.585764] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.888 [2024-05-15 11:47:14.585772] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.889 [2024-05-15 11:47:14.596070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.889 qpair failed and we were unable to recover it.
00:23:43.889 [2024-05-15 11:47:14.605783] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.889 [2024-05-15 11:47:14.605817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.889 [2024-05-15 11:47:14.605833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.889 [2024-05-15 11:47:14.605845] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.889 [2024-05-15 11:47:14.605854] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.889 [2024-05-15 11:47:14.616197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.889 qpair failed and we were unable to recover it.
00:23:43.889 [2024-05-15 11:47:14.625815] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.889 [2024-05-15 11:47:14.625854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.889 [2024-05-15 11:47:14.625870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.889 [2024-05-15 11:47:14.625880] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.889 [2024-05-15 11:47:14.625889] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:43.889 [2024-05-15 11:47:14.636079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:43.889 qpair failed and we were unable to recover it.
00:23:43.889 [2024-05-15 11:47:14.645933] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:43.889 [2024-05-15 11:47:14.645972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:43.889 [2024-05-15 11:47:14.645987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:43.889 [2024-05-15 11:47:14.645997] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:43.889 [2024-05-15 11:47:14.646006] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:44.148 [2024-05-15 11:47:14.655984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:44.148 qpair failed and we were unable to recover it.
00:23:44.148 [2024-05-15 11:47:14.665964] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.148 [2024-05-15 11:47:14.665999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.148 [2024-05-15 11:47:14.666015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.148 [2024-05-15 11:47:14.666025] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.148 [2024-05-15 11:47:14.666033] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:44.148 [2024-05-15 11:47:14.676358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:44.148 qpair failed and we were unable to recover it.
00:23:44.148 [2024-05-15 11:47:14.686033] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.148 [2024-05-15 11:47:14.686070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.148 [2024-05-15 11:47:14.686086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.148 [2024-05-15 11:47:14.686096] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.148 [2024-05-15 11:47:14.686105] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:44.149 [2024-05-15 11:47:14.696314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:44.149 qpair failed and we were unable to recover it.
00:23:44.149 [2024-05-15 11:47:14.706061] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.149 [2024-05-15 11:47:14.706101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.149 [2024-05-15 11:47:14.706117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.149 [2024-05-15 11:47:14.706126] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.149 [2024-05-15 11:47:14.706135] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:44.149 [2024-05-15 11:47:14.716283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:44.149 qpair failed and we were unable to recover it.
00:23:44.149 [2024-05-15 11:47:14.726084] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.149 [2024-05-15 11:47:14.726125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.149 [2024-05-15 11:47:14.726141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.149 [2024-05-15 11:47:14.726151] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.149 [2024-05-15 11:47:14.726159] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:44.149 [2024-05-15 11:47:14.736372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:44.149 qpair failed and we were unable to recover it.
00:23:44.149 [2024-05-15 11:47:14.746174] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.149 [2024-05-15 11:47:14.746207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.149 [2024-05-15 11:47:14.746223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.149 [2024-05-15 11:47:14.746233] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.149 [2024-05-15 11:47:14.746241] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:44.149 [2024-05-15 11:47:14.756544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:44.149 qpair failed and we were unable to recover it.
00:23:44.149 [2024-05-15 11:47:14.766262] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.149 [2024-05-15 11:47:14.766297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.149 [2024-05-15 11:47:14.766314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.149 [2024-05-15 11:47:14.766324] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.149 [2024-05-15 11:47:14.766332] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:44.149 [2024-05-15 11:47:14.776515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:44.149 qpair failed and we were unable to recover it.
00:23:44.149 [2024-05-15 11:47:14.786290] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:44.149 [2024-05-15 11:47:14.786332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:44.149 [2024-05-15 11:47:14.786351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:44.149 [2024-05-15 11:47:14.786360] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:44.149 [2024-05-15 11:47:14.786369] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:44.149 [2024-05-15 11:47:14.796679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:44.149 qpair failed and we were unable to recover it. 00:23:44.149 [2024-05-15 11:47:14.806424] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:44.149 [2024-05-15 11:47:14.806463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:44.149 [2024-05-15 11:47:14.806479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:44.149 [2024-05-15 11:47:14.806488] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:44.149 [2024-05-15 11:47:14.806497] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:44.149 [2024-05-15 11:47:14.816715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:44.149 qpair failed and we were unable to recover it. 00:23:44.149 [2024-05-15 11:47:14.826401] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:44.149 [2024-05-15 11:47:14.826437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:44.149 [2024-05-15 11:47:14.826453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:44.149 [2024-05-15 11:47:14.826462] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:44.149 [2024-05-15 11:47:14.826471] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:44.149 [2024-05-15 11:47:14.836776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:44.149 qpair failed and we were unable to recover it. 
00:23:44.149 [2024-05-15 11:47:14.846411] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:44.149 [2024-05-15 11:47:14.846445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:44.149 [2024-05-15 11:47:14.846461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:44.149 [2024-05-15 11:47:14.846471] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:44.149 [2024-05-15 11:47:14.846480] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:44.149 [2024-05-15 11:47:14.856849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:44.149 qpair failed and we were unable to recover it. 00:23:44.149 [2024-05-15 11:47:14.866567] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:44.149 [2024-05-15 11:47:14.866606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:44.149 [2024-05-15 11:47:14.866623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:44.149 [2024-05-15 11:47:14.866633] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:44.149 [2024-05-15 11:47:14.866645] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:44.149 [2024-05-15 11:47:14.876893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:44.149 qpair failed and we were unable to recover it. 00:23:44.149 [2024-05-15 11:47:14.886665] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:44.149 [2024-05-15 11:47:14.886715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:44.149 [2024-05-15 11:47:14.886730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:44.149 [2024-05-15 11:47:14.886740] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:44.149 [2024-05-15 11:47:14.886749] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:44.149 [2024-05-15 11:47:14.896910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:44.149 qpair failed and we were unable to recover it. 
00:23:44.149 [2024-05-15 11:47:14.906639] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.149 [2024-05-15 11:47:14.906676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.149 [2024-05-15 11:47:14.906692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.149 [2024-05-15 11:47:14.906701] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.149 [2024-05-15 11:47:14.906710] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:44.408 [2024-05-15 11:47:14.917069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:44.408 qpair failed and we were unable to recover it.
00:23:44.408 [2024-05-15 11:47:14.926721] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.408 [2024-05-15 11:47:14.926764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.409 [2024-05-15 11:47:14.926781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.409 [2024-05-15 11:47:14.926790] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.409 [2024-05-15 11:47:14.926799] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:44.409 [2024-05-15 11:47:14.937031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:44.409 qpair failed and we were unable to recover it.
00:23:44.409 [2024-05-15 11:47:14.946774] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.409 [2024-05-15 11:47:14.946815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.409 [2024-05-15 11:47:14.946831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.409 [2024-05-15 11:47:14.946840] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.409 [2024-05-15 11:47:14.946848] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:44.409 [2024-05-15 11:47:14.957336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:44.409 qpair failed and we were unable to recover it.
00:23:44.409 [2024-05-15 11:47:14.966868] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.409 [2024-05-15 11:47:14.966914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.409 [2024-05-15 11:47:14.966931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.409 [2024-05-15 11:47:14.966940] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.409 [2024-05-15 11:47:14.966949] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:44.409 [2024-05-15 11:47:14.977159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:44.409 qpair failed and we were unable to recover it.
00:23:44.409 [2024-05-15 11:47:14.986976] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.409 [2024-05-15 11:47:14.987010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.409 [2024-05-15 11:47:14.987026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.409 [2024-05-15 11:47:14.987035] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.409 [2024-05-15 11:47:14.987044] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:44.409 [2024-05-15 11:47:14.997244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:44.409 qpair failed and we were unable to recover it.
00:23:44.409 [2024-05-15 11:47:15.007032] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.409 [2024-05-15 11:47:15.007074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.409 [2024-05-15 11:47:15.007090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.409 [2024-05-15 11:47:15.007100] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.409 [2024-05-15 11:47:15.007108] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:44.409 [2024-05-15 11:47:15.017359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:44.409 qpair failed and we were unable to recover it.
00:23:44.409 [2024-05-15 11:47:15.027075] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.409 [2024-05-15 11:47:15.027115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.409 [2024-05-15 11:47:15.027132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.409 [2024-05-15 11:47:15.027141] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.409 [2024-05-15 11:47:15.027149] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:44.409 [2024-05-15 11:47:15.037384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:44.409 qpair failed and we were unable to recover it.
00:23:44.409 [2024-05-15 11:47:15.047025] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.409 [2024-05-15 11:47:15.047068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.409 [2024-05-15 11:47:15.047083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.409 [2024-05-15 11:47:15.047096] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.409 [2024-05-15 11:47:15.047105] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:44.409 [2024-05-15 11:47:15.057553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:44.409 qpair failed and we were unable to recover it.
00:23:44.409 [2024-05-15 11:47:15.067199] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.409 [2024-05-15 11:47:15.067240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.409 [2024-05-15 11:47:15.067257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.409 [2024-05-15 11:47:15.067266] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.409 [2024-05-15 11:47:15.067275] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:44.409 [2024-05-15 11:47:15.077356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:44.409 qpair failed and we were unable to recover it.
00:23:44.409 [2024-05-15 11:47:15.087289] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.409 [2024-05-15 11:47:15.087322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.409 [2024-05-15 11:47:15.087338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.409 [2024-05-15 11:47:15.087347] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.409 [2024-05-15 11:47:15.087356] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:44.409 [2024-05-15 11:47:15.097608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:44.409 qpair failed and we were unable to recover it.
00:23:44.409 [2024-05-15 11:47:15.107408] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.409 [2024-05-15 11:47:15.107447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.409 [2024-05-15 11:47:15.107463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.409 [2024-05-15 11:47:15.107472] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.409 [2024-05-15 11:47:15.107481] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:44.409 [2024-05-15 11:47:15.117615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:44.409 qpair failed and we were unable to recover it.
00:23:44.409 [2024-05-15 11:47:15.127490] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.409 [2024-05-15 11:47:15.127534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.409 [2024-05-15 11:47:15.127549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.409 [2024-05-15 11:47:15.127559] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.409 [2024-05-15 11:47:15.127568] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:44.409 [2024-05-15 11:47:15.137616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:44.409 qpair failed and we were unable to recover it.
00:23:44.409 [2024-05-15 11:47:15.147482] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.409 [2024-05-15 11:47:15.147518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.409 [2024-05-15 11:47:15.147534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.409 [2024-05-15 11:47:15.147543] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.409 [2024-05-15 11:47:15.147551] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:44.409 [2024-05-15 11:47:15.157649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:44.409 qpair failed and we were unable to recover it.
00:23:44.409 [2024-05-15 11:47:15.167600] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.409 [2024-05-15 11:47:15.167639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.409 [2024-05-15 11:47:15.167656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.409 [2024-05-15 11:47:15.167665] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.409 [2024-05-15 11:47:15.167674] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:44.668 [2024-05-15 11:47:15.177785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:44.668 qpair failed and we were unable to recover it.
00:23:44.668 [2024-05-15 11:47:15.187569] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.668 [2024-05-15 11:47:15.187609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.668 [2024-05-15 11:47:15.187625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.668 [2024-05-15 11:47:15.187635] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.668 [2024-05-15 11:47:15.187644] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:44.668 [2024-05-15 11:47:15.197835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:44.668 qpair failed and we were unable to recover it.
00:23:44.668 [2024-05-15 11:47:15.207662] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.668 [2024-05-15 11:47:15.207699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.668 [2024-05-15 11:47:15.207716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.668 [2024-05-15 11:47:15.207725] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.668 [2024-05-15 11:47:15.207734] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:44.668 [2024-05-15 11:47:15.217817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:44.668 qpair failed and we were unable to recover it.
00:23:44.668 [2024-05-15 11:47:15.227720] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.668 [2024-05-15 11:47:15.227756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.668 [2024-05-15 11:47:15.227776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.668 [2024-05-15 11:47:15.227786] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.668 [2024-05-15 11:47:15.227794] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:44.668 [2024-05-15 11:47:15.238074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:44.668 qpair failed and we were unable to recover it.
00:23:44.668 [2024-05-15 11:47:15.247787] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.668 [2024-05-15 11:47:15.247823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.668 [2024-05-15 11:47:15.247839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.668 [2024-05-15 11:47:15.247848] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.668 [2024-05-15 11:47:15.247858] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:44.668 [2024-05-15 11:47:15.258029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:44.668 qpair failed and we were unable to recover it.
00:23:44.668 [2024-05-15 11:47:15.267802] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.669 [2024-05-15 11:47:15.267840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.669 [2024-05-15 11:47:15.267856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.669 [2024-05-15 11:47:15.267866] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.669 [2024-05-15 11:47:15.267874] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:44.669 [2024-05-15 11:47:15.278058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:44.669 qpair failed and we were unable to recover it.
00:23:44.669 [2024-05-15 11:47:15.287957] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.669 [2024-05-15 11:47:15.287994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.669 [2024-05-15 11:47:15.288009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.669 [2024-05-15 11:47:15.288019] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.669 [2024-05-15 11:47:15.288027] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:44.669 [2024-05-15 11:47:15.298308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:44.669 qpair failed and we were unable to recover it.
00:23:44.669 [2024-05-15 11:47:15.307901] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.669 [2024-05-15 11:47:15.307937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.669 [2024-05-15 11:47:15.307952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.669 [2024-05-15 11:47:15.307961] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.669 [2024-05-15 11:47:15.307973] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:44.669 [2024-05-15 11:47:15.318160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:44.669 qpair failed and we were unable to recover it.
00:23:44.669 [2024-05-15 11:47:15.327988] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.669 [2024-05-15 11:47:15.328026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.669 [2024-05-15 11:47:15.328042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.669 [2024-05-15 11:47:15.328052] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.669 [2024-05-15 11:47:15.328072] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:44.669 [2024-05-15 11:47:15.338298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:44.669 qpair failed and we were unable to recover it.
00:23:44.669 [2024-05-15 11:47:15.348050] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.669 [2024-05-15 11:47:15.348094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.669 [2024-05-15 11:47:15.348110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.669 [2024-05-15 11:47:15.348119] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.669 [2024-05-15 11:47:15.348128] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:44.669 [2024-05-15 11:47:15.358478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:44.669 qpair failed and we were unable to recover it.
00:23:44.669 [2024-05-15 11:47:15.368044] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.669 [2024-05-15 11:47:15.368090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.669 [2024-05-15 11:47:15.368106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.669 [2024-05-15 11:47:15.368116] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.669 [2024-05-15 11:47:15.368125] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:44.669 [2024-05-15 11:47:15.378321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:44.669 qpair failed and we were unable to recover it.
00:23:44.669 [2024-05-15 11:47:15.388268] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.669 [2024-05-15 11:47:15.388310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.669 [2024-05-15 11:47:15.388326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.669 [2024-05-15 11:47:15.388335] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.669 [2024-05-15 11:47:15.388344] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:23:44.669 [2024-05-15 11:47:15.398453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:44.669 qpair failed and we were unable to recover it.
00:23:44.669 [2024-05-15 11:47:15.408335] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.669 [2024-05-15 11:47:15.408383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.669 [2024-05-15 11:47:15.408413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.669 [2024-05-15 11:47:15.408428] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.669 [2024-05-15 11:47:15.408442] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:23:44.669 [2024-05-15 11:47:15.418649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:44.669 qpair failed and we were unable to recover it.
00:23:44.669 [2024-05-15 11:47:15.428266] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.669 [2024-05-15 11:47:15.428305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.669 [2024-05-15 11:47:15.428323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.669 [2024-05-15 11:47:15.428333] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.669 [2024-05-15 11:47:15.428342] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:23:44.929 [2024-05-15 11:47:15.438583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:44.929 qpair failed and we were unable to recover it.
00:23:44.929 [2024-05-15 11:47:15.448489] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.929 [2024-05-15 11:47:15.448529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.929 [2024-05-15 11:47:15.448547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.929 [2024-05-15 11:47:15.448557] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.929 [2024-05-15 11:47:15.448567] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:23:44.929 [2024-05-15 11:47:15.458688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:44.929 qpair failed and we were unable to recover it.
00:23:44.929 [2024-05-15 11:47:15.468284] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.929 [2024-05-15 11:47:15.468324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.929 [2024-05-15 11:47:15.468342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.929 [2024-05-15 11:47:15.468352] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.929 [2024-05-15 11:47:15.468361] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:23:44.929 [2024-05-15 11:47:15.478802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:44.929 qpair failed and we were unable to recover it.
00:23:44.929 [2024-05-15 11:47:15.488491] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.929 [2024-05-15 11:47:15.488531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.929 [2024-05-15 11:47:15.488549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.929 [2024-05-15 11:47:15.488565] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.929 [2024-05-15 11:47:15.488573] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:23:44.929 [2024-05-15 11:47:15.498780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:44.929 qpair failed and we were unable to recover it.
00:23:44.929 [2024-05-15 11:47:15.508497] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.929 [2024-05-15 11:47:15.508537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.929 [2024-05-15 11:47:15.508555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.929 [2024-05-15 11:47:15.508564] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.929 [2024-05-15 11:47:15.508573] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:23:44.929 [2024-05-15 11:47:15.518976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:44.929 qpair failed and we were unable to recover it.
00:23:44.929 [2024-05-15 11:47:15.528544] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.929 [2024-05-15 11:47:15.528586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.929 [2024-05-15 11:47:15.528603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.929 [2024-05-15 11:47:15.528613] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.929 [2024-05-15 11:47:15.528623] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:23:44.929 [2024-05-15 11:47:15.538962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:44.929 qpair failed and we were unable to recover it.
00:23:44.929 [2024-05-15 11:47:15.548495] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.929 [2024-05-15 11:47:15.548537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.929 [2024-05-15 11:47:15.548554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.929 [2024-05-15 11:47:15.548564] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.929 [2024-05-15 11:47:15.548573] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:23:44.929 [2024-05-15 11:47:15.559112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:44.929 qpair failed and we were unable to recover it.
00:23:44.929 [2024-05-15 11:47:15.568584] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.929 [2024-05-15 11:47:15.568624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.929 [2024-05-15 11:47:15.568641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.929 [2024-05-15 11:47:15.568651] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.929 [2024-05-15 11:47:15.568660] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:23:44.929 [2024-05-15 11:47:15.578968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:44.929 qpair failed and we were unable to recover it.
00:23:44.929 [2024-05-15 11:47:15.588714] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.929 [2024-05-15 11:47:15.588754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.929 [2024-05-15 11:47:15.588772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.929 [2024-05-15 11:47:15.588782] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.929 [2024-05-15 11:47:15.588791] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:23:44.929 [2024-05-15 11:47:15.599084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:44.929 qpair failed and we were unable to recover it.
00:23:44.929 [2024-05-15 11:47:15.608734] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.929 [2024-05-15 11:47:15.608780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.929 [2024-05-15 11:47:15.608798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.929 [2024-05-15 11:47:15.608808] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.929 [2024-05-15 11:47:15.608817] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:23:44.929 [2024-05-15 11:47:15.618988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:44.929 qpair failed and we were unable to recover it.
00:23:44.929 [2024-05-15 11:47:15.628719] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.929 [2024-05-15 11:47:15.628754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.929 [2024-05-15 11:47:15.628771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.929 [2024-05-15 11:47:15.628781] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.929 [2024-05-15 11:47:15.628789] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:23:44.929 [2024-05-15 11:47:15.639474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:44.929 qpair failed and we were unable to recover it.
00:23:44.929 [2024-05-15 11:47:15.648800] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.929 [2024-05-15 11:47:15.648835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.929 [2024-05-15 11:47:15.648853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.929 [2024-05-15 11:47:15.648863] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.929 [2024-05-15 11:47:15.648871] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:23:44.929 [2024-05-15 11:47:15.659204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:44.929 qpair failed and we were unable to recover it.
00:23:44.929 [2024-05-15 11:47:15.668913] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.929 [2024-05-15 11:47:15.668953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.929 [2024-05-15 11:47:15.668974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.929 [2024-05-15 11:47:15.668983] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.929 [2024-05-15 11:47:15.668992] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:23:44.929 [2024-05-15 11:47:15.679394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:44.929 qpair failed and we were unable to recover it.
00:23:44.929 [2024-05-15 11:47:15.688936] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:44.929 [2024-05-15 11:47:15.688977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:44.929 [2024-05-15 11:47:15.688994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:44.929 [2024-05-15 11:47:15.689003] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:44.929 [2024-05-15 11:47:15.689012] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:23:45.189 [2024-05-15 11:47:15.699321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:45.189 qpair failed and we were unable to recover it.
00:23:45.189 [2024-05-15 11:47:15.709069] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:45.189 [2024-05-15 11:47:15.709106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:45.189 [2024-05-15 11:47:15.709124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:45.189 [2024-05-15 11:47:15.709134] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:45.189 [2024-05-15 11:47:15.709143] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:23:45.189 [2024-05-15 11:47:15.719242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:45.189 qpair failed and we were unable to recover it.
00:23:45.189 [2024-05-15 11:47:15.729149] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:45.189 [2024-05-15 11:47:15.729183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:45.189 [2024-05-15 11:47:15.729200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:45.189 [2024-05-15 11:47:15.729209] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:45.189 [2024-05-15 11:47:15.729218] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:23:45.189 [2024-05-15 11:47:15.739482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:45.189 qpair failed and we were unable to recover it.
00:23:45.189 [2024-05-15 11:47:15.749147] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:45.189 [2024-05-15 11:47:15.749188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:45.189 [2024-05-15 11:47:15.749205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:45.189 [2024-05-15 11:47:15.749214] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:45.189 [2024-05-15 11:47:15.749227] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:23:45.189 [2024-05-15 11:47:15.759539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:45.189 qpair failed and we were unable to recover it.
00:23:45.189 [2024-05-15 11:47:15.759696] nvme_ctrlr.c:4341:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:23:45.189 A controller has encountered a failure and is being reset.
00:23:45.189 [2024-05-15 11:47:15.759822] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:23:45.189 [2024-05-15 11:47:15.761786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:23:45.189 Controller properly reset.
00:23:46.126 Write completed with error (sct=0, sc=8)
00:23:46.126 starting I/O failed
00:23:46.126 Write completed with error (sct=0, sc=8)
00:23:46.126 starting I/O failed
00:23:46.126 Write completed with error (sct=0, sc=8)
00:23:46.126 starting I/O failed
00:23:46.126 Read completed with error (sct=0, sc=8)
00:23:46.126 starting I/O failed
00:23:46.126 Write completed with error (sct=0, sc=8)
00:23:46.126 starting I/O failed
00:23:46.126 Read completed with error (sct=0, sc=8)
00:23:46.126 starting I/O failed
00:23:46.126 Read completed with error (sct=0, sc=8)
00:23:46.126 starting I/O failed
00:23:46.126 Write completed with error (sct=0, sc=8)
00:23:46.126 starting I/O failed
00:23:46.126 Read completed with error (sct=0, sc=8)
00:23:46.126 starting I/O failed
00:23:46.126 Read completed with error (sct=0, sc=8)
00:23:46.126 starting I/O failed
00:23:46.126 Read completed with error (sct=0, sc=8)
00:23:46.126 starting I/O failed
00:23:46.126 Write completed with error (sct=0, sc=8)
00:23:46.126 starting I/O failed
00:23:46.126 Write completed with error (sct=0, sc=8)
00:23:46.126 starting I/O failed
00:23:46.126 Write completed with error (sct=0, sc=8)
00:23:46.126 starting I/O failed
00:23:46.126 Write completed with error (sct=0, sc=8)
00:23:46.126 starting I/O failed
00:23:46.126 Write completed with error (sct=0, sc=8)
00:23:46.126 starting I/O failed
00:23:46.126 Write completed with error (sct=0, sc=8)
00:23:46.126 starting I/O failed
00:23:46.126 Write completed with error (sct=0, sc=8)
00:23:46.126 starting I/O failed
00:23:46.126 Write completed with error (sct=0, sc=8)
00:23:46.126 starting I/O failed
00:23:46.126 Write completed with error (sct=0, sc=8)
00:23:46.126 starting I/O failed
00:23:46.127 Read completed with error (sct=0, sc=8)
00:23:46.127 starting I/O failed
00:23:46.127 Write completed with error (sct=0, sc=8)
00:23:46.127 starting I/O failed
00:23:46.127 Write completed with error (sct=0, sc=8)
00:23:46.127 starting I/O failed
00:23:46.127 Read completed with error (sct=0, sc=8)
00:23:46.127 starting I/O failed
00:23:46.127 Read completed with error (sct=0, sc=8)
00:23:46.127 starting I/O failed
00:23:46.127 Read completed with error (sct=0, sc=8)
00:23:46.127 starting I/O failed
00:23:46.127 Read completed with error (sct=0, sc=8)
00:23:46.127 starting I/O failed
00:23:46.127 Read completed with error (sct=0, sc=8)
00:23:46.127 starting I/O failed
00:23:46.127 Read completed with error (sct=0, sc=8)
00:23:46.127 starting I/O failed
00:23:46.127 Read completed with error (sct=0, sc=8)
00:23:46.127 starting I/O failed
00:23:46.127 Read completed with error (sct=0, sc=8)
00:23:46.127 starting I/O failed
00:23:46.127 Write completed with error (sct=0, sc=8)
00:23:46.127 starting I/O failed
00:23:46.127 [2024-05-15 11:47:16.775223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:23:46.127 Initializing NVMe Controllers
00:23:46.127 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:23:46.127 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:23:46.127 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:23:46.127 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:23:46.127 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:23:46.127 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:23:46.127 Initialization complete. Launching workers.
00:23:46.127 Starting thread on core 1
00:23:46.127 Starting thread on core 2
00:23:46.127 Starting thread on core 3
00:23:46.127 Starting thread on core 0
00:23:46.127 11:47:16 -- host/target_disconnect.sh@59 -- # sync
00:23:46.127
00:23:46.127 real 0m12.473s
00:23:46.127 user 0m26.896s
00:23:46.127 sys 0m3.300s
00:23:46.127 11:47:16 -- common/autotest_common.sh@1122 -- # xtrace_disable
00:23:46.127 11:47:16 -- common/autotest_common.sh@10 -- # set +x
00:23:46.127 ************************************
00:23:46.127 END TEST nvmf_target_disconnect_tc2
00:23:46.127 ************************************
00:23:46.385 11:47:16 -- host/target_disconnect.sh@80 -- # '[' -n 192.168.100.9 ']'
00:23:46.385 11:47:16 -- host/target_disconnect.sh@81 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3
00:23:46.385 11:47:16 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:23:46.385 11:47:16 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:23:46.385 11:47:16 -- common/autotest_common.sh@10 -- # set +x
00:23:46.385 ************************************
00:23:46.385 START TEST nvmf_target_disconnect_tc3
00:23:46.385 ************************************
00:23:46.385 11:47:16 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc3
00:23:46.385 11:47:16 -- host/target_disconnect.sh@65 -- # reconnectpid=3128864
00:23:46.385 11:47:16 -- host/target_disconnect.sh@67 -- # sleep 2
00:23:46.385 11:47:16 -- host/target_disconnect.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'
00:23:46.385 EAL: No free 2048 kB hugepages reported on node 1
00:23:48.288 11:47:18 -- host/target_disconnect.sh@68 -- # kill -9 3127894
11:47:18 -- host/target_disconnect.sh@70 -- # sleep 2
00:23:49.665 Write completed with error (sct=0, sc=8)
00:23:49.665 starting I/O failed
00:23:49.665 Write completed with error (sct=0, sc=8)
00:23:49.665 starting I/O failed
00:23:49.665 Read completed with error (sct=0, sc=8)
00:23:49.665 starting I/O failed
00:23:49.665 Write completed with error (sct=0, sc=8)
00:23:49.665 starting I/O failed
00:23:49.665 Write completed with error (sct=0, sc=8)
00:23:49.665 starting I/O failed
00:23:49.665 Read completed with error (sct=0, sc=8)
00:23:49.665 starting I/O failed
00:23:49.665 Read completed with error (sct=0, sc=8)
00:23:49.665 starting I/O failed
00:23:49.665 Read completed with error (sct=0, sc=8)
00:23:49.665 starting I/O failed
00:23:49.665 Write completed with error (sct=0, sc=8)
00:23:49.665 starting I/O failed
00:23:49.665 Write completed with error (sct=0, sc=8)
00:23:49.665 starting I/O failed
00:23:49.665 Write completed with error (sct=0, sc=8)
00:23:49.665 starting I/O failed
00:23:49.665 Read completed with error (sct=0, sc=8)
00:23:49.665 starting I/O failed
00:23:49.665 Read completed with error (sct=0, sc=8)
00:23:49.665 starting I/O failed
00:23:49.665 Read completed with error (sct=0, sc=8)
00:23:49.665 starting I/O failed
00:23:49.665 Write completed with error (sct=0, sc=8)
00:23:49.665 starting I/O failed
00:23:49.665 Read completed with error (sct=0, sc=8)
00:23:49.665 starting I/O failed
00:23:49.665 Read completed with error (sct=0, sc=8)
00:23:49.665 starting I/O failed
00:23:49.665 Read completed with error (sct=0, sc=8)
00:23:49.665 starting I/O failed
00:23:49.665 Write completed with error (sct=0, sc=8)
00:23:49.666 starting I/O failed
00:23:49.666 Read completed with error (sct=0, sc=8)
00:23:49.666 starting I/O failed
00:23:49.666 Read completed with error (sct=0, sc=8)
00:23:49.666 starting I/O failed
00:23:49.666 Write completed with error (sct=0, sc=8)
00:23:49.666 starting I/O failed
00:23:49.666 Read completed with error (sct=0, sc=8)
00:23:49.666 starting I/O failed
00:23:49.666 Read completed with error (sct=0, sc=8)
00:23:49.666 starting I/O failed
00:23:49.666 Read completed with error (sct=0, sc=8)
00:23:49.666 starting I/O failed
00:23:49.666 Read completed with error (sct=0, sc=8)
00:23:49.666 starting I/O failed
00:23:49.666 Read completed with error (sct=0, sc=8)
00:23:49.666 starting I/O failed
00:23:49.666 Write completed with error (sct=0, sc=8)
00:23:49.666 starting I/O failed
00:23:49.666 Read completed with error (sct=0, sc=8)
00:23:49.666 starting I/O failed
00:23:49.666 Write completed with error (sct=0, sc=8)
00:23:49.666 starting I/O failed
00:23:49.666 Write completed with error (sct=0, sc=8)
00:23:49.666 starting I/O failed
00:23:49.666 Write completed with error (sct=0, sc=8)
00:23:49.666 starting I/O failed
00:23:50.233 [2024-05-15 11:47:20.119405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:50.233 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 62: 3127894 Killed "${NVMF_APP[@]}" "$@"
00:23:50.233 11:47:20 -- host/target_disconnect.sh@71 -- # disconnect_init 192.168.100.9
00:23:50.233 11:47:20 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:23:50.233 11:47:20 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:23:50.233 11:47:20 -- common/autotest_common.sh@720 -- # xtrace_disable
00:23:50.233 11:47:20 -- common/autotest_common.sh@10 -- # set +x
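[Editor's note: the tc3 flow traced above is easier to see without the xtrace noise. The sketch below reconstructs it as plain shell; the binary paths, flags, and transport string are copied verbatim from the trace, while the backgrounding, PID handling, and sleeps are assumptions standing in for the harness's run_test/nvmfappstart plumbing rather than its exact code.]

    # Hedged sketch of the tc3 sequence (paths/flags from the trace above).
    # Start the SPDK NVMe-oF target that will later be killed mid-run.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    sleep 2
    # Run the reconnect example against the primary address, with
    # 192.168.100.9 declared as the alternate target address.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' &
    reconnectpid=$!
    sleep 2
    # Kill the primary target while I/O is in flight; the harness's
    # disconnect_init then brings a fresh target up on 192.168.100.9.
    kill -9 $nvmfpid

[The kill is what produces the burst of "completed with error (sct=0, sc=8) / starting I/O failed" lines above, and the new target on 192.168.100.9 is what the log starts bringing up next.]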
00:23:50.233 11:47:20 -- nvmf/common.sh@470 -- # nvmfpid=3129410 00:23:50.233 11:47:20 -- nvmf/common.sh@471 -- # waitforlisten 3129410 00:23:50.233 11:47:20 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:23:50.233 11:47:20 -- common/autotest_common.sh@827 -- # '[' -z 3129410 ']' 00:23:50.233 11:47:20 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:50.233 11:47:20 -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:50.233 11:47:20 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:50.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:50.233 11:47:20 -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:50.234 11:47:20 -- common/autotest_common.sh@10 -- # set +x 00:23:50.493 [2024-05-15 11:47:21.007099] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 00:23:50.493 [2024-05-15 11:47:21.007161] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:50.493 EAL: No free 2048 kB hugepages reported on node 1 00:23:50.493 [2024-05-15 11:47:21.092328] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:50.493 Read completed with error (sct=0, sc=8) 00:23:50.493 starting I/O failed 00:23:50.493 Write completed with error (sct=0, sc=8) 00:23:50.493 starting I/O failed 00:23:50.493 Read completed with error (sct=0, sc=8) 00:23:50.493 starting I/O failed 00:23:50.493 Write completed with error (sct=0, sc=8) 00:23:50.493 starting I/O failed 00:23:50.493 Write completed with error (sct=0, sc=8) 00:23:50.493 starting I/O failed 00:23:50.493 Write completed with error (sct=0, sc=8) 00:23:50.493 starting I/O failed 00:23:50.493 Write completed with error (sct=0, sc=8) 00:23:50.493 starting I/O failed 00:23:50.493 Write completed with error (sct=0, sc=8) 00:23:50.493 starting I/O failed 00:23:50.493 Write completed with error (sct=0, sc=8) 00:23:50.493 starting I/O failed 00:23:50.493 Read completed with error (sct=0, sc=8) 00:23:50.493 starting I/O failed 00:23:50.493 Read completed with error (sct=0, sc=8) 00:23:50.493 starting I/O failed 00:23:50.493 Read completed with error (sct=0, sc=8) 00:23:50.493 starting I/O failed 00:23:50.493 Write completed with error (sct=0, sc=8) 00:23:50.493 starting I/O failed 00:23:50.493 Read completed with error (sct=0, sc=8) 00:23:50.493 starting I/O failed 00:23:50.493 Write completed with error (sct=0, sc=8) 00:23:50.493 starting I/O failed 00:23:50.493 Write completed with error (sct=0, sc=8) 00:23:50.493 starting I/O failed 00:23:50.493 Read completed with error (sct=0, sc=8) 00:23:50.493 starting I/O failed 00:23:50.493 Write completed with error (sct=0, sc=8) 00:23:50.493 starting I/O failed 00:23:50.493 Write completed with error (sct=0, sc=8) 00:23:50.493 starting I/O failed 00:23:50.493 Read completed with error (sct=0, sc=8) 00:23:50.493 starting I/O failed 00:23:50.493 Write completed with error (sct=0, sc=8) 00:23:50.493 starting I/O failed 00:23:50.493 Read completed with error (sct=0, sc=8) 00:23:50.493 starting I/O failed 00:23:50.493 Read completed with error (sct=0, sc=8) 00:23:50.493 starting I/O failed 00:23:50.493 Write completed with error (sct=0, sc=8) 00:23:50.493 starting I/O failed 00:23:50.493 
Read completed with error (sct=0, sc=8) 00:23:50.493 starting I/O failed 00:23:50.493 Write completed with error (sct=0, sc=8) 00:23:50.493 starting I/O failed 00:23:50.493 Read completed with error (sct=0, sc=8) 00:23:50.493 starting I/O failed 00:23:50.493 Write completed with error (sct=0, sc=8) 00:23:50.493 starting I/O failed 00:23:50.493 Read completed with error (sct=0, sc=8) 00:23:50.493 starting I/O failed 00:23:50.493 Write completed with error (sct=0, sc=8) 00:23:50.493 starting I/O failed 00:23:50.493 Write completed with error (sct=0, sc=8) 00:23:50.493 starting I/O failed 00:23:50.493 Write completed with error (sct=0, sc=8) 00:23:50.493 starting I/O failed 00:23:50.493 [2024-05-15 11:47:21.124465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:50.493 [2024-05-15 11:47:21.125923] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:50.493 [2024-05-15 11:47:21.125941] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:50.493 [2024-05-15 11:47:21.125951] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:23:50.493 [2024-05-15 11:47:21.179447] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:50.493 [2024-05-15 11:47:21.179490] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:50.493 [2024-05-15 11:47:21.179500] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:50.493 [2024-05-15 11:47:21.179508] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:50.493 [2024-05-15 11:47:21.179516] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
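At this point the replacement target (pid 3129410) is still initializing, so each connect attempt from the host is refused at the RDMA CM layer: the host expects RDMA_CM_EVENT_ESTABLISHED and instead gets RDMA_CM_EVENT_REJECTED, which SPDK surfaces as connect error -74 (-EBADMSG) before retrying. The app_setup_trace notices also spell out how to inspect the new target while it runs; the lines below just restate them as commands:

  spdk_trace -s nvmf -i 0        # snapshot tracepoint events from the live target (instance 0)
  cp /dev/shm/nvmf_trace.0 ~/    # or keep the raw trace file for offline analysis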
00:23:50.493 [2024-05-15 11:47:21.179642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:23:50.493 [2024-05-15 11:47:21.179744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:23:50.493 [2024-05-15 11:47:21.179845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:50.493 [2024-05-15 11:47:21.179846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:23:51.431 11:47:21 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:51.431 11:47:21 -- common/autotest_common.sh@860 -- # return 0 00:23:51.431 11:47:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:51.431 11:47:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:51.431 11:47:21 -- common/autotest_common.sh@10 -- # set +x 00:23:51.431 11:47:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:51.431 11:47:21 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:51.431 11:47:21 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.431 11:47:21 -- common/autotest_common.sh@10 -- # set +x 00:23:51.431 Malloc0 00:23:51.431 11:47:21 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.431 11:47:21 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:23:51.431 11:47:21 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.431 11:47:21 -- common/autotest_common.sh@10 -- # set +x 00:23:51.431 [2024-05-15 11:47:21.923958] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2366ff0/0x2372c80) succeed. 00:23:51.431 [2024-05-15 11:47:21.934860] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2368630/0x23b4310) succeed. 
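The replacement target is now up: -m 0xF0 placed its four reactors on cores 4-7 (matching the reactor lines above, and keeping clear of the host example pinned to cores 0-3 via -c 0xF), -e 0xFFFF enabled every tracepoint group, and both mlx5 ports were registered as IB devices. The bdev and transport created through rpc_cmd can be reproduced against a standalone target with rpc.py; a sketch assuming the default /var/tmp/spdk.sock RPC socket:

  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0     # 64 MB ramdisk, 512-byte blocks
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024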
00:23:51.431 11:47:22 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.431 11:47:22 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:51.431 11:47:22 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.431 11:47:22 -- common/autotest_common.sh@10 -- # set +x 00:23:51.431 11:47:22 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.431 11:47:22 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:51.431 11:47:22 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.431 11:47:22 -- common/autotest_common.sh@10 -- # set +x 00:23:51.431 11:47:22 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.431 11:47:22 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:23:51.431 11:47:22 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.431 11:47:22 -- common/autotest_common.sh@10 -- # set +x 00:23:51.431 [2024-05-15 11:47:22.087807] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:51.431 [2024-05-15 11:47:22.088202] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:23:51.431 11:47:22 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.431 11:47:22 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:23:51.431 11:47:22 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.431 11:47:22 -- common/autotest_common.sh@10 -- # set +x 00:23:51.431 11:47:22 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.431 11:47:22 -- host/target_disconnect.sh@73 -- # wait 3128864 00:23:51.431 [2024-05-15 11:47:22.129815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:51.431 qpair failed and we were unable to recover it. 00:23:51.431 [2024-05-15 11:47:22.131252] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:51.431 [2024-05-15 11:47:22.131271] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:51.431 [2024-05-15 11:47:22.131280] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:23:52.808 [2024-05-15 11:47:23.135266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:52.808 qpair failed and we were unable to recover it. 
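With the subsystem in place the failover side is complete: cnode1 allows any host (-a), carries serial SPDK00000000000001, exports Malloc0 as a namespace, and listens, together with the discovery service, on the alternate address 192.168.100.9:4420 (the decode_rpc_listen_address warning only flags the deprecated [listen_]address.transport field, not a failure). The same four calls via rpc.py, again assuming the default RPC socket:

  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420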
00:23:52.808 [2024-05-15 11:47:23.136693] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:52.808 [2024-05-15 11:47:23.136715] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:52.808 [2024-05-15 11:47:23.136723] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:23:53.744 [2024-05-15 11:47:24.140569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:53.744 qpair failed and we were unable to recover it. 00:23:53.744 [2024-05-15 11:47:24.141966] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:53.744 [2024-05-15 11:47:24.141983] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:53.744 [2024-05-15 11:47:24.141991] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:23:54.682 [2024-05-15 11:47:25.145775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:54.682 qpair failed and we were unable to recover it. 00:23:54.682 [2024-05-15 11:47:25.147232] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:54.682 [2024-05-15 11:47:25.147248] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:54.682 [2024-05-15 11:47:25.147257] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:23:55.764 [2024-05-15 11:47:26.151109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:55.764 qpair failed and we were unable to recover it. 00:23:55.764 [2024-05-15 11:47:26.152411] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:55.764 [2024-05-15 11:47:26.152427] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:55.764 [2024-05-15 11:47:26.152435] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:23:56.716 [2024-05-15 11:47:27.156307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:56.716 qpair failed and we were unable to recover it. 00:23:56.716 [2024-05-15 11:47:27.157635] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:56.716 [2024-05-15 11:47:27.157652] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:56.716 [2024-05-15 11:47:27.157661] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:23:57.653 [2024-05-15 11:47:28.161419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:57.653 qpair failed and we were unable to recover it. 
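Each cycle above is one reconnect attempt, about a second apart: the qpair gets REJECTED because nothing is listening at the original 192.168.100.8 any more, SPDK reports connect error -74, and the example logs "qpair failed and we were unable to recover it" before trying again; the switch to the alternate address only happens once the controller itself is failed, further below. Outside this harness, a script that needs to wait for a listener rather than retry blindly could probe the discovery service with the kernel nvme-cli (an illustration, not part of this test):

  # poll until the failover listener accepts discovery connections
  until nvme discover -t rdma -a 192.168.100.9 -s 4420 >/dev/null 2>&1; do
      sleep 1
  done
  echo 'discovery at 192.168.100.9:4420 is reachable'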
00:23:58.591 Read completed with error (sct=0, sc=8) 00:23:58.591 starting I/O failed (this Read/Write completion failure repeats for all 32 outstanding I/Os on the qpair) 00:23:58.591 [2024-05-15 11:47:29.166391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:58.591 [2024-05-15 11:47:29.167662] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:58.591 [2024-05-15 11:47:29.167680] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:58.591 [2024-05-15 11:47:29.167689] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:23:59.527 [2024-05-15 11:47:30.171533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:59.527 qpair failed and we were unable to recover it. 00:23:59.527 [2024-05-15 11:47:30.172835] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:59.527 [2024-05-15 11:47:30.172852] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:59.527 [2024-05-15 11:47:30.172860] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:00.465 [2024-05-15 11:47:31.176821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.465 qpair failed and we were unable to recover it. 00:24:01.842 Read completed with error (sct=0, sc=8) 00:24:01.842 starting I/O failed (this Read/Write completion failure repeats for all 32 outstanding I/Os on the qpair)
00:24:01.842 [2024-05-15 11:47:32.181856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:01.842 [2024-05-15 11:47:32.181883] nvme_ctrlr.c:4341:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:24:01.842 A controller has encountered a failure and is being reset. 00:24:01.842 Resorting to new failover address 192.168.100.9 00:24:01.842 [2024-05-15 11:47:32.181934] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.842 [2024-05-15 11:47:32.181969] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:24:01.842 [2024-05-15 11:47:32.215201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:01.842 Controller properly reset. 00:24:01.842 Initializing NVMe Controllers 00:24:01.842 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:01.842 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:01.842 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:24:01.842 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:24:01.842 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:24:01.842 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:24:01.842 Initialization complete. Launching workers. 00:24:01.842 Starting thread on core 1 00:24:01.842 Starting thread on core 2 00:24:01.842 Starting thread on core 3 00:24:01.842 Starting thread on core 0 00:24:01.842 11:47:32 -- host/target_disconnect.sh@74 -- # sync 00:24:01.842 00:24:01.842 real 0m15.359s 00:24:01.842 user 0m52.563s 00:24:01.842 sys 0m5.096s 00:24:01.842 11:47:32 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:01.842 11:47:32 -- common/autotest_common.sh@10 -- # set +x 00:24:01.842 ************************************ 00:24:01.842 END TEST nvmf_target_disconnect_tc3 00:24:01.842 ************************************ 00:24:01.842 11:47:32 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:24:01.842 11:47:32 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:24:01.842 11:47:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:01.842 11:47:32 -- nvmf/common.sh@117 -- # sync 00:24:01.842 11:47:32 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:24:01.842 11:47:32 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:24:01.842 11:47:32 -- nvmf/common.sh@120 -- # set +e 00:24:01.842 11:47:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:01.842 11:47:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:24:01.842 rmmod nvme_rdma 00:24:01.842 rmmod nvme_fabrics 00:24:01.842 11:47:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:01.842 11:47:32 -- nvmf/common.sh@124 -- # set -e 00:24:01.842 11:47:32 -- nvmf/common.sh@125 -- # return 0 00:24:01.842 11:47:32 -- nvmf/common.sh@478 -- # '[' -n 3129410 ']' 00:24:01.842 11:47:32 -- nvmf/common.sh@479 -- # killprocess 3129410 00:24:01.842 11:47:32 -- common/autotest_common.sh@946 -- # '[' -z 3129410 ']' 00:24:01.842 11:47:32 -- common/autotest_common.sh@950 -- # kill -0 3129410 00:24:01.842 11:47:32 -- common/autotest_common.sh@951 -- # uname 00:24:01.842 
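This is the payoff of tc3: after the last I/O flood the Keep Alive submission itself fails, the controller is marked failed and reset, and the example resorts to the registered failover address 192.168.100.9, where the reset completes and all four worker threads relaunch, so the test passes in roughly 15 seconds of wall time (the TIMEWAIT_EXIT event in place of DISCONNECTED is just the dead connection being torn down). The module teardown traced above retries unloading up to 20 times; a paraphrase of that loop (the real helper lives in nvmf/common.sh and its exact retry handling may differ):

  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-rdma && break   # rmmod nvme_rdma / nvme_fabrics, as echoed above
      sleep 1
  done
  modprobe -v -r nvme-fabrics
  set -e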
11:47:32 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:01.842 11:47:32 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3129410 00:24:01.842 11:47:32 -- common/autotest_common.sh@952 -- # process_name=reactor_4 00:24:01.842 11:47:32 -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:24:01.842 11:47:32 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3129410' 00:24:01.842 killing process with pid 3129410 00:24:01.842 11:47:32 -- common/autotest_common.sh@965 -- # kill 3129410 00:24:01.842 [2024-05-15 11:47:32.453012] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:01.842 11:47:32 -- common/autotest_common.sh@970 -- # wait 3129410 00:24:01.842 [2024-05-15 11:47:32.546754] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:24:02.101 11:47:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:02.101 11:47:32 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:24:02.101 00:24:02.101 real 0m35.309s 00:24:02.101 user 2m7.806s 00:24:02.101 sys 0m13.378s 00:24:02.101 11:47:32 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:02.101 11:47:32 -- common/autotest_common.sh@10 -- # set +x 00:24:02.101 ************************************ 00:24:02.101 END TEST nvmf_target_disconnect 00:24:02.101 ************************************ 00:24:02.101 11:47:32 -- nvmf/nvmf.sh@124 -- # timing_exit host 00:24:02.101 11:47:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:02.101 11:47:32 -- common/autotest_common.sh@10 -- # set +x 00:24:02.360 11:47:32 -- nvmf/nvmf.sh@126 -- # trap - SIGINT SIGTERM EXIT 00:24:02.360 00:24:02.360 real 16m51.572s 00:24:02.360 user 43m19.035s 00:24:02.360 sys 4m48.490s 00:24:02.360 11:47:32 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:02.360 11:47:32 -- common/autotest_common.sh@10 -- # set +x 00:24:02.360 ************************************ 00:24:02.360 END TEST nvmf_rdma 00:24:02.360 ************************************ 00:24:02.360 11:47:32 -- spdk/autotest.sh@283 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:24:02.360 11:47:32 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:02.360 11:47:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:02.360 11:47:32 -- common/autotest_common.sh@10 -- # set +x 00:24:02.360 ************************************ 00:24:02.360 START TEST spdkcli_nvmf_rdma 00:24:02.360 ************************************ 00:24:02.360 11:47:32 -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:24:02.360 * Looking for test storage... 
00:24:02.360 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:24:02.360 11:47:33 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:24:02.360 11:47:33 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:24:02.360 11:47:33 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:24:02.360 11:47:33 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:02.360 11:47:33 -- nvmf/common.sh@7 -- # uname -s 00:24:02.360 11:47:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:02.360 11:47:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:02.360 11:47:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:02.360 11:47:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:02.360 11:47:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:02.360 11:47:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:02.360 11:47:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:02.361 11:47:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:02.361 11:47:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:02.361 11:47:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:02.361 11:47:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:24:02.361 11:47:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:24:02.361 11:47:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:02.361 11:47:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:02.361 11:47:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:02.361 11:47:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:02.361 11:47:33 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:02.361 11:47:33 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:02.361 11:47:33 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:02.361 11:47:33 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:02.361 11:47:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.361 11:47:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.361 11:47:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.361 11:47:33 -- paths/export.sh@5 -- # export PATH 00:24:02.361 11:47:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.361 11:47:33 -- nvmf/common.sh@47 -- # : 0 00:24:02.361 11:47:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:02.361 11:47:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:02.361 11:47:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:02.361 11:47:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:02.361 11:47:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:02.361 11:47:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:02.361 11:47:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:02.361 11:47:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:02.361 11:47:33 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:24:02.361 11:47:33 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:24:02.361 11:47:33 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:24:02.361 11:47:33 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:24:02.361 11:47:33 -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:02.361 11:47:33 -- common/autotest_common.sh@10 -- # set +x 00:24:02.361 11:47:33 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:24:02.618 11:47:33 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:24:02.618 11:47:33 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3130989 00:24:02.618 11:47:33 -- spdkcli/common.sh@34 -- # waitforlisten 3130989 00:24:02.618 11:47:33 -- common/autotest_common.sh@827 -- # '[' -z 3130989 ']' 00:24:02.618 11:47:33 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:02.618 11:47:33 -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:02.618 11:47:33 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:02.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:02.618 11:47:33 -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:02.618 11:47:33 -- common/autotest_common.sh@10 -- # set +x 00:24:02.618 [2024-05-15 11:47:33.154199] Starting SPDK v24.05-pre git sha1 913aa023f / DPDK 23.11.0 initialization... 
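The spdkcli test drives a fresh target (nvmf_tgt -m 0x3 -p 0: two reactors, with core 0 as the main core) through spdkcli_job.py, which feeds batches of configshell-style commands and checks each one against its expected output. The same tree can be inspected by hand; spdkcli.py talks to the default RPC socket and takes the command as arguments, exactly as the harness does later during the match stage:

  ./scripts/spdkcli.py ll /nvmf   # dump the /nvmf subtree the match file is compared against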
00:24:02.618 [2024-05-15 11:47:33.154259] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3130989 ] 00:24:02.618 EAL: No free 2048 kB hugepages reported on node 1 00:24:02.618 [2024-05-15 11:47:33.226865] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:02.618 [2024-05-15 11:47:33.317934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:02.618 [2024-05-15 11:47:33.317937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.553 11:47:33 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:03.553 11:47:33 -- common/autotest_common.sh@860 -- # return 0 00:24:03.553 11:47:33 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:24:03.553 11:47:33 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:03.553 11:47:33 -- common/autotest_common.sh@10 -- # set +x 00:24:03.553 11:47:34 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:24:03.553 11:47:34 -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:24:03.553 11:47:34 -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:24:03.553 11:47:34 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:24:03.553 11:47:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:03.553 11:47:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:03.553 11:47:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:03.553 11:47:34 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:03.553 11:47:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.553 11:47:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:03.553 11:47:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.553 11:47:34 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:03.553 11:47:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:03.554 11:47:34 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:03.554 11:47:34 -- common/autotest_common.sh@10 -- # set +x 00:24:10.123 11:47:40 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:10.123 11:47:40 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:10.123 11:47:40 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:10.123 11:47:40 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:10.123 11:47:40 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:10.123 11:47:40 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:10.123 11:47:40 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:10.123 11:47:40 -- nvmf/common.sh@295 -- # net_devs=() 00:24:10.123 11:47:40 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:10.123 11:47:40 -- nvmf/common.sh@296 -- # e810=() 00:24:10.123 11:47:40 -- nvmf/common.sh@296 -- # local -ga e810 00:24:10.123 11:47:40 -- nvmf/common.sh@297 -- # x722=() 00:24:10.123 11:47:40 -- nvmf/common.sh@297 -- # local -ga x722 00:24:10.123 11:47:40 -- nvmf/common.sh@298 -- # mlx=() 00:24:10.123 11:47:40 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:10.123 11:47:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:10.123 11:47:40 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:10.123 11:47:40 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:10.123 11:47:40 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:10.123 11:47:40 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:10.123 
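Because NET_TYPE=phy, nvmftestinit must find real RDMA-capable NICs rather than set up soft-RoCE: gather_supported_nvmf_pci_devs (entered above) walks known Intel and Mellanox PCI IDs, and the scan that follows matches both ports of a dual-port Mellanox adapter (vendor 0x15b3, device 0x1015, the ConnectX-4 Lx family) and maps each PCI function to its renamed netdev. Roughly the same check by hand, using the sysfs layout the helper reads:

  lspci -d 15b3:1015                           # -> 0000:18:00.0 and 0000:18:00.1 on this rig
  ls /sys/bus/pci/devices/0000:18:00.0/net/    # -> mlx_0_0, the interface the test configures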
11:47:40 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:10.123 11:47:40 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:10.123 11:47:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:10.123 11:47:40 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:10.123 11:47:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:10.123 11:47:40 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:10.123 11:47:40 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:10.123 11:47:40 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:24:10.123 11:47:40 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:24:10.123 11:47:40 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:24:10.123 11:47:40 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:24:10.123 11:47:40 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:24:10.123 11:47:40 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:10.123 11:47:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:10.123 11:47:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:24:10.123 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:24:10.123 11:47:40 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:10.123 11:47:40 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:10.123 11:47:40 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:10.123 11:47:40 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:10.123 11:47:40 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:10.123 11:47:40 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:10.123 11:47:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:10.123 11:47:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:24:10.123 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:24:10.123 11:47:40 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:10.123 11:47:40 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:10.123 11:47:40 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:10.123 11:47:40 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:10.123 11:47:40 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:10.123 11:47:40 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:10.123 11:47:40 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:10.123 11:47:40 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:24:10.123 11:47:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:10.123 11:47:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.123 11:47:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:10.123 11:47:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.123 11:47:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:24:10.123 Found net devices under 0000:18:00.0: mlx_0_0 00:24:10.123 11:47:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.123 11:47:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:10.123 11:47:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.123 11:47:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:10.123 11:47:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.123 11:47:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:24:10.123 Found net devices 
under 0000:18:00.1: mlx_0_1 00:24:10.123 11:47:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.123 11:47:40 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:10.123 11:47:40 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:10.123 11:47:40 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:10.123 11:47:40 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:24:10.123 11:47:40 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:24:10.123 11:47:40 -- nvmf/common.sh@409 -- # rdma_device_init 00:24:10.123 11:47:40 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:24:10.123 11:47:40 -- nvmf/common.sh@58 -- # uname 00:24:10.123 11:47:40 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:24:10.123 11:47:40 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:24:10.123 11:47:40 -- nvmf/common.sh@63 -- # modprobe ib_core 00:24:10.123 11:47:40 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:24:10.123 11:47:40 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:24:10.123 11:47:40 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:24:10.123 11:47:40 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:24:10.123 11:47:40 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:24:10.123 11:47:40 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:24:10.123 11:47:40 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:10.123 11:47:40 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:24:10.123 11:47:40 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:10.123 11:47:40 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:10.123 11:47:40 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:10.123 11:47:40 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:10.124 11:47:40 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:10.124 11:47:40 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:10.124 11:47:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:10.124 11:47:40 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:10.124 11:47:40 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:10.124 11:47:40 -- nvmf/common.sh@105 -- # continue 2 00:24:10.124 11:47:40 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:10.124 11:47:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:10.124 11:47:40 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:10.124 11:47:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:10.124 11:47:40 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:10.124 11:47:40 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:10.124 11:47:40 -- nvmf/common.sh@105 -- # continue 2 00:24:10.124 11:47:40 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:10.124 11:47:40 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:24:10.124 11:47:40 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:10.124 11:47:40 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:10.124 11:47:40 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:10.124 11:47:40 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:10.124 11:47:40 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:24:10.124 11:47:40 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:24:10.124 11:47:40 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:24:10.124 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:10.124 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:24:10.124 altname enp24s0f0np0 00:24:10.124 altname ens785f0np0 00:24:10.124 inet 
192.168.100.8/24 scope global mlx_0_0 00:24:10.124 valid_lft forever preferred_lft forever 00:24:10.124 11:47:40 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:10.124 11:47:40 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:24:10.124 11:47:40 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:10.124 11:47:40 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:10.124 11:47:40 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:10.124 11:47:40 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:10.124 11:47:40 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:24:10.124 11:47:40 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:24:10.124 11:47:40 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:24:10.124 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:10.124 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:24:10.124 altname enp24s0f1np1 00:24:10.124 altname ens785f1np1 00:24:10.124 inet 192.168.100.9/24 scope global mlx_0_1 00:24:10.124 valid_lft forever preferred_lft forever 00:24:10.124 11:47:40 -- nvmf/common.sh@411 -- # return 0 00:24:10.124 11:47:40 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:10.124 11:47:40 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:10.124 11:47:40 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:24:10.124 11:47:40 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:24:10.124 11:47:40 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:24:10.124 11:47:40 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:10.124 11:47:40 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:10.124 11:47:40 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:10.124 11:47:40 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:10.124 11:47:40 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:10.124 11:47:40 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:10.124 11:47:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:10.124 11:47:40 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:10.124 11:47:40 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:10.124 11:47:40 -- nvmf/common.sh@105 -- # continue 2 00:24:10.124 11:47:40 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:10.124 11:47:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:10.124 11:47:40 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:10.124 11:47:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:10.124 11:47:40 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:10.124 11:47:40 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:10.124 11:47:40 -- nvmf/common.sh@105 -- # continue 2 00:24:10.124 11:47:40 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:10.124 11:47:40 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:24:10.124 11:47:40 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:10.124 11:47:40 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:10.124 11:47:40 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:10.124 11:47:40 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:10.124 11:47:40 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:10.124 11:47:40 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:24:10.124 11:47:40 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:10.124 11:47:40 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:10.124 11:47:40 -- nvmf/common.sh@113 -- # 
awk '{print $4}' 00:24:10.124 11:47:40 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:10.124 11:47:40 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:24:10.124 192.168.100.9' 00:24:10.124 11:47:40 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:10.124 192.168.100.9' 00:24:10.124 11:47:40 -- nvmf/common.sh@446 -- # head -n 1 00:24:10.124 11:47:40 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:10.124 11:47:40 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:24:10.124 192.168.100.9' 00:24:10.124 11:47:40 -- nvmf/common.sh@447 -- # tail -n +2 00:24:10.124 11:47:40 -- nvmf/common.sh@447 -- # head -n 1 00:24:10.124 11:47:40 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:10.124 11:47:40 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:24:10.124 11:47:40 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:10.124 11:47:40 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:24:10.124 11:47:40 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:24:10.124 11:47:40 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:24:10.124 11:47:40 -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:24:10.124 11:47:40 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:24:10.124 11:47:40 -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:10.124 11:47:40 -- common/autotest_common.sh@10 -- # set +x 00:24:10.124 11:47:40 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:24:10.124 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:24:10.124 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:24:10.124 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:24:10.124 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:24:10.124 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:24:10.124 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:24:10.125 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:10.125 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:24:10.125 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:24:10.125 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:24:10.125 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:10.125 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:24:10.125 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:24:10.125 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:10.125 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:24:10.125 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:24:10.125 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True
00:24:10.125 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:24:10.125 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:24:10.125 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\''
00:24:10.125 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True
00:24:10.125 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True
00:24:10.125 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True
00:24:10.125 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:24:10.125 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True
00:24:10.125 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True
00:24:10.125 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\''
00:24:10.125 '
00:24:12.682 [2024-05-15 11:47:43.019359] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ffcbb0/0x1e83940) succeed.
00:24:12.682 [2024-05-15 11:47:43.031630] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ffe100/0x1f6ea00) succeed.
00:24:13.618 [2024-05-15 11:47:44.291388] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:24:13.618 [2024-05-15 11:47:44.291805] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 ***
00:24:16.152 [2024-05-15 11:47:46.518766] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 ***
00:24:18.058 [2024-05-15 11:47:48.433095] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 ***
00:24:19.437 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True]
00:24:19.437 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True]
00:24:19.437 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True]
00:24:19.437 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True]
00:24:19.437 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True]
00:24:19.437 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True]
00:24:19.437 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True]
00:24:19.437 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True]
00:24:19.437 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True]
00:24:19.437 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True]
00:24:19.437 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True]
00:24:19.437 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:24:19.437 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True]
00:24:19.437 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True]
00:24:19.437 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:24:19.437 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True]
00:24:19.437 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True]
00:24:19.437 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True]
00:24:19.437 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True]
00:24:19.437 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:24:19.437 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False]
00:24:19.437 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True]
00:24:19.437 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True]
00:24:19.437 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True]
00:24:19.437 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:24:19.437 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True]
00:24:19.437 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True]
00:24:19.437 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False]
00:24:19.437 11:47:50 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config
00:24:19.437 11:47:50 -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:19.437 11:47:50 -- common/autotest_common.sh@10 -- # set +x
00:24:19.437 11:47:50 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match
00:24:19.437 11:47:50 -- common/autotest_common.sh@720 -- # xtrace_disable
00:24:19.437 11:47:50 -- common/autotest_common.sh@10 -- # set +x
00:24:19.437 11:47:50 -- spdkcli/nvmf.sh@69 -- # check_match
00:24:19.437 11:47:50 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf
00:24:19.696 11:47:50 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match
00:24:19.955 11:47:50 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test
00:24:19.955 11:47:50 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match
00:24:19.955 11:47:50 -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:19.955 11:47:50 -- common/autotest_common.sh@10 -- # set +x
00:24:19.955 11:47:50 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config
00:24:19.955 11:47:50 -- common/autotest_common.sh@720 -- # xtrace_disable
00:24:19.955 11:47:50 -- common/autotest_common.sh@10 -- # set +x
00:24:19.955 11:47:50 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\''
00:24:19.955 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\''
00:24:19.955 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:24:19.955 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\''
00:24:19.955 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\''
00:24:19.955 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\''
00:24:19.955 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\''
00:24:19.955 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:24:19.955 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\''
00:24:19.955 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\''
00:24:19.955 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\''
00:24:19.955 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\''
00:24:19.955 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\''
00:24:19.955 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\''
00:24:19.955 '
00:24:25.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False]
00:24:25.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False]
00:24:25.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False]
00:24:25.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False]
00:24:25.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False]
00:24:25.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False]
00:24:25.232 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False]
00:24:25.232 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False]
00:24:25.232 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False]
00:24:25.232 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False]
00:24:25.232 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False]
00:24:25.232 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False]
00:24:25.232 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False]
00:24:25.232 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False]
00:24:25.232 11:47:55 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config
00:24:25.232 11:47:55 -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:25.232 11:47:55 -- common/autotest_common.sh@10 -- # set +x
00:24:25.232 11:47:55 -- spdkcli/nvmf.sh@90 -- # killprocess 3130989
00:24:25.232 11:47:55 -- common/autotest_common.sh@946 -- # '[' -z 3130989 ']'
00:24:25.232 11:47:55 -- common/autotest_common.sh@950 -- # kill -0 3130989
00:24:25.232 11:47:55 -- common/autotest_common.sh@951 -- # uname
00:24:25.232 11:47:55 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:24:25.232 11:47:55 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3130989
00:24:25.232 11:47:55 -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:24:25.232 11:47:55 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:24:25.232 11:47:55 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3130989'
00:24:25.232 killing process with pid 3130989
00:24:25.232 11:47:55 -- common/autotest_common.sh@965 -- # kill 3130989
00:24:25.232 [2024-05-15 11:47:55.775520] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:24:25.232 11:47:55 -- common/autotest_common.sh@970 -- # wait 3130989
00:24:25.232 [2024-05-15 11:47:55.831168] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048
00:24:25.492 11:47:56 -- spdkcli/nvmf.sh@1 -- # nvmftestfini
00:24:25.492 11:47:56 -- nvmf/common.sh@477 -- # nvmfcleanup
00:24:25.492 11:47:56 -- nvmf/common.sh@117 -- # sync
00:24:25.492 11:47:56 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:24:25.492 11:47:56 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:24:25.492 11:47:56 -- nvmf/common.sh@120 -- # set +e
00:24:25.492 11:47:56 -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:25.492 11:47:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:24:25.492 rmmod nvme_rdma
00:24:25.492 rmmod nvme_fabrics
00:24:25.492 11:47:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:25.492 11:47:56 -- nvmf/common.sh@124 -- # set -e
00:24:25.492 11:47:56 -- nvmf/common.sh@125 -- # return 0
00:24:25.492 11:47:56 -- nvmf/common.sh@478 -- # '[' -n '' ']'
00:24:25.492 11:47:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:24:25.492 11:47:56 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]]
00:24:25.492
00:24:25.492 real 0m23.145s
00:24:25.492 user 0m49.369s
00:24:25.492 sys 0m6.029s
00:24:25.492 11:47:56 -- common/autotest_common.sh@1122 -- # xtrace_disable
00:24:25.492 11:47:56 -- common/autotest_common.sh@10 -- # set +x
00:24:25.492 ************************************
00:24:25.492 END TEST spdkcli_nvmf_rdma
00:24:25.492 ************************************
00:24:25.492 11:47:56 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']'
00:24:25.492 11:47:56 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']'
00:24:25.492 11:47:56 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']'
00:24:25.492 11:47:56 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:24:25.492 11:47:56 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']'
00:24:25.492 11:47:56 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:24:25.492 11:47:56 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']'
00:24:25.492 11:47:56 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']'
00:24:25.492 11:47:56 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']'
00:24:25.492 11:47:56 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:24:25.492 11:47:56 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']'
00:24:25.492 11:47:56 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]]
00:24:25.492 11:47:56 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]]
00:24:25.492 11:47:56 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]]
00:24:25.492 11:47:56 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]]
00:24:25.492 11:47:56 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT
00:24:25.492 11:47:56 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup
00:24:25.492 11:47:56 -- common/autotest_common.sh@720 -- # xtrace_disable
00:24:25.492 11:47:56 -- common/autotest_common.sh@10 -- # set +x
00:24:25.492 11:47:56 -- spdk/autotest.sh@381 -- # autotest_cleanup
00:24:25.492 11:47:56 -- common/autotest_common.sh@1388 -- # local autotest_es=0
00:24:25.492 11:47:56 -- common/autotest_common.sh@1389 -- # xtrace_disable
00:24:25.492 11:47:56 -- common/autotest_common.sh@10 -- # set +x
00:24:29.759 INFO: APP EXITING
00:24:29.759 INFO: killing all VMs
00:24:29.759 INFO: killing vhost app
00:24:29.759 INFO: EXIT DONE
00:24:32.293 Waiting for block devices as requested
00:24:32.551 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme
00:24:32.551 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:24:32.551 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:24:32.810 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:24:32.810 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:24:32.810 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:24:33.069 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:24:33.069 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:24:33.069 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:24:33.069 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:24:33.329 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:24:33.329 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:24:33.329 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:24:33.588 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:24:33.588 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:24:33.588 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:24:33.847 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:24:37.135 Cleaning
00:24:37.135 Removing: /var/run/dpdk/spdk0/config
00:24:37.135 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:24:37.135 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:24:37.135 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:24:37.135 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:24:37.135 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:24:37.135 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:24:37.135 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:24:37.135 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:24:37.135 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:24:37.135 Removing: /var/run/dpdk/spdk0/hugepage_info
00:24:37.135 Removing: /var/run/dpdk/spdk1/config
00:24:37.135 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:24:37.135 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:24:37.135 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:24:37.135 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:24:37.135 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:24:37.135 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:24:37.135 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:24:37.135 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:24:37.135 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:24:37.135 Removing: /var/run/dpdk/spdk1/hugepage_info
00:24:37.135 Removing: /var/run/dpdk/spdk1/mp_socket
00:24:37.135 Removing: /var/run/dpdk/spdk2/config
00:24:37.135 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:24:37.135 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:24:37.135 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:24:37.135 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:24:37.135 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:24:37.135 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:24:37.135 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:24:37.135 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:24:37.135 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:24:37.135 Removing: /var/run/dpdk/spdk2/hugepage_info
00:24:37.135 Removing: /var/run/dpdk/spdk3/config
00:24:37.135 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:24:37.135 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:24:37.135 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:24:37.135 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:24:37.135 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:24:37.135 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:24:37.135 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:24:37.135 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:24:37.135 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:24:37.135 Removing: /var/run/dpdk/spdk3/hugepage_info
00:24:37.135 Removing: /var/run/dpdk/spdk4/config
00:24:37.135 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:24:37.135 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:24:37.135 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:24:37.135 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:24:37.135 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:24:37.135 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:24:37.135 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:24:37.135 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:24:37.135 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:24:37.135 Removing: /var/run/dpdk/spdk4/hugepage_info
00:24:37.135 Removing: /dev/shm/bdevperf_trace.pid2979220
00:24:37.135 Removing: /dev/shm/bdevperf_trace.pid3060540
00:24:37.135 Removing: /dev/shm/bdev_svc_trace.1
00:24:37.135 Removing: /dev/shm/nvmf_trace.0
00:24:37.135 Removing: /dev/shm/spdk_tgt_trace.pid2883807
00:24:37.135 Removing: /var/run/dpdk/spdk0
00:24:37.135 Removing: /var/run/dpdk/spdk1
00:24:37.135 Removing: /var/run/dpdk/spdk2
00:24:37.135 Removing: /var/run/dpdk/spdk3
00:24:37.135 Removing: /var/run/dpdk/spdk4
00:24:37.135 Removing: /var/run/dpdk/spdk_pid2880501
00:24:37.135 Removing: /var/run/dpdk/spdk_pid2882036
00:24:37.135 Removing: /var/run/dpdk/spdk_pid2883807
00:24:37.135 Removing: /var/run/dpdk/spdk_pid2884353
00:24:37.135 Removing: /var/run/dpdk/spdk_pid2885108
00:24:37.135 Removing: /var/run/dpdk/spdk_pid2885303
00:24:37.135 Removing: /var/run/dpdk/spdk_pid2886081
00:24:37.135 Removing: /var/run/dpdk/spdk_pid2886262
00:24:37.135 Removing: /var/run/dpdk/spdk_pid2886550
00:24:37.135 Removing: /var/run/dpdk/spdk_pid2891335
00:24:37.135 Removing: /var/run/dpdk/spdk_pid2893106
00:24:37.135 Removing: /var/run/dpdk/spdk_pid2893336
00:24:37.135 Removing: /var/run/dpdk/spdk_pid2893705
00:24:37.135 Removing: /var/run/dpdk/spdk_pid2894004
00:24:37.135 Removing: /var/run/dpdk/spdk_pid2894246
00:24:37.135 Removing: /var/run/dpdk/spdk_pid2894456
00:24:37.135 Removing: /var/run/dpdk/spdk_pid2894656
00:24:37.135 Removing: /var/run/dpdk/spdk_pid2894883
00:24:37.135 Removing: /var/run/dpdk/spdk_pid2895660
00:24:37.135 Removing: /var/run/dpdk/spdk_pid2898078
00:24:37.135 Removing: /var/run/dpdk/spdk_pid2898383
00:24:37.135 Removing: /var/run/dpdk/spdk_pid2898679
00:24:37.135 Removing: /var/run/dpdk/spdk_pid2898699
00:24:37.135 Removing: /var/run/dpdk/spdk_pid2899096
00:24:37.135 Removing: /var/run/dpdk/spdk_pid2899279
00:24:37.135 Removing: /var/run/dpdk/spdk_pid2899684
00:24:37.135 Removing: /var/run/dpdk/spdk_pid2899867
00:24:37.135 Removing: /var/run/dpdk/spdk_pid2900075
00:24:37.135 Removing: /var/run/dpdk/spdk_pid2900261
00:24:37.135 Removing: /var/run/dpdk/spdk_pid2900469
00:24:37.135 Removing: /var/run/dpdk/spdk_pid2900490
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2901021
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2901277
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2901593
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2902089
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2902300
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2902370
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2902571
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2902805
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2903061
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2903321
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2903548
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2903753
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2903958
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2904161
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2904366
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2904569
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2904770
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2904972
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2905178
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2905422
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2905697
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2905951
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2906162
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2906362
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2906570
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2906768
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2906929
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2907267
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2910674
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2946813
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2950456
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2958717
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2963185
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2966204
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2966792
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2979220
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2979546
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2983036
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2987808
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2990546
00:24:37.136 Removing: /var/run/dpdk/spdk_pid2999044
00:24:37.136 Removing: /var/run/dpdk/spdk_pid3019676
00:24:37.136 Removing: /var/run/dpdk/spdk_pid3022859
00:24:37.136 Removing: /var/run/dpdk/spdk_pid3035834
00:24:37.136 Removing: /var/run/dpdk/spdk_pid3058368
00:24:37.136 Removing: /var/run/dpdk/spdk_pid3059635
00:24:37.136 Removing: /var/run/dpdk/spdk_pid3060540
00:24:37.136 Removing: /var/run/dpdk/spdk_pid3064114
00:24:37.136 Removing: /var/run/dpdk/spdk_pid3070146
00:24:37.136 Removing: /var/run/dpdk/spdk_pid3070904
00:24:37.136 Removing: /var/run/dpdk/spdk_pid3071638
00:24:37.136 Removing: /var/run/dpdk/spdk_pid3072410
00:24:37.136 Removing: /var/run/dpdk/spdk_pid3072682
00:24:37.136 Removing: /var/run/dpdk/spdk_pid3076503
00:24:37.136 Removing: /var/run/dpdk/spdk_pid3076506
00:24:37.136 Removing: /var/run/dpdk/spdk_pid3080356
00:24:37.136 Removing: /var/run/dpdk/spdk_pid3080723
00:24:37.136 Removing: /var/run/dpdk/spdk_pid3081180
00:24:37.136 Removing: /var/run/dpdk/spdk_pid3081803
00:24:37.136 Removing: /var/run/dpdk/spdk_pid3081815
00:24:37.136 Removing: /var/run/dpdk/spdk_pid3086372
00:24:37.136 Removing: /var/run/dpdk/spdk_pid3086817
00:24:37.136 Removing: /var/run/dpdk/spdk_pid3090374
00:24:37.136 Removing: /var/run/dpdk/spdk_pid3092552
00:24:37.136 Removing: /var/run/dpdk/spdk_pid3097355
00:24:37.136 Removing: /var/run/dpdk/spdk_pid3106010
00:24:37.136 Removing: /var/run/dpdk/spdk_pid3106071
00:24:37.136 Removing: /var/run/dpdk/spdk_pid3121870
00:24:37.136 Removing: /var/run/dpdk/spdk_pid3122086
00:24:37.136 Removing: /var/run/dpdk/spdk_pid3126964
00:24:37.136 Removing: /var/run/dpdk/spdk_pid3127368
00:24:37.136 Removing: /var/run/dpdk/spdk_pid3128864
00:24:37.136 Removing: /var/run/dpdk/spdk_pid3130989
00:24:37.136 Clean
00:24:37.395 11:48:07 -- common/autotest_common.sh@1447 -- # return 0
00:24:37.395 11:48:07 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup
00:24:37.395 11:48:07 -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:37.395 11:48:07 -- common/autotest_common.sh@10 -- # set +x
00:24:37.395 11:48:08 -- spdk/autotest.sh@384 -- # timing_exit autotest
00:24:37.395 11:48:08 -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:37.395 11:48:08 -- common/autotest_common.sh@10 -- # set +x
00:24:37.395 11:48:08 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt
00:24:37.395 11:48:08 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]]
00:24:37.395 11:48:08 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log
00:24:37.395 11:48:08 -- spdk/autotest.sh@389 -- # hash lcov
00:24:37.395 11:48:08 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:24:37.395 11:48:08 -- spdk/autotest.sh@391 -- # hostname
00:24:37.395 11:48:08 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-43 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info
00:24:37.654 geninfo: WARNING: invalid characters removed from testname!
00:24:59.596 11:48:27 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:24:59.596 11:48:29 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:25:00.975 11:48:31 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:25:02.880 11:48:33 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:25:04.259 11:48:34 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:25:06.165 11:48:36 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:25:07.544 11:48:38 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:25:07.544 11:48:38 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:25:07.544 11:48:38 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:25:07.544 11:48:38 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:07.544 11:48:38 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:07.545 11:48:38 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:07.545 11:48:38 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:07.545 11:48:38 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:07.545 11:48:38 -- paths/export.sh@5 -- $ export PATH
00:25:07.545 11:48:38 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:07.545 11:48:38 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
00:25:07.545 11:48:38 -- common/autobuild_common.sh@437 -- $ date +%s
00:25:07.545 11:48:38 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715766518.XXXXXX
00:25:07.545 11:48:38 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715766518.z85lqm
00:25:07.545 11:48:38 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:25:07.545 11:48:38 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']'
00:25:07.545 11:48:38 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/'
00:25:07.545 11:48:38 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp'
00:25:07.545 11:48:38 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:25:07.545 11:48:38 -- common/autobuild_common.sh@453 -- $ get_config_params
00:25:07.545 11:48:38 -- common/autotest_common.sh@395 -- $ xtrace_disable
00:25:07.545 11:48:38 -- common/autotest_common.sh@10 -- $ set +x
00:25:07.545 11:48:38 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:25:07.545 11:48:38 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:25:07.545 11:48:38 -- pm/common@17 -- $ local monitor
00:25:07.545 11:48:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:25:07.545 11:48:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:25:07.545 11:48:38 -- pm/common@21 -- $ date +%s
00:25:07.545 11:48:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:25:07.545 11:48:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:25:07.545 11:48:38 -- pm/common@21 -- $ date +%s
00:25:07.545 11:48:38 -- pm/common@25 -- $ sleep 1
00:25:07.545 11:48:38 -- pm/common@21 -- $ date +%s
00:25:07.545 11:48:38 -- pm/common@21 -- $ date +%s
00:25:07.545 11:48:38 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715766518
00:25:07.545 11:48:38 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715766518
00:25:07.545 11:48:38 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715766518
00:25:07.545 11:48:38 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715766518
00:25:07.545 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715766518_collect-vmstat.pm.log
00:25:07.545 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715766518_collect-cpu-load.pm.log
00:25:07.545 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715766518_collect-cpu-temp.pm.log
00:25:07.545 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715766518_collect-bmc-pm.bmc.pm.log
00:25:08.483 11:48:39 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:25:08.483 11:48:39 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j72
00:25:08.483 11:48:39 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:25:08.483 11:48:39 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:25:08.483 11:48:39 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:25:08.483 11:48:39 -- spdk/autopackage.sh@19 -- $ timing_finish
00:25:08.483 11:48:39 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:25:08.483 11:48:39 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:25:08.483 11:48:39 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt
00:25:08.483 11:48:39 -- spdk/autopackage.sh@20 -- $ exit 0
00:25:08.483 11:48:39 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:25:08.483 11:48:39 -- pm/common@29 -- $ signal_monitor_resources TERM
00:25:08.483 11:48:39 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:25:08.483 11:48:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:25:08.483 11:48:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:25:08.483 11:48:39 -- pm/common@44 -- $ pid=3144300
00:25:08.483 11:48:39 -- pm/common@50 -- $ kill -TERM 3144300
00:25:08.483 11:48:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:25:08.483 11:48:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:25:08.483 11:48:39 -- pm/common@44 -- $ pid=3144302
00:25:08.483 11:48:39 -- pm/common@50 -- $ kill -TERM 3144302
00:25:08.483 11:48:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:25:08.483 11:48:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:25:08.483 11:48:39 -- pm/common@44 -- $ pid=3144304
00:25:08.483 11:48:39 -- pm/common@50 -- $ kill -TERM 3144304
00:25:08.483 11:48:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:25:08.483 11:48:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:25:08.483 11:48:39 -- pm/common@44 -- $ pid=3144329
00:25:08.483 11:48:39 -- pm/common@50 -- $ sudo -E kill -TERM 3144329
00:25:08.742 + [[ -n 2782644 ]]
00:25:08.742 + sudo kill 2782644
00:25:08.751 [Pipeline] }
00:25:08.769 [Pipeline] // stage
00:25:08.774 [Pipeline] }
00:25:08.793 [Pipeline] // timeout
00:25:08.799 [Pipeline] }
00:25:08.837 [Pipeline] // catchError
00:25:08.842 [Pipeline] }
00:25:08.859 [Pipeline] // wrap
00:25:08.865 [Pipeline] }
00:25:08.880 [Pipeline] // catchError
00:25:08.888 [Pipeline] stage
00:25:08.890 [Pipeline] { (Epilogue)
00:25:08.904 [Pipeline] catchError
00:25:08.905 [Pipeline] {
00:25:08.920 [Pipeline] echo
00:25:08.921 Cleanup processes
00:25:08.926 [Pipeline] sh
00:25:09.211 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:25:09.211 3144413 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/sdr.cache
00:25:09.211 3144627 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:25:09.226 [Pipeline] sh
00:25:09.511 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:25:09.511 ++ grep -v 'sudo pgrep'
00:25:09.511 ++ awk '{print $1}'
00:25:09.511 + sudo kill -9 3144413
00:25:09.524 [Pipeline] sh
00:25:09.807 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:25:16.388 [Pipeline] sh
00:25:16.742 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:25:16.742 Artifacts sizes are good
00:25:16.756 [Pipeline] archiveArtifacts
00:25:16.763 Archiving artifacts
00:25:16.887 [Pipeline] sh
00:25:17.176 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-phy-autotest
00:25:17.191 [Pipeline] cleanWs
00:25:17.200 [WS-CLEANUP] Deleting project workspace...
00:25:17.200 [WS-CLEANUP] Deferred wipeout is used...
00:25:17.206 [WS-CLEANUP] done
00:25:17.208 [Pipeline] }
00:25:17.229 [Pipeline] // catchError
00:25:17.240 [Pipeline] sh
00:25:17.522 + logger -p user.info -t JENKINS-CI
00:25:17.530 [Pipeline] }
00:25:17.546 [Pipeline] // stage
00:25:17.552 [Pipeline] }
00:25:17.568 [Pipeline] // node
00:25:17.573 [Pipeline] End of Pipeline
00:25:17.603 Finished: SUCCESS