00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v23.11" build number 620
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3286
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.082 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.083 The recommended git tool is: git
00:00:00.083 using credential 00000000-0000-0000-0000-000000000002
00:00:00.085 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.121 Fetching changes from the remote Git repository
00:00:00.125 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.153 Using shallow fetch with depth 1
00:00:00.153 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.153 > git --version # timeout=10
00:00:00.190 > git --version # 'git version 2.39.2'
00:00:00.190 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.217 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.217 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.858 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.871 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.881 Checking out Revision 1c6ed56008363df82da0fcec030d6d5a1f7bd340 (FETCH_HEAD)
00:00:04.882 > git config core.sparsecheckout # timeout=10
00:00:04.891 > git read-tree -mu HEAD # timeout=10
00:00:04.905 > git checkout -f 1c6ed56008363df82da0fcec030d6d5a1f7bd340 # timeout=5
00:00:04.924 Commit message: "spdk-abi-per-patch: pass revision to subbuild"
00:00:04.924 > git rev-list --no-walk 1c6ed56008363df82da0fcec030d6d5a1f7bd340 # timeout=10
00:00:05.018 [Pipeline] Start of Pipeline
00:00:05.029 [Pipeline] library
00:00:05.030 Loading library shm_lib@master
00:00:05.030 Library shm_lib@master is cached. Copying from home.
00:00:05.046 [Pipeline] node
00:00:05.057 Running on WFP21 in /var/jenkins/workspace/nvmf-phy-autotest
00:00:05.059 [Pipeline] {
00:00:05.070 [Pipeline] catchError
00:00:05.072 [Pipeline] {
00:00:05.087 [Pipeline] wrap
00:00:05.098 [Pipeline] {
00:00:05.106 [Pipeline] stage
00:00:05.108 [Pipeline] { (Prologue)
00:00:05.336 [Pipeline] sh
00:00:05.616 + logger -p user.info -t JENKINS-CI
00:00:05.636 [Pipeline] echo
00:00:05.638 Node: WFP21
00:00:05.645 [Pipeline] sh
00:00:05.943 [Pipeline] setCustomBuildProperty
00:00:05.953 [Pipeline] echo
00:00:05.955 Cleanup processes
00:00:05.960 [Pipeline] sh
00:00:06.257 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:06.257 2042394 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:06.269 [Pipeline] sh
00:00:06.549 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:06.549 ++ grep -v 'sudo pgrep'
00:00:06.549 ++ awk '{print $1}'
00:00:06.549 + sudo kill -9
00:00:06.549 + true
00:00:06.563 [Pipeline] cleanWs
00:00:06.573 [WS-CLEANUP] Deleting project workspace...
00:00:06.573 [WS-CLEANUP] Deferred wipeout is used...
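
Before wiping the workspace, the "Cleanup processes" step above chains pgrep, grep, and awk to reap anything a previous run left behind. A minimal standalone sketch of that idiom, assuming the workspace path seen in this log (an illustration, not the pipeline's actual script):

#!/usr/bin/env bash
# Reap leftover processes that still reference the job workspace.
ws=/var/jenkins/workspace/nvmf-phy-autotest/spdk
# 'pgrep -af PATTERN' prints "PID full-command-line" for every match against the
# full command line; drop the pgrep invocation itself and keep only the PID column.
pids=$(sudo pgrep -af "$ws" | grep -v 'sudo pgrep' | awk '{print $1}')
# When nothing is left over, kill(1) gets no arguments and exits non-zero; the
# trailing '|| true' keeps the step green, which is the '+ true' seen in the trace.
sudo kill -9 $pids || true
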
00:00:06.579 [WS-CLEANUP] done
00:00:06.583 [Pipeline] setCustomBuildProperty
00:00:06.596 [Pipeline] sh
00:00:06.874 + sudo git config --global --replace-all safe.directory '*'
00:00:06.936 [Pipeline] httpRequest
00:00:06.955 [Pipeline] echo
00:00:06.956 Sorcerer 10.211.164.101 is alive
00:00:06.963 [Pipeline] httpRequest
00:00:06.967 HttpMethod: GET
00:00:06.967 URL: http://10.211.164.101/packages/jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz
00:00:06.968 Sending request to url: http://10.211.164.101/packages/jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz
00:00:06.969 Response Code: HTTP/1.1 200 OK
00:00:06.970 Success: Status code 200 is in the accepted range: 200,404
00:00:06.970 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz
00:00:07.747 [Pipeline] sh
00:00:08.029 + tar --no-same-owner -xf jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz
00:00:08.046 [Pipeline] httpRequest
00:00:08.071 [Pipeline] echo
00:00:08.073 Sorcerer 10.211.164.101 is alive
00:00:08.081 [Pipeline] httpRequest
00:00:08.086 HttpMethod: GET
00:00:08.086 URL: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz
00:00:08.087 Sending request to url: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz
00:00:08.088 Response Code: HTTP/1.1 200 OK
00:00:08.089 Success: Status code 200 is in the accepted range: 200,404
00:00:08.089 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz
00:00:21.983 [Pipeline] sh
00:00:22.266 + tar --no-same-owner -xf spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz
00:00:24.814 [Pipeline] sh
00:00:25.121 + git -C spdk log --oneline -n5
00:00:25.121 4b94202c6 lib/event: Bug fix for framework_set_scheduler
00:00:25.121 507e9ba07 nvme: add lock_depth for ctrlr_lock
00:00:25.121 62fda7b5f nvme: check pthread_mutex_destroy() return value
00:00:25.121 e03c164a1 nvme: add nvme_ctrlr_lock
00:00:25.121 d61f89a86 nvme/cuse: Add ctrlr_lock for cuse register and unregister
00:00:25.138 [Pipeline] withCredentials
00:00:25.147 > git --version # timeout=10
00:00:25.160 > git --version # 'git version 2.39.2'
00:00:25.174 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:25.176 [Pipeline] {
00:00:25.186 [Pipeline] retry
00:00:25.188 [Pipeline] {
00:00:25.205 [Pipeline] sh
00:00:25.485 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:00:25.756 [Pipeline] }
00:00:25.780 [Pipeline] // retry
00:00:25.787 [Pipeline] }
00:00:25.808 [Pipeline] // withCredentials
00:00:25.818 [Pipeline] httpRequest
00:00:25.841 [Pipeline] echo
00:00:25.843 Sorcerer 10.211.164.101 is alive
00:00:25.852 [Pipeline] httpRequest
00:00:25.857 HttpMethod: GET
00:00:25.858 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:25.859 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:25.860 Response Code: HTTP/1.1 200 OK
00:00:25.860 Success: Status code 200 is in the accepted range: 200,404
00:00:25.861 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:32.512 [Pipeline] sh
00:00:32.790 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:34.172 [Pipeline] sh
00:00:34.453 + git -C dpdk log --oneline -n5
00:00:34.453 eeb0605f11 version: 23.11.0
00:00:34.453 238778122a doc: update release notes for 23.11
00:00:34.453 46aa6b3cfc doc: fix description of RSS features
00:00:34.453 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:00:34.453 7e421ae345 devtools: support skipping forbid rule check
00:00:34.464 [Pipeline] }
00:00:34.481 [Pipeline] // stage
00:00:34.489 [Pipeline] stage
00:00:34.491 [Pipeline] { (Prepare)
00:00:34.514 [Pipeline] writeFile
00:00:34.532 [Pipeline] sh
00:00:34.811 + logger -p user.info -t JENKINS-CI
00:00:34.823 [Pipeline] sh
00:00:35.101 + logger -p user.info -t JENKINS-CI
00:00:35.115 [Pipeline] sh
00:00:35.396 + cat autorun-spdk.conf
00:00:35.396 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:35.396 SPDK_TEST_NVMF=1
00:00:35.396 SPDK_TEST_NVME_CLI=1
00:00:35.396 SPDK_TEST_NVMF_NICS=mlx5
00:00:35.396 SPDK_RUN_UBSAN=1
00:00:35.396 NET_TYPE=phy
00:00:35.396 SPDK_TEST_NATIVE_DPDK=v23.11
00:00:35.396 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
00:00:35.402 RUN_NIGHTLY=1
00:00:35.412 [Pipeline] readFile
00:00:35.449 [Pipeline] withEnv
00:00:35.452 [Pipeline] {
00:00:35.469 [Pipeline] sh
00:00:35.752 + set -ex
00:00:35.752 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]]
00:00:35.752 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:00:35.752 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:35.752 ++ SPDK_TEST_NVMF=1
00:00:35.752 ++ SPDK_TEST_NVME_CLI=1
00:00:35.752 ++ SPDK_TEST_NVMF_NICS=mlx5
00:00:35.752 ++ SPDK_RUN_UBSAN=1
00:00:35.752 ++ NET_TYPE=phy
00:00:35.752 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:00:35.752 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
00:00:35.752 ++ RUN_NIGHTLY=1
00:00:35.752 + case $SPDK_TEST_NVMF_NICS in
00:00:35.752 + DRIVERS=mlx5_ib
00:00:35.752 + [[ -n mlx5_ib ]]
00:00:35.752 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:35.752 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:42.312 rmmod: ERROR: Module irdma is not currently loaded
00:00:42.312 rmmod: ERROR: Module i40iw is not currently loaded
00:00:42.312 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:42.312 + true
00:00:42.312 + for D in $DRIVERS
00:00:42.312 + sudo modprobe mlx5_ib
00:00:42.312 + exit 0
00:00:42.321 [Pipeline] }
00:00:42.339 [Pipeline] // withEnv
00:00:42.344 [Pipeline] }
00:00:42.356 [Pipeline] // stage
00:00:42.364 [Pipeline] catchError
00:00:42.365 [Pipeline] {
00:00:42.379 [Pipeline] timeout
00:00:42.379 Timeout set to expire in 1 hr 0 min
00:00:42.380 [Pipeline] {
00:00:42.393 [Pipeline] stage
00:00:42.395 [Pipeline] { (Tests)
00:00:42.412 [Pipeline] sh
00:00:42.692 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest
00:00:42.692 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest
00:00:42.692 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest
00:00:42.692 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]]
00:00:42.692 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:42.692 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output
00:00:42.692 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]]
00:00:42.692 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:00:42.692 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output
00:00:42.692 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:00:42.692 + [[ nvmf-phy-autotest == pkgdep-* ]]
00:00:42.692 + cd /var/jenkins/workspace/nvmf-phy-autotest
00:00:42.692 + source /etc/os-release
00:00:42.692 ++ NAME='Fedora Linux'
00:00:42.692 ++ VERSION='38 (Cloud Edition)'
00:00:42.692 ++ ID=fedora
00:00:42.692 ++ VERSION_ID=38
00:00:42.692 ++ VERSION_CODENAME=
00:00:42.692 ++ PLATFORM_ID=platform:f38
00:00:42.692 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:42.692 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:42.692 ++ LOGO=fedora-logo-icon
00:00:42.692 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:42.692 ++ HOME_URL=https://fedoraproject.org/
00:00:42.692 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:42.692 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:42.692 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:42.692 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:42.692 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:42.692 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:42.692 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:42.692 ++ SUPPORT_END=2024-05-14
00:00:42.692 ++ VARIANT='Cloud Edition'
00:00:42.692 ++ VARIANT_ID=cloud
00:00:42.692 + uname -a
00:00:42.692 Linux spdk-wfp-21 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:42.692 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status
00:00:46.880 Hugepages
00:00:46.880 node hugesize free / total
00:00:46.880 node0 1048576kB 0 / 0
00:00:46.880 node0 2048kB 0 / 0
00:00:46.880 node1 1048576kB 0 / 0
00:00:46.880 node1 2048kB 0 / 0
00:00:46.880
00:00:46.880 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:46.880 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:00:46.880 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:00:46.880 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:00:46.880 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:00:46.880 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:00:46.880 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:00:46.880 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:00:46.880 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:00:46.880 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:00:46.880 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:00:46.880 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:00:46.880 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:00:46.880 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:00:46.880 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:00:46.880 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:00:46.880 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:00:46.880 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:00:46.880 + rm -f /tmp/spdk-ld-path
00:00:46.880 + source autorun-spdk.conf
00:00:46.880 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:46.880 ++ SPDK_TEST_NVMF=1
00:00:46.880 ++ SPDK_TEST_NVME_CLI=1
00:00:46.880 ++ SPDK_TEST_NVMF_NICS=mlx5
00:00:46.880 ++ SPDK_RUN_UBSAN=1
00:00:46.880 ++ NET_TYPE=phy
00:00:46.880 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:00:46.880 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
00:00:46.880 ++ RUN_NIGHTLY=1
00:00:46.880 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:46.880 + [[ -n '' ]]
00:00:46.880 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk
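
The set -ex block earlier in this stage is the usual driver-reset pattern: guard and source the job config, map SPDK_TEST_NVMF_NICS to a kernel module list, unload any competing RDMA providers while tolerating "not currently loaded" errors, then load only the module under test. A condensed sketch of that flow under those assumptions (module names and paths are the ones traced above; this is not SPDK's verbatim script):

#!/usr/bin/env bash
set -ex
conf=/var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
[[ -f $conf ]]        # fail fast if the per-job config is missing
source "$conf"        # exports SPDK_TEST_* toggles, e.g. SPDK_TEST_NVMF_NICS=mlx5
case $SPDK_TEST_NVMF_NICS in
    mlx5) DRIVERS=mlx5_ib ;;
esac
if [[ -n $DRIVERS ]]; then
    # Unload competing RDMA providers; rmmod fails for modules that are not
    # loaded, so tolerate the error just like the '+ true' in the trace.
    sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
    for D in $DRIVERS; do
        sudo modprobe "$D"   # bring back only the driver this run tests
    done
fi
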
00:00:46.880 + for M in /var/spdk/build-*-manifest.txt
00:00:46.880 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:46.880 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:00:46.880 + for M in /var/spdk/build-*-manifest.txt
00:00:46.880 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:46.880 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:00:46.880 ++ uname
00:00:46.880 + [[ Linux == \L\i\n\u\x ]]
00:00:46.880 + sudo dmesg -T
00:00:46.880 + sudo dmesg --clear
00:00:46.880 + dmesg_pid=2044042
00:00:46.880 + sudo dmesg -Tw
00:00:46.880 + [[ Fedora Linux == FreeBSD ]]
00:00:46.880 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:46.880 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:46.880 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:46.880 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:46.880 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:46.880 + [[ -x /usr/src/fio-static/fio ]]
00:00:46.880 + export FIO_BIN=/usr/src/fio-static/fio
00:00:46.880 + FIO_BIN=/usr/src/fio-static/fio
00:00:46.880 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:46.880 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:46.880 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:46.880 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:46.880 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:46.880 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:46.880 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:46.880 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:46.880 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:00:46.880 Test configuration:
00:00:46.880 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:46.880 SPDK_TEST_NVMF=1
00:00:46.880 SPDK_TEST_NVME_CLI=1
00:00:46.880 SPDK_TEST_NVMF_NICS=mlx5
00:00:46.880 SPDK_RUN_UBSAN=1
00:00:46.880 NET_TYPE=phy
00:00:46.880 SPDK_TEST_NATIVE_DPDK=v23.11
00:00:46.880 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
00:00:46.880 RUN_NIGHTLY=1
11:24:15 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
11:24:15 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
11:24:15 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
11:24:15 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
11:24:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
11:24:15 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
11:24:15 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
11:24:15 -- paths/export.sh@5 -- $ export PATH
11:24:15 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
11:24:15 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
11:24:15 -- common/autobuild_common.sh@435 -- $ date +%s
11:24:15 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1721553855.XXXXXX
11:24:15 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1721553855.lJk3BK
11:24:15 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
11:24:15 -- common/autobuild_common.sh@441 -- $ '[' -n v23.11 ']'
11:24:15 -- common/autobuild_common.sh@442 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
11:24:15 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk'
11:24:15 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp'
11:24:15 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
11:24:15 -- common/autobuild_common.sh@451 -- $ get_config_params
11:24:15 -- common/autotest_common.sh@387 -- $ xtrace_disable
11:24:15 -- common/autotest_common.sh@10 -- $ set +x
11:24:15 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build'
11:24:15 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
11:24:15 -- spdk/autobuild.sh@12 -- $ umask 022
11:24:15 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
11:24:15 -- spdk/autobuild.sh@16 -- $ date -u
00:00:46.881 Sun Jul 21 09:24:15 AM UTC 2024
11:24:15 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:46.881 LTS-59-g4b94202c6
11:24:15 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
11:24:15 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
11:24:15 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
11:24:15 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
11:24:15 -- common/autotest_common.sh@1083 -- $ xtrace_disable
11:24:15 -- common/autotest_common.sh@10 -- $ set +x
00:00:46.881 ************************************
00:00:46.881 START TEST ubsan
00:00:46.881 ************************************
11:24:15 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan'
00:00:46.881 using ubsan
00:00:46.881
00:00:46.881 real 0m0.000s
00:00:46.881 user 0m0.000s
00:00:46.881 sys 0m0.000s
11:24:15 -- common/autotest_common.sh@1105 -- $ xtrace_disable
11:24:15 -- common/autotest_common.sh@10 -- $ set +x
00:00:46.881 ************************************
00:00:46.881 END TEST ubsan
00:00:46.881 ************************************
11:24:15 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']'
11:24:15 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
11:24:15 -- common/autobuild_common.sh@427 -- $ run_test build_native_dpdk _build_native_dpdk
11:24:15 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']'
11:24:15 -- common/autotest_common.sh@1083 -- $ xtrace_disable
11:24:15 -- common/autotest_common.sh@10 -- $ set +x
00:00:46.881 ************************************
00:00:46.881 START TEST build_native_dpdk
00:00:46.881 ************************************
11:24:15 -- common/autotest_common.sh@1104 -- $ _build_native_dpdk
11:24:15 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
11:24:15 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
11:24:15 -- common/autobuild_common.sh@50 -- $ local compiler_version
11:24:15 -- common/autobuild_common.sh@51 -- $ local compiler
11:24:15 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
11:24:15 -- common/autobuild_common.sh@53 -- $ local repo=dpdk
11:24:15 -- common/autobuild_common.sh@55 -- $ compiler=gcc
11:24:15 -- common/autobuild_common.sh@61 -- $ export CC=gcc
11:24:15 -- common/autobuild_common.sh@61 -- $ CC=gcc
11:24:15 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
11:24:15 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
11:24:15 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
11:24:15 -- common/autobuild_common.sh@68 -- $ compiler_version=13
11:24:15 -- common/autobuild_common.sh@69 -- $ compiler_version=13
11:24:15 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
11:24:15 -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
11:24:15 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-phy-autotest/dpdk
11:24:15 -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/dpdk ]]
11:24:15 -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
11:24:15 -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk log --oneline -n 5
00:00:46.881 eeb0605f11 version: 23.11.0
00:00:46.881 238778122a doc: update release notes for 23.11
00:00:46.881 46aa6b3cfc doc: fix description of RSS features
00:00:46.881 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:00:46.881 7e421ae345 devtools: support skipping forbid rule check
11:24:15 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
11:24:15 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
11:24:15 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0
11:24:15 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
11:24:15 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
11:24:15 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
11:24:15 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
11:24:15 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
11:24:15 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
11:24:15 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
11:24:15 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
11:24:15 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
11:24:15 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
11:24:15 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
11:24:15 -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/dpdk
11:24:15 -- common/autobuild_common.sh@168 -- $ uname -s
11:24:15 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
11:24:15 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0
11:24:15 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0
11:24:15 -- scripts/common.sh@332 -- $ local ver1 ver1_l
11:24:15 -- scripts/common.sh@333 -- $ local ver2 ver2_l
11:24:15 -- scripts/common.sh@335 -- $ IFS=.-:
11:24:15 -- scripts/common.sh@335 -- $ read -ra ver1
11:24:15 -- scripts/common.sh@336 -- $ IFS=.-:
11:24:15 -- scripts/common.sh@336 -- $ read -ra ver2
11:24:15 -- scripts/common.sh@337 -- $ local 'op=<'
11:24:15 -- scripts/common.sh@339 -- $ ver1_l=3
11:24:15 -- scripts/common.sh@340 -- $ ver2_l=3
11:24:15 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
11:24:15 -- scripts/common.sh@343 -- $ case "$op" in
11:24:15 -- scripts/common.sh@344 -- $ : 1
11:24:15 -- scripts/common.sh@363 -- $ (( v = 0 ))
11:24:15 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
11:24:15 -- scripts/common.sh@364 -- $ decimal 23
11:24:15 -- scripts/common.sh@352 -- $ local d=23
11:24:15 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]]
11:24:15 -- scripts/common.sh@354 -- $ echo 23
11:24:15 -- scripts/common.sh@364 -- $ ver1[v]=23
11:24:15 -- scripts/common.sh@365 -- $ decimal 21
11:24:15 -- scripts/common.sh@352 -- $ local d=21
11:24:15 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]]
11:24:15 -- scripts/common.sh@354 -- $ echo 21
11:24:15 -- scripts/common.sh@365 -- $ ver2[v]=21
11:24:15 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
11:24:15 -- scripts/common.sh@366 -- $ return 1
11:24:15 -- common/autobuild_common.sh@173 -- $ patch -p1
00:00:46.899 patching file config/rte_config.h
00:00:46.899 Hunk #1 succeeded at 60 (offset 1 line).
11:24:15 -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false
11:24:15 -- common/autobuild_common.sh@178 -- $ uname -s
11:24:15 -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']'
11:24:15 -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
11:24:15 -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:00:52.151 The Meson build system
00:00:52.151 Version: 1.3.1
00:00:52.151 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/dpdk
00:00:52.151 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp
00:00:52.151 Build type: native build
00:00:52.151 Program cat found: YES (/usr/bin/cat)
00:00:52.151 Project name: DPDK
00:00:52.151 Project version: 23.11.0
00:00:52.151 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:00:52.151 C linker for the host machine: gcc ld.bfd 2.39-16
00:00:52.151 Host machine cpu family: x86_64
00:00:52.151 Host machine cpu: x86_64
00:00:52.151 Message: ## Building in Developer Mode ##
00:00:52.151 Program pkg-config found: YES (/usr/bin/pkg-config)
00:00:52.151 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/check-symbols.sh)
00:00:52.151 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh)
00:00:52.151 Program python3 found: YES (/usr/bin/python3)
00:00:52.151 Program cat found: YES (/usr/bin/cat)
00:00:52.151 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
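
The lt 23.11.0 21.11.0 trace above is scripts/common.sh comparing versions field by field after splitting on '.', '-' and ':'; it returns 1 because 23 > 21, so autobuild takes the patch path for a DPDK tree newer than 21.11.0. A simplified re-implementation of that comparison (a sketch, not SPDK's exact code):

#!/usr/bin/env bash
# lt A B: succeed (return 0) when version A sorts strictly before version B.
lt() {
    local IFS=.-:                      # same separators the traced cmp_versions uses
    local -a ver1=($1) ver2=($2)       # unquoted expansion splits on IFS
    local v
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # first lower field decides
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1   # first higher field decides
    done
    return 1                           # equal versions are not "less than"
}
lt 23.11.0 21.11.0 && echo "older than 21.11" || echo "21.11 or newer"   # prints: 21.11 or newer
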
00:00:52.151 Compiler for C supports arguments -march=native: YES
00:00:52.151 Checking for size of "void *" : 8
00:00:52.151 Checking for size of "void *" : 8 (cached)
00:00:52.151 Library m found: YES
00:00:52.151 Library numa found: YES
00:00:52.151 Has header "numaif.h" : YES
00:00:52.151 Library fdt found: NO
00:00:52.151 Library execinfo found: NO
00:00:52.151 Has header "execinfo.h" : YES
00:00:52.151 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:00:52.151 Run-time dependency libarchive found: NO (tried pkgconfig)
00:00:52.151 Run-time dependency libbsd found: NO (tried pkgconfig)
00:00:52.151 Run-time dependency jansson found: NO (tried pkgconfig)
00:00:52.151 Run-time dependency openssl found: YES 3.0.9
00:00:52.151 Run-time dependency libpcap found: YES 1.10.4
00:00:52.151 Has header "pcap.h" with dependency libpcap: YES
00:00:52.151 Compiler for C supports arguments -Wcast-qual: YES
00:00:52.151 Compiler for C supports arguments -Wdeprecated: YES
00:00:52.151 Compiler for C supports arguments -Wformat: YES
00:00:52.151 Compiler for C supports arguments -Wformat-nonliteral: NO
00:00:52.151 Compiler for C supports arguments -Wformat-security: NO
00:00:52.151 Compiler for C supports arguments -Wmissing-declarations: YES
00:00:52.151 Compiler for C supports arguments -Wmissing-prototypes: YES
00:00:52.151 Compiler for C supports arguments -Wnested-externs: YES
00:00:52.151 Compiler for C supports arguments -Wold-style-definition: YES
00:00:52.151 Compiler for C supports arguments -Wpointer-arith: YES
00:00:52.151 Compiler for C supports arguments -Wsign-compare: YES
00:00:52.151 Compiler for C supports arguments -Wstrict-prototypes: YES
00:00:52.151 Compiler for C supports arguments -Wundef: YES
00:00:52.151 Compiler for C supports arguments -Wwrite-strings: YES
00:00:52.151 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:00:52.151 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:00:52.151 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:00:52.151 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:00:52.151 Program objdump found: YES (/usr/bin/objdump)
00:00:52.151 Compiler for C supports arguments -mavx512f: YES
00:00:52.151 Checking if "AVX512 checking" compiles: YES
00:00:52.151 Fetching value of define "__SSE4_2__" : 1
00:00:52.151 Fetching value of define "__AES__" : 1
00:00:52.151 Fetching value of define "__AVX__" : 1
00:00:52.151 Fetching value of define "__AVX2__" : 1
00:00:52.151 Fetching value of define "__AVX512BW__" : 1
00:00:52.151 Fetching value of define "__AVX512CD__" : 1
00:00:52.151 Fetching value of define "__AVX512DQ__" : 1
00:00:52.151 Fetching value of define "__AVX512F__" : 1
00:00:52.151 Fetching value of define "__AVX512VL__" : 1
00:00:52.151 Fetching value of define "__PCLMUL__" : 1
00:00:52.151 Fetching value of define "__RDRND__" : 1
00:00:52.151 Fetching value of define "__RDSEED__" : 1
00:00:52.151 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:00:52.151 Fetching value of define "__znver1__" : (undefined)
00:00:52.151 Fetching value of define "__znver2__" : (undefined)
00:00:52.151 Fetching value of define "__znver3__" : (undefined)
00:00:52.151 Fetching value of define "__znver4__" : (undefined)
00:00:52.151 Compiler for C supports arguments -Wno-format-truncation: YES
00:00:52.151 Message: lib/log: Defining dependency "log"
00:00:52.151 Message: lib/kvargs: Defining dependency "kvargs"
00:00:52.151 Message: lib/telemetry: Defining dependency "telemetry"
00:00:52.151 Checking for function "getentropy" : NO
00:00:52.151 Message: lib/eal: Defining dependency "eal"
00:00:52.151 Message: lib/ring: Defining dependency "ring"
00:00:52.151 Message: lib/rcu: Defining dependency "rcu"
00:00:52.151 Message: lib/mempool: Defining dependency "mempool"
00:00:52.151 Message: lib/mbuf: Defining dependency "mbuf"
00:00:52.151 Fetching value of define "__PCLMUL__" : 1 (cached)
00:00:52.151 Fetching value of define "__AVX512F__" : 1 (cached)
00:00:52.151 Fetching value of define "__AVX512BW__" : 1 (cached)
00:00:52.151 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:00:52.151 Fetching value of define "__AVX512VL__" : 1 (cached)
00:00:52.151 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:00:52.151 Compiler for C supports arguments -mpclmul: YES
00:00:52.151 Compiler for C supports arguments -maes: YES
00:00:52.151 Compiler for C supports arguments -mavx512f: YES (cached)
00:00:52.151 Compiler for C supports arguments -mavx512bw: YES
00:00:52.151 Compiler for C supports arguments -mavx512dq: YES
00:00:52.151 Compiler for C supports arguments -mavx512vl: YES
00:00:52.151 Compiler for C supports arguments -mvpclmulqdq: YES
00:00:52.151 Compiler for C supports arguments -mavx2: YES
00:00:52.151 Compiler for C supports arguments -mavx: YES
00:00:52.151 Message: lib/net: Defining dependency "net"
00:00:52.151 Message: lib/meter: Defining dependency "meter"
00:00:52.151 Message: lib/ethdev: Defining dependency "ethdev"
00:00:52.151 Message: lib/pci: Defining dependency "pci"
00:00:52.151 Message: lib/cmdline: Defining dependency "cmdline"
00:00:52.151 Message: lib/metrics: Defining dependency "metrics"
00:00:52.151 Message: lib/hash: Defining dependency "hash"
00:00:52.151 Message: lib/timer: Defining dependency "timer"
00:00:52.151 Fetching value of define "__AVX512F__" : 1 (cached)
00:00:52.151 Fetching value of define "__AVX512VL__" : 1 (cached)
00:00:52.151 Fetching value of define "__AVX512CD__" : 1 (cached)
00:00:52.151 Fetching value of define "__AVX512BW__" : 1 (cached)
00:00:52.151 Message: lib/acl: Defining dependency "acl"
00:00:52.151 Message: lib/bbdev: Defining dependency "bbdev"
00:00:52.151 Message: lib/bitratestats: Defining dependency "bitratestats"
00:00:52.151 Run-time dependency libelf found: YES 0.190
00:00:52.151 Message: lib/bpf: Defining dependency "bpf"
00:00:52.151 Message: lib/cfgfile: Defining dependency "cfgfile"
00:00:52.151 Message: lib/compressdev: Defining dependency "compressdev"
00:00:52.151 Message: lib/cryptodev: Defining dependency "cryptodev"
00:00:52.151 Message: lib/distributor: Defining dependency "distributor"
00:00:52.151 Message: lib/dmadev: Defining dependency "dmadev"
00:00:52.151 Message: lib/efd: Defining dependency "efd"
00:00:52.151 Message: lib/eventdev: Defining dependency "eventdev"
00:00:52.151 Message: lib/dispatcher: Defining dependency "dispatcher"
00:00:52.151 Message: lib/gpudev: Defining dependency "gpudev"
00:00:52.151 Message: lib/gro: Defining dependency "gro"
00:00:52.151 Message: lib/gso: Defining dependency "gso"
00:00:52.151 Message: lib/ip_frag: Defining dependency "ip_frag"
00:00:52.151 Message: lib/jobstats: Defining dependency "jobstats"
00:00:52.151 Message: lib/latencystats: Defining dependency "latencystats"
00:00:52.151 Message: lib/lpm: Defining dependency "lpm"
00:00:52.151 Fetching value of define "__AVX512F__" : 1 (cached)
00:00:52.151 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:00:52.151 Fetching value of define "__AVX512IFMA__" : (undefined)
00:00:52.151 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES
00:00:52.152 Message: lib/member: Defining dependency "member"
00:00:52.152 Message: lib/pcapng: Defining dependency "pcapng"
00:00:52.152 Compiler for C supports arguments -Wno-cast-qual: YES
00:00:52.152 Message: lib/power: Defining dependency "power"
00:00:52.152 Message: lib/rawdev: Defining dependency "rawdev"
00:00:52.152 Message: lib/regexdev: Defining dependency "regexdev"
00:00:52.152 Message: lib/mldev: Defining dependency "mldev"
00:00:52.152 Message: lib/rib: Defining dependency "rib"
00:00:52.152 Message: lib/reorder: Defining dependency "reorder"
00:00:52.152 Message: lib/sched: Defining dependency "sched"
00:00:52.152 Message: lib/security: Defining dependency "security"
00:00:52.152 Message: lib/stack: Defining dependency "stack"
00:00:52.152 Has header "linux/userfaultfd.h" : YES
00:00:52.152 Has header "linux/vduse.h" : YES
00:00:52.152 Message: lib/vhost: Defining dependency "vhost"
00:00:52.152 Message: lib/ipsec: Defining dependency "ipsec"
00:00:52.152 Message: lib/pdcp: Defining dependency "pdcp"
00:00:52.152 Fetching value of define "__AVX512F__" : 1 (cached)
00:00:52.152 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:00:52.152 Fetching value of define "__AVX512BW__" : 1 (cached)
00:00:52.152 Message: lib/fib: Defining dependency "fib"
00:00:52.152 Message: lib/port: Defining dependency "port"
00:00:52.152 Message: lib/pdump: Defining dependency "pdump"
00:00:52.152 Message: lib/table: Defining dependency "table"
00:00:52.152 Message: lib/pipeline: Defining dependency "pipeline"
00:00:52.152 Message: lib/graph: Defining dependency "graph"
00:00:52.152 Message: lib/node: Defining dependency "node"
00:00:52.734 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:00:52.734 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:00:52.734 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:00:52.734 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:00:52.734 Compiler for C supports arguments -Wno-sign-compare: YES
00:00:52.734 Compiler for C supports arguments -Wno-unused-value: YES
00:00:52.734 Compiler for C supports arguments -Wno-format: YES
00:00:52.734 Compiler for C supports arguments -Wno-format-security: YES
00:00:52.734 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:00:52.734 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:00:52.734 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:00:52.734 Compiler for C supports arguments -Wno-unused-parameter: YES
00:00:52.734 Fetching value of define "__AVX512F__" : 1 (cached)
00:00:52.734 Fetching value of define "__AVX512BW__" : 1 (cached)
00:00:52.734 Compiler for C supports arguments -mavx512f: YES (cached)
00:00:52.734 Compiler for C supports arguments -mavx512bw: YES (cached)
00:00:52.734 Compiler for C supports arguments -march=skylake-avx512: YES
00:00:52.734 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:00:52.734 Has header "sys/epoll.h" : YES
00:00:52.734 Program doxygen found: YES (/usr/bin/doxygen)
00:00:52.734 Configuring doxy-api-html.conf using configuration
00:00:52.734 Configuring doxy-api-man.conf using configuration
00:00:52.734 Program mandb found: YES (/usr/bin/mandb)
00:00:52.734 Program sphinx-build found: NO
00:00:52.734 Configuring rte_build_config.h using configuration
00:00:52.734 Message:
00:00:52.734 =================
00:00:52.734 Applications Enabled
00:00:52.734 =================
00:00:52.734
00:00:52.734 apps:
00:00:52.734 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf,
00:00:52.734 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline,
00:00:52.734 test-pmd, test-regex, test-sad, test-security-perf,
00:00:52.734
00:00:52.734 Message:
00:00:52.734 =================
00:00:52.734 Libraries Enabled
00:00:52.734 =================
00:00:52.734
00:00:52.734 libs:
00:00:52.734 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:00:52.734 net, meter, ethdev, pci, cmdline, metrics, hash, timer,
00:00:52.734 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor,
00:00:52.734 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag,
00:00:52.734 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev,
00:00:52.734 mldev, rib, reorder, sched, security, stack, vhost, ipsec,
00:00:52.734 pdcp, fib, port, pdump, table, pipeline, graph, node,
00:00:52.734
00:00:52.734
00:00:52.734 Message:
00:00:52.734 ===============
00:00:52.734 Drivers Enabled
00:00:52.734 ===============
00:00:52.734
00:00:52.734 common:
00:00:52.734
00:00:52.734 bus:
00:00:52.734 pci, vdev,
00:00:52.734 mempool:
00:00:52.734 ring,
00:00:52.734 dma:
00:00:52.734
00:00:52.734 net:
00:00:52.734 i40e,
00:00:52.734 raw:
00:00:52.734
00:00:52.734 crypto:
00:00:52.734
00:00:52.734 compress:
00:00:52.734
00:00:52.734 regex:
00:00:52.734
00:00:52.734 ml:
00:00:52.734
00:00:52.734 vdpa:
00:00:52.734
00:00:52.734 event:
00:00:52.734
00:00:52.734 baseband:
00:00:52.734
00:00:52.734 gpu:
00:00:52.734
00:00:52.734
00:00:52.734 Message:
00:00:52.734 =================
00:00:52.734 Content Skipped
00:00:52.734 =================
00:00:52.734
00:00:52.734 apps:
00:00:52.734
00:00:52.734 libs:
00:00:52.734
00:00:52.734 drivers:
00:00:52.734 common/cpt: not in enabled drivers build config
00:00:52.734 common/dpaax: not in enabled drivers build config
00:00:52.734 common/iavf: not in enabled drivers build config
00:00:52.734 common/idpf: not in enabled drivers build config
00:00:52.734 common/mvep: not in enabled drivers build config
00:00:52.734 common/octeontx: not in enabled drivers build config
00:00:52.734 bus/auxiliary: not in enabled drivers build config
00:00:52.734 bus/cdx: not in enabled drivers build config
00:00:52.734 bus/dpaa: not in enabled drivers build config
00:00:52.734 bus/fslmc: not in enabled drivers build config
00:00:52.734 bus/ifpga: not in enabled drivers build config
00:00:52.734 bus/platform: not in enabled drivers build config
00:00:52.734 bus/vmbus: not in enabled drivers build config
00:00:52.734 common/cnxk: not in enabled drivers build config
00:00:52.734 common/mlx5: not in enabled drivers build config
00:00:52.734 common/nfp: not in enabled drivers build config
00:00:52.734 common/qat: not in enabled drivers build config
00:00:52.734 common/sfc_efx: not in enabled drivers build config
00:00:52.734 mempool/bucket: not in enabled drivers build config
00:00:52.734 mempool/cnxk: not in enabled drivers build config
00:00:52.734 mempool/dpaa: not in enabled drivers build config
00:00:52.734 mempool/dpaa2: not in enabled drivers build config
00:00:52.734 mempool/octeontx: not in enabled drivers build config
00:00:52.734 mempool/stack: not in enabled drivers build config
00:00:52.734 dma/cnxk: not in enabled drivers build config
00:00:52.734 dma/dpaa: not in enabled drivers build config
00:00:52.734 dma/dpaa2: not in enabled drivers build config
00:00:52.734 dma/hisilicon: not in enabled drivers build config
00:00:52.734 dma/idxd: not in enabled drivers build config
00:00:52.734 dma/ioat: not in enabled drivers build config
00:00:52.734 dma/skeleton: not in enabled drivers build config
00:00:52.734 net/af_packet: not in enabled drivers build config
00:00:52.734 net/af_xdp: not in enabled drivers build config
00:00:52.734 net/ark: not in enabled drivers build config
00:00:52.734 net/atlantic: not in enabled drivers build config
00:00:52.734 net/avp: not in enabled drivers build config
00:00:52.734 net/axgbe: not in enabled drivers build config
00:00:52.734 net/bnx2x: not in enabled drivers build config
00:00:52.734 net/bnxt: not in enabled drivers build config
00:00:52.734 net/bonding: not in enabled drivers build config
00:00:52.734 net/cnxk: not in enabled drivers build config
00:00:52.734 net/cpfl: not in enabled drivers build config
00:00:52.734 net/cxgbe: not in enabled drivers build config
00:00:52.734 net/dpaa: not in enabled drivers build config
00:00:52.734 net/dpaa2: not in enabled drivers build config
00:00:52.734 net/e1000: not in enabled drivers build config
00:00:52.734 net/ena: not in enabled drivers build config
00:00:52.734 net/enetc: not in enabled drivers build config
00:00:52.734 net/enetfec: not in enabled drivers build config
00:00:52.734 net/enic: not in enabled drivers build config
00:00:52.734 net/failsafe: not in enabled drivers build config
00:00:52.734 net/fm10k: not in enabled drivers build config
00:00:52.734 net/gve: not in enabled drivers build config
00:00:52.734 net/hinic: not in enabled drivers build config
00:00:52.734 net/hns3: not in enabled drivers build config
00:00:52.734 net/iavf: not in enabled drivers build config
00:00:52.734 net/ice: not in enabled drivers build config
00:00:52.734 net/idpf: not in enabled drivers build config
00:00:52.735 net/igc: not in enabled drivers build config
00:00:52.735 net/ionic: not in enabled drivers build config
00:00:52.735 net/ipn3ke: not in enabled drivers build config
00:00:52.735 net/ixgbe: not in enabled drivers build config
00:00:52.735 net/mana: not in enabled drivers build config
00:00:52.735 net/memif: not in enabled drivers build config
00:00:52.735 net/mlx4: not in enabled drivers build config
00:00:52.735 net/mlx5: not in enabled drivers build config
00:00:52.735 net/mvneta: not in enabled drivers build config
00:00:52.735 net/mvpp2: not in enabled drivers build config
00:00:52.735 net/netvsc: not in enabled drivers build config
00:00:52.735 net/nfb: not in enabled drivers build config
00:00:52.735 net/nfp: not in enabled drivers build config
00:00:52.735 net/ngbe: not in enabled drivers build config
00:00:52.735 net/null: not in enabled drivers build config
00:00:52.735 net/octeontx: not in enabled drivers build config
00:00:52.735 net/octeon_ep: not in enabled drivers build config
00:00:52.735 net/pcap: not in enabled drivers build config
00:00:52.735 net/pfe: not in enabled drivers build config
00:00:52.735 net/qede: not in enabled drivers build config
00:00:52.735 net/ring: not in enabled drivers build config
00:00:52.735 net/sfc: not in enabled drivers build config
00:00:52.735 net/softnic: not in enabled drivers build config
00:00:52.735 net/tap: not in enabled drivers build config
00:00:52.735 net/thunderx: not in enabled drivers build config
00:00:52.735 net/txgbe: not in enabled drivers build config
00:00:52.735 net/vdev_netvsc: not in enabled drivers build config
00:00:52.735 net/vhost: not in enabled drivers build config
00:00:52.735 net/virtio: not in enabled drivers build config
00:00:52.735 net/vmxnet3: not in enabled drivers build config
00:00:52.735 raw/cnxk_bphy: not in enabled drivers build config
00:00:52.735 raw/cnxk_gpio: not in enabled drivers build config
00:00:52.735 raw/dpaa2_cmdif: not in enabled drivers build config
00:00:52.735 raw/ifpga: not in enabled drivers build config
00:00:52.735 raw/ntb: not in enabled drivers build config
00:00:52.735 raw/skeleton: not in enabled drivers build config
00:00:52.735 crypto/armv8: not in enabled drivers build config
00:00:52.735 crypto/bcmfs: not in enabled drivers build config
00:00:52.735 crypto/caam_jr: not in enabled drivers build config
00:00:52.735 crypto/ccp: not in enabled drivers build config
00:00:52.735 crypto/cnxk: not in enabled drivers build config
00:00:52.735 crypto/dpaa_sec: not in enabled drivers build config
00:00:52.735 crypto/dpaa2_sec: not in enabled drivers build config
00:00:52.735 crypto/ipsec_mb: not in enabled drivers build config
00:00:52.735 crypto/mlx5: not in enabled drivers build config
00:00:52.735 crypto/mvsam: not in enabled drivers build config
00:00:52.735 crypto/nitrox: not in enabled drivers build config
00:00:52.735 crypto/null: not in enabled drivers build config
00:00:52.735 crypto/octeontx: not in enabled drivers build config
00:00:52.735 crypto/openssl: not in enabled drivers build config
00:00:52.735 crypto/scheduler: not in enabled drivers build config
00:00:52.735 crypto/uadk: not in enabled drivers build config
00:00:52.735 crypto/virtio: not in enabled drivers build config
00:00:52.735 compress/isal: not in enabled drivers build config
00:00:52.735 compress/mlx5: not in enabled drivers build config
00:00:52.735 compress/octeontx: not in enabled drivers build config
00:00:52.735 compress/zlib: not in enabled drivers build config
00:00:52.735 regex/mlx5: not in enabled drivers build config
00:00:52.735 regex/cn9k: not in enabled drivers build config
00:00:52.735 ml/cnxk: not in enabled drivers build config
00:00:52.735 vdpa/ifc: not in enabled drivers build config
00:00:52.735 vdpa/mlx5: not in enabled drivers build config
00:00:52.735 vdpa/nfp: not in enabled drivers build config
00:00:52.735 vdpa/sfc: not in enabled drivers build config
00:00:52.735 event/cnxk: not in enabled drivers build config
00:00:52.735 event/dlb2: not in enabled drivers build config
00:00:52.735 event/dpaa: not in enabled drivers build config
00:00:52.735 event/dpaa2: not in enabled drivers build config
00:00:52.735 event/dsw: not in enabled drivers build config
00:00:52.735 event/opdl: not in enabled drivers build config
00:00:52.735 event/skeleton: not in enabled drivers build config
00:00:52.735 event/sw: not in enabled drivers build config
00:00:52.735 event/octeontx: not in enabled drivers build config
00:00:52.735 baseband/acc: not in enabled drivers build config
00:00:52.735 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:00:52.735 baseband/fpga_lte_fec: not in enabled drivers build config
00:00:52.735 baseband/la12xx: not in enabled drivers build config
00:00:52.735 baseband/null: not in enabled drivers build config
00:00:52.735 baseband/turbo_sw: not in enabled drivers build config
00:00:52.735 gpu/cuda: not in enabled drivers build config
00:00:52.735
00:00:52.735
00:00:52.735 Build targets in project: 217
00:00:52.735
00:00:52.735 DPDK 23.11.0
00:00:52.735
00:00:52.735 User defined options
00:00:52.735 libdir : lib
00:00:52.735 prefix : /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
00:00:52.735 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:00:52.735 c_link_args :
00:00:52.735 enable_docs : false
00:00:52.735 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:00:52.735 enable_kmods : false
00:00:52.735 machine : native
00:00:52.735 tests : false
00:00:52.735
00:00:52.735 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:00:52.735 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
11:24:21 -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp -j112
00:00:52.735 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp'
00:00:52.735 [1/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:00:53.000 [2/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:00:53.000 [3/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:00:53.000 [4/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:00:53.000 [5/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:00:53.000 [6/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:00:53.000 [7/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:00:53.000 [8/707] Linking static target lib/librte_kvargs.a
00:00:53.000 [9/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:00:53.000 [10/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:00:53.000 [11/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:00:53.000 [12/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:00:53.000 [13/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:00:53.000 [14/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:00:53.000 [15/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:00:53.000 [16/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:00:53.000 [17/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:00:53.000 [18/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:00:53.000 [19/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:00:53.000 [20/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:00:53.000 [21/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:00:53.000 [22/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:00:53.000 [23/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:00:53.000 [24/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:00:53.000 [25/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:00:53.000 [26/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:00:53.262 [27/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:00:53.262 [28/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:00:53.262 [29/707] Linking static target lib/librte_pci.a
00:00:53.262 [30/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:00:53.262 [31/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:00:53.262 [32/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:00:53.262 [33/707] Compiling C object lib/librte_log.a.p/log_log.c.o
00:00:53.262 [34/707] Linking static target lib/librte_log.a
00:00:53.262 [35/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:00:53.262 [36/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:00:53.262 [37/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:00:53.523 [38/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:00:53.523 [39/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:00:53.523 [40/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:00:53.523 [41/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:00:53.523 [42/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:00:53.523 [43/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:00:53.523 [44/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:00:53.523 [45/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:00:53.523 [46/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:00:53.523 [47/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:00:53.523 [48/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:00:53.523 [49/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:00:53.523 [50/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:00:53.523 [51/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:00:53.523 [52/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:00:53.523 [53/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:00:53.523 [54/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:00:53.523 [55/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:00:53.523 [56/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:00:53.523 [57/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:00:53.523 [58/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:00:53.523 [59/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:00:53.523 [60/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:00:53.523 [61/707] Linking static target lib/librte_meter.a
00:00:53.523 [62/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:00:53.523 [63/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:00:53.523 [64/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:00:53.523 [65/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:00:53.523 [66/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:00:53.523 [67/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:00:53.523 [68/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:00:53.523 [69/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:00:53.523 [70/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:00:53.523 [71/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:00:53.781 [72/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:00:53.781 [73/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:00:53.781 [74/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:00:53.781 [75/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:00:53.781 [76/707] Linking static target lib/librte_ring.a
00:00:53.781 [77/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:00:53.781 [78/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:00:53.781 [79/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:00:53.781 [80/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:00:53.781 [81/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:00:53.781 [82/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:00:53.781 [83/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:00:53.781 [84/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:00:53.781 [85/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:00:53.781 [86/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:00:53.781 [87/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:00:53.781 [88/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:00:53.781 [89/707] Linking static target lib/librte_cmdline.a
00:00:53.781 [90/707] Linking static target lib/net/libnet_crc_avx512_lib.a
00:00:53.781 [91/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:00:53.781 [92/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:00:53.781 [93/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:00:53.781 [94/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:00:53.781 [95/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:00:53.781 [96/707] Linking static target lib/librte_metrics.a
00:00:53.781 [97/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:00:53.781 [98/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:00:53.781 [99/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:00:53.781 [100/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:00:53.781 [101/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:00:53.781 [102/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:00:53.781 [103/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:00:53.781 [104/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:00:53.781 [105/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:00:53.781 [106/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:00:53.781 [107/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:00:53.781 [108/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:00:53.781 [109/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:00:53.781 [110/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:00:53.781 [111/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:00:53.781 [112/707] Linking static target lib/librte_bitratestats.a
00:00:53.781 [113/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:00:53.781 [114/707] Linking static target lib/librte_cfgfile.a
00:00:53.781 [115/707] Linking static target lib/librte_net.a
00:00:54.041 [116/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:00:54.041 [117/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:00:54.041 [118/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:00:54.041 [119/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:00:54.041 [120/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:00:54.041 [121/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:00:54.041 [122/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:00:54.041 [123/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:00:54.041 [124/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:00:54.041 [125/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:00:54.041 [126/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:00:54.041 [127/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:00:54.041 [128/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:00:54.041 [129/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:00:54.041 [130/707] Linking target lib/librte_log.so.24.0
00:00:54.041 [131/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:00:54.041 [132/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:00:54.041 [133/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:00:54.041 [134/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:00:54.041 [135/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:00:54.041 [136/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:00:54.041 [137/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:00:54.041 [138/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:00:54.041 [139/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:00:54.041 [140/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:00:54.041 [141/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:00:54.041 [142/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:00:54.041 [143/707] Linking static target lib/librte_timer.a
00:00:54.041 [144/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:00:54.041 [145/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:00:54.301 [146/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:00:54.301 [147/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:00:54.301 [148/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:00:54.301 [149/707] Linking static target lib/librte_mempool.a
00:00:54.301 [150/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:00:54.301 [151/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:00:54.301 [152/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:00:54.301 [153/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:00:54.301 [154/707] Compiling C object
lib/librte_sched.a.p/sched_rte_approx.c.o 00:00:54.301 [155/707] Linking static target lib/librte_bbdev.a 00:00:54.301 [156/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:00:54.301 [157/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:00:54.301 [158/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:00:54.301 [159/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:00:54.301 [160/707] Linking target lib/librte_kvargs.so.24.0 00:00:54.301 [161/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:00:54.301 [162/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:00:54.301 [163/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:00:54.301 [164/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:00:54.301 [165/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:00:54.301 [166/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:00:54.301 [167/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:00:54.301 [168/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:00:54.301 [169/707] Linking static target lib/librte_jobstats.a 00:00:54.301 [170/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:00:54.301 [171/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:00:54.301 [172/707] Linking static target lib/librte_compressdev.a 00:00:54.301 [173/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:00:54.301 [174/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:00:54.301 [175/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:00:54.301 [176/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:00:54.301 [177/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:00:54.301 [178/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:00:54.563 [179/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:00:54.563 [180/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:00:54.563 [181/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:00:54.563 [182/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:00:54.563 [183/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:00:54.563 [184/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:00:54.563 [185/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:00:54.563 [186/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:00:54.563 [187/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:00:54.563 [188/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:00:54.563 [189/707] Linking static target lib/librte_dispatcher.a 00:00:54.563 [190/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:00:54.563 [191/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:00:54.563 [192/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:00:54.563 [193/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:00:54.563 [194/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 
00:00:54.563 [195/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:00:54.563 [196/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:00:54.563 [197/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:00:54.563 [198/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:00:54.563 [199/707] Linking static target lib/librte_latencystats.a 00:00:54.563 [200/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:00:54.563 [201/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:00:54.563 [202/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:00:54.563 [203/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:00:54.563 [204/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:00:54.563 [205/707] Linking static target lib/member/libsketch_avx512_tmp.a 00:00:54.563 [206/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:00:54.563 [207/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:00:54.563 [208/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:00:54.563 [209/707] Linking static target lib/librte_rcu.a 00:00:54.563 [210/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:00:54.563 [211/707] Linking static target lib/librte_telemetry.a 00:00:54.563 [212/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:00:54.563 [213/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:00:54.563 [214/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:00:54.563 [215/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:00:54.563 [216/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:00:54.563 [217/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:00:54.563 [218/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:00:54.826 [219/707] Linking static target lib/librte_gro.a 00:00:54.826 [220/707] Linking static target lib/librte_gpudev.a 00:00:54.826 [221/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:00:54.826 [222/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:00:54.826 [223/707] Linking static target lib/librte_stack.a 00:00:54.826 [224/707] Linking static target lib/librte_eal.a 00:00:54.826 [225/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:00:54.826 [226/707] Linking static target lib/librte_dmadev.a 00:00:54.826 [227/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:00:54.826 [228/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:00:54.826 [229/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:00:54.826 [230/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:00:54.826 [231/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:00:54.826 [232/707] Linking static target lib/librte_distributor.a 00:00:54.826 [233/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:00:54.826 [234/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:00:54.826 [235/707] Linking static target lib/librte_regexdev.a 00:00:54.826 [236/707] Linking static target lib/librte_gso.a 00:00:54.826 [237/707] Compiling C object 
lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:00:54.826 [238/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:00:54.826 [239/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:00:54.826 [240/707] Linking static target lib/librte_rawdev.a 00:00:54.826 [241/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:00:54.826 [242/707] Linking static target lib/librte_mldev.a 00:00:54.826 [243/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:00:54.826 [244/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:00:54.826 [245/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:00:54.826 [246/707] Linking static target lib/librte_mbuf.a 00:00:54.826 [247/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:00:54.826 [248/707] Linking static target lib/librte_ip_frag.a 00:00:54.826 [249/707] Linking static target lib/librte_power.a 00:00:54.826 [250/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:00:54.826 [251/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:00:55.088 [252/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:00:55.088 [253/707] Linking static target lib/librte_reorder.a 00:00:55.088 [254/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:00:55.088 [255/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:00:55.088 [256/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:00:55.088 [257/707] Linking static target lib/librte_pcapng.a 00:00:55.088 [258/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:00:55.088 [259/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:00:55.088 [260/707] Linking static target lib/librte_bpf.a 00:00:55.088 [261/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:00:55.088 [262/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:00:55.088 [263/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:00:55.088 [264/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:00:55.088 [265/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:00:55.088 [266/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:00:55.088 [267/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:00:55.088 [268/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:00:55.088 [269/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:00:55.088 [270/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:00:55.088 [271/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:00:55.088 [272/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:00:55.088 [273/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:00:55.088 [274/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:00:55.088 [275/707] Linking static target lib/librte_security.a 00:00:55.088 [276/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:00:55.088 [277/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:00:55.088 [278/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 
00:00:55.088 [279/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:00:55.088 [280/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:00:55.352 [281/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:00:55.352 [282/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:00:55.352 [283/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:55.352 [284/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:00:55.352 [285/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:00:55.352 [286/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:00:55.352 [287/707] Compiling C object lib/librte_node.a.p/node_null.c.o 00:00:55.352 [288/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:00:55.352 [289/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:55.352 [290/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:00:55.352 [291/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:00:55.352 [292/707] Linking static target lib/librte_rib.a 00:00:55.352 [293/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:00:55.352 [294/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:00:55.352 [295/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:00:55.352 [296/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:00:55.352 [297/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:00:55.352 [298/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:55.352 [299/707] Linking static target lib/librte_lpm.a 00:00:55.352 [300/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:00:55.352 [301/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:00:55.352 [302/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:00:55.352 [303/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:00:55.352 [304/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:00:55.612 [305/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:00:55.612 [306/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:00:55.612 [307/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:00:55.612 [308/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:00:55.612 [309/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:00:55.612 [310/707] Linking target lib/librte_telemetry.so.24.0 00:00:55.612 [311/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:00:55.612 [312/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:00:55.612 [313/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:00:55.612 [314/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:00:55.612 [315/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:00:55.612 [316/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:00:55.612 [317/707] 
Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:00:55.613 [318/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:00:55.613 [319/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:55.613 [320/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:00:55.613 [321/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:00:55.613 [322/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:00:55.613 [323/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:00:55.613 [324/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:00:55.613 [325/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:00:55.613 [326/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:00:55.613 [327/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:00:55.613 [328/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:00:55.613 [329/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:00:55.613 [330/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:00:55.613 [331/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:00:55.613 [332/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:00:55.613 [333/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:00:55.613 [334/707] Linking static target lib/librte_efd.a 00:00:55.879 [335/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:00:55.879 [336/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:00:55.879 [337/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:00:55.879 [338/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:00:55.879 [339/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:00:55.879 [340/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:00:55.879 [341/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:00:55.879 [342/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:00:55.879 [343/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:00:55.879 [344/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:00:55.879 [345/707] Compiling C object lib/librte_node.a.p/node_log.c.o 00:00:55.879 [346/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:00:55.879 [347/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:00:55.879 [348/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:00:55.879 [349/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:00:55.879 [350/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:00:55.879 [351/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:00:55.879 [352/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:00:55.879 [353/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:00:56.144 [354/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:00:56.144 [355/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:00:56.144 [356/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 
00:00:56.144 [357/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:00:56.144 [358/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:00:56.144 [359/707] Linking static target lib/librte_fib.a 00:00:56.144 [360/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:00:56.144 [361/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:00:56.144 [362/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:00:56.144 [363/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:00:56.144 [364/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:00:56.144 [365/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:56.145 [366/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:00:56.145 [367/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:00:56.145 [368/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:00:56.145 [369/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:00:56.145 [370/707] Linking static target drivers/libtmp_rte_bus_vdev.a 00:00:56.145 [371/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:00:56.145 [372/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:00:56.145 [373/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:00:56.145 [374/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:00:56.145 [375/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:00:56.145 [376/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:00:56.145 [377/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:00:56.145 [378/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:56.145 [379/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:00:56.403 [380/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:00:56.403 [381/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:00:56.403 [382/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:00:56.403 [383/707] Linking static target lib/librte_pdump.a 00:00:56.403 [384/707] Linking static target lib/librte_graph.a 00:00:56.403 [385/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:00:56.403 [386/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:00:56.403 [387/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:00:56.403 [388/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:00:56.403 [389/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:00:56.403 [390/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:00:56.403 [391/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:00:56.403 [392/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:00:56.403 [393/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:00:56.403 [394/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:00:56.403 [395/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:00:56.403 [396/707] Linking static target drivers/libtmp_rte_bus_pci.a 00:00:56.403 [397/707] Compiling 
C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:00:56.403 [398/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:00:56.403 [399/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:00:56.403 [400/707] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:00:56.403 [401/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:00:56.403 [402/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:00:56.403 [403/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:00:56.403 [404/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:00:56.403 [405/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:00:56.403 [406/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:00:56.403 [407/707] Linking static target drivers/librte_bus_vdev.a 00:00:56.403 [408/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:00:56.668 [409/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:00:56.668 [410/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:00:56.668 [411/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:00:56.668 [412/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:00:56.668 [413/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:00:56.668 [414/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:00:56.668 [415/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:00:56.668 [416/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:00:56.668 [417/707] Linking static target lib/librte_table.a 00:00:56.668 [418/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:00:56.668 [419/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:00:56.668 [420/707] Linking static target lib/librte_sched.a 00:00:56.668 [421/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:00:56.668 [422/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:00:56.668 [423/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:00:56.668 [424/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:00:56.668 [425/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:00:56.668 [426/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:00:56.668 [427/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:00:56.668 [428/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:00:56.668 [429/707] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:00:56.668 [430/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:00:56.932 [431/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:00:56.932 [432/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:00:56.932 [433/707] Linking static target lib/librte_cryptodev.a 00:00:56.932 [434/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:00:56.932 [435/707] Linking static target drivers/librte_bus_pci.a 00:00:56.932 [436/707] Compiling C object 
drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:00:56.932 [437/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:00:56.932 [438/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:00:56.932 [439/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:00:56.932 [440/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:00:56.932 [441/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:00:56.932 [442/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:00:56.932 [443/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:00:56.932 [444/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:00:56.932 [445/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:00:56.932 [446/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:56.932 [447/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:00:56.932 [448/707] Linking static target lib/librte_ipsec.a 00:00:56.932 [449/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:00:56.932 [450/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:00:56.932 [451/707] Linking static target lib/librte_member.a 00:00:56.932 [452/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:00:56.932 [453/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:00:56.932 [454/707] Linking static target drivers/libtmp_rte_mempool_ring.a 00:00:56.932 [455/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:00:57.211 [456/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:00:57.211 [457/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:00:57.211 [458/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:00:57.211 [459/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:00:57.211 [460/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:00:57.211 [461/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:00:57.211 [462/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:57.211 [463/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:00:57.211 [464/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:00:57.211 [465/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:00:57.211 [466/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:00:57.211 [467/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:00:57.211 [468/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:00:57.211 [469/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:00:57.211 [470/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:00:57.211 [471/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:00:57.211 [472/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:00:57.211 [473/707] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:00:57.211 [474/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:00:57.211 [475/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:00:57.211 [476/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:00:57.211 [477/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:00:57.211 [478/707] Linking static target lib/librte_pdcp.a 00:00:57.211 [479/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:00:57.211 [480/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:00:57.211 [481/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:00:57.211 [482/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:00:57.211 [483/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:00:57.211 [484/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:00:57.211 [485/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:00:57.211 [486/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:00:57.469 [487/707] Linking static target lib/librte_node.a 00:00:57.469 [488/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:00:57.469 [489/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:00:57.469 [490/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:00:57.469 [491/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:00:57.469 [492/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:00:57.469 [493/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:00:57.469 [494/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:00:57.469 [495/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:00:57.469 [496/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:00:57.469 [497/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:00:57.469 [498/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:00:57.469 [499/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:00:57.469 [500/707] Linking static target lib/librte_hash.a 00:00:57.469 [501/707] Linking static target drivers/librte_mempool_ring.a 00:00:57.469 [502/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:00:57.469 [503/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:00:57.469 [504/707] Linking static target lib/librte_port.a 00:00:57.469 [505/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:00:57.469 [506/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:00:57.469 [507/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:00:57.469 [508/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:00:57.469 [509/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:00:57.469 [510/707] Generating drivers/rte_bus_pci.sym_chk with a custom 
command (wrapped by meson to capture output) 00:00:57.469 [511/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:00:57.469 [512/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:00:57.469 [513/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:00:57.469 [514/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:00:57.469 [515/707] Linking static target lib/acl/libavx2_tmp.a 00:00:57.469 [516/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:00:57.469 [517/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:00:57.469 [518/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:00:57.469 [519/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:00:57.469 [520/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:00:57.469 [521/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:00:57.469 [522/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:00:57.469 [523/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:00:57.727 [524/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:00:57.727 [525/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:00:57.727 [526/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:00:57.727 [527/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:00:57.727 [528/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:00:57.727 [529/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:00:57.727 [530/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:00:57.727 [531/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:00:57.727 [532/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:00:57.727 [533/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:00:57.727 [534/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:00:57.727 [535/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:00:57.727 [536/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:00:57.727 [537/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:00:57.727 [538/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:00:57.727 [539/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:00:57.727 [540/707] Linking static target lib/librte_acl.a 00:00:57.727 [541/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:00:57.727 [542/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:00:57.727 [543/707] Linking static target lib/librte_eventdev.a 00:00:57.727 [544/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:00:57.727 [545/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:00:57.727 [546/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:00:57.727 [547/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:00:57.984 [548/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:00:57.984 [549/707] Compiling C object 
app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:00:57.984 [550/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:00:57.984 [551/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:00:57.984 [552/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:00:57.984 [553/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:00:57.984 [554/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:00:57.984 [555/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:00:57.984 [556/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:00:57.984 [557/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:00:57.984 [558/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:00:57.984 [559/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:00:57.984 [560/707] Linking static target drivers/net/i40e/base/libi40e_base.a 00:00:57.984 [561/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:00:57.984 [562/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:00:58.242 [563/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:00:58.242 [564/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:00:58.242 [565/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:00:58.242 [566/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:00:58.242 [567/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:00:58.242 [568/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:00:58.242 [569/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:00:58.499 [570/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:00:58.499 [571/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:58.499 [572/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:00:58.499 [573/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:00:58.757 [574/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:00:58.757 [575/707] Linking static target lib/librte_ethdev.a 00:00:59.014 [576/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:00:59.014 [577/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:00:59.272 [578/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:00:59.530 [579/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:00:59.530 [580/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:00.096 [581/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:00.096 [582/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:00.097 [583/707] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:00.356 [584/707] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:00.356 [585/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:00.356 [586/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:00.356 [587/707] 
Linking static target drivers/librte_net_i40e.a 00:01:00.616 [588/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:01.182 [589/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:01.440 [590/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.440 [591/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.005 [592/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:07.273 [593/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.273 [594/707] Linking target lib/librte_eal.so.24.0 00:01:07.273 [595/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:07.273 [596/707] Linking target lib/librte_timer.so.24.0 00:01:07.273 [597/707] Linking target lib/librte_pci.so.24.0 00:01:07.273 [598/707] Linking target lib/librte_dmadev.so.24.0 00:01:07.273 [599/707] Linking target lib/librte_ring.so.24.0 00:01:07.273 [600/707] Linking target lib/librte_meter.so.24.0 00:01:07.273 [601/707] Linking target lib/librte_cfgfile.so.24.0 00:01:07.273 [602/707] Linking target lib/librte_jobstats.so.24.0 00:01:07.273 [603/707] Linking target lib/librte_stack.so.24.0 00:01:07.273 [604/707] Linking target lib/librte_rawdev.so.24.0 00:01:07.273 [605/707] Linking target drivers/librte_bus_vdev.so.24.0 00:01:07.273 [606/707] Linking target lib/librte_acl.so.24.0 00:01:07.273 [607/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:07.273 [608/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:07.273 [609/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:07.273 [610/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:07.273 [611/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:01:07.273 [612/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:07.273 [613/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:01:07.273 [614/707] Linking target drivers/librte_bus_pci.so.24.0 00:01:07.273 [615/707] Linking target lib/librte_rcu.so.24.0 00:01:07.273 [616/707] Linking target lib/librte_mempool.so.24.0 00:01:07.273 [617/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:01:07.273 [618/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:07.273 [619/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:07.273 [620/707] Linking target lib/librte_rib.so.24.0 00:01:07.273 [621/707] Linking target drivers/librte_mempool_ring.so.24.0 00:01:07.273 [622/707] Linking target lib/librte_mbuf.so.24.0 00:01:07.273 [623/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:01:07.273 [624/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:07.273 [625/707] Linking target lib/librte_fib.so.24.0 00:01:07.531 [626/707] Linking target lib/librte_distributor.so.24.0 00:01:07.531 [627/707] Linking target lib/librte_net.so.24.0 00:01:07.531 [628/707] Linking target lib/librte_compressdev.so.24.0 00:01:07.531 [629/707] Linking target lib/librte_bbdev.so.24.0 00:01:07.531 [630/707] Linking target lib/librte_reorder.so.24.0 00:01:07.531 
[631/707] Linking target lib/librte_gpudev.so.24.0 00:01:07.531 [632/707] Linking target lib/librte_regexdev.so.24.0 00:01:07.531 [633/707] Linking target lib/librte_mldev.so.24.0 00:01:07.531 [634/707] Linking target lib/librte_sched.so.24.0 00:01:07.531 [635/707] Linking target lib/librte_cryptodev.so.24.0 00:01:07.531 [636/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.531 [637/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:07.531 [638/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:07.531 [639/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:01:07.531 [640/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:01:07.531 [641/707] Linking target lib/librte_hash.so.24.0 00:01:07.531 [642/707] Linking target lib/librte_cmdline.so.24.0 00:01:07.531 [643/707] Linking target lib/librte_ethdev.so.24.0 00:01:07.531 [644/707] Linking target lib/librte_security.so.24.0 00:01:07.789 [645/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:01:07.789 [646/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:07.789 [647/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:07.789 [648/707] Linking static target lib/librte_pipeline.a 00:01:07.789 [649/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:01:07.789 [650/707] Linking target lib/librte_efd.so.24.0 00:01:07.789 [651/707] Linking target lib/librte_gro.so.24.0 00:01:07.789 [652/707] Linking target lib/librte_pcapng.so.24.0 00:01:07.789 [653/707] Linking target lib/librte_lpm.so.24.0 00:01:07.789 [654/707] Linking target lib/librte_metrics.so.24.0 00:01:07.789 [655/707] Linking target lib/librte_member.so.24.0 00:01:07.789 [656/707] Linking target lib/librte_power.so.24.0 00:01:07.789 [657/707] Linking target lib/librte_eventdev.so.24.0 00:01:07.789 [658/707] Linking target lib/librte_gso.so.24.0 00:01:07.789 [659/707] Linking target lib/librte_bpf.so.24.0 00:01:07.789 [660/707] Linking target lib/librte_ip_frag.so.24.0 00:01:07.789 [661/707] Linking target lib/librte_pdcp.so.24.0 00:01:07.789 [662/707] Linking target lib/librte_ipsec.so.24.0 00:01:07.789 [663/707] Linking target drivers/librte_net_i40e.so.24.0 00:01:08.047 [664/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:01:08.047 [665/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:01:08.047 [666/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:01:08.047 [667/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:01:08.047 [668/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:01:08.047 [669/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:01:08.048 [670/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:01:08.048 [671/707] Linking target lib/librte_bitratestats.so.24.0 00:01:08.048 [672/707] Linking target lib/librte_latencystats.so.24.0 00:01:08.048 [673/707] Linking target lib/librte_dispatcher.so.24.0 00:01:08.048 [674/707] Linking target lib/librte_graph.so.24.0 00:01:08.048 [675/707] Linking target lib/librte_pdump.so.24.0 00:01:08.048 [676/707] Linking target 
lib/librte_port.so.24.0 00:01:08.048 [677/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:01:08.048 [678/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:01:08.306 [679/707] Linking target lib/librte_node.so.24.0 00:01:08.306 [680/707] Linking target lib/librte_table.so.24.0 00:01:08.306 [681/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:01:08.306 [682/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:08.306 [683/707] Linking static target lib/librte_vhost.a 00:01:08.874 [684/707] Linking target app/dpdk-pdump 00:01:08.874 [685/707] Linking target app/dpdk-test-compress-perf 00:01:08.874 [686/707] Linking target app/dpdk-dumpcap 00:01:08.874 [687/707] Linking target app/dpdk-test-dma-perf 00:01:08.874 [688/707] Linking target app/dpdk-test-acl 00:01:08.874 [689/707] Linking target app/dpdk-proc-info 00:01:08.874 [690/707] Linking target app/dpdk-graph 00:01:08.874 [691/707] Linking target app/dpdk-test-mldev 00:01:08.874 [692/707] Linking target app/dpdk-test-gpudev 00:01:08.874 [693/707] Linking target app/dpdk-test-regex 00:01:08.874 [694/707] Linking target app/dpdk-test-fib 00:01:08.874 [695/707] Linking target app/dpdk-test-sad 00:01:08.874 [696/707] Linking target app/dpdk-test-cmdline 00:01:08.874 [697/707] Linking target app/dpdk-test-security-perf 00:01:08.874 [698/707] Linking target app/dpdk-test-pipeline 00:01:08.874 [699/707] Linking target app/dpdk-test-crypto-perf 00:01:08.874 [700/707] Linking target app/dpdk-test-flow-perf 00:01:08.874 [701/707] Linking target app/dpdk-test-bbdev 00:01:08.874 [702/707] Linking target app/dpdk-test-eventdev 00:01:08.874 [703/707] Linking target app/dpdk-testpmd 00:01:10.778 [704/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.778 [705/707] Linking target lib/librte_vhost.so.24.0 00:01:13.308 [706/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.309 [707/707] Linking target lib/librte_pipeline.so.24.0 00:01:13.309 11:24:42 -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp -j112 install 00:01:13.309 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp' 00:01:13.309 [0/1] Installing files. 
00:01:13.572 Installing subdir /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:13.572 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:01:13.573 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:13.573 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/init.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/main.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:13.573 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:13.573 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:13.574 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:13.574 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:13.574 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipv4_multicast/main.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:13.575 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:13.575 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:13.575 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 
00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:13.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:13.576 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.576 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.576 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.576 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.576 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.576 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.576 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.576 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.576 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.576 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.576 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.576 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.576 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.576 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.576 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.576 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.576 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.576 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.576 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.576 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.576 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.576 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.576 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.576 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing 
lib/librte_metrics.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_gro.so.24.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 
00:01:13.577 Installing lib/librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:13.577 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:13.577 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:13.577 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:13.577 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:13.577 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:13.577 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:13.842 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:13.842 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:13.842 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:13.842 Installing app/dpdk-test-bbdev to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:13.842 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:13.842 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:13.842 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:13.842 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:13.842 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:13.842 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:13.842 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:13.842 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:13.842 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:13.842 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:13.842 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:13.842 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:13.842 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:13.842 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 
Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 
Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pci/rte_pci.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/distributor/rte_distributor.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 
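The header installs above populate dpdk/build/include, and the libdpdk.pc file copied a few lines below is what lets downstream builds (including SPDK's configure later in this run) locate them. A minimal sketch of consuming the installed tree through pkg-config, using only paths that appear in this log; the hello.c consumer is hypothetical:

    # Point pkg-config at the DPDK tree this job just installed
    export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig
    pkg-config --cflags libdpdk   # include path under dpdk/build/include
    pkg-config --libs libdpdk     # -L dpdk/build/lib plus the librte_* libraries
    # Hypothetical single-file consumer built against the installed headers/libs
    cc hello.c $(pkg-config --cflags --libs libdpdk) -o hello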
00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig 00:01:13.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig 00:01:13.845 Installing symlink pointing to librte_log.so.24.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_log.so.24 00:01:13.845 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_log.so 00:01:13.845 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:01:13.845 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:01:13.845 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:01:13.845 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:01:13.845 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:01:13.845 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eal.so 00:01:13.845 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:01:13.845 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ring.so 00:01:13.845 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:01:13.845 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rcu.so 00:01:13.845 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:01:13.845 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mempool.so 00:01:13.845 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:01:13.845 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:01:13.845 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_net.so.24 00:01:13.845 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_net.so 00:01:13.845 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:01:13.845 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_meter.so 00:01:13.845 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:01:13.845 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:01:13.845 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:01:13.845 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pci.so 00:01:13.845 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:01:13.845 Installing symlink pointing to librte_cmdline.so.24 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:01:13.845 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:01:13.845 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_metrics.so 00:01:13.845 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:01:13.845 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_hash.so 00:01:13.845 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:01:13.845 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_timer.so 00:01:13.845 Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:01:13.846 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_acl.so 00:01:13.846 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:01:13.846 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:01:13.846 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:01:13.846 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:01:13.846 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:01:13.846 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bpf.so 00:01:13.846 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:01:13.846 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:01:13.846 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:01:13.846 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:01:13.846 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:01:13.846 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:01:13.846 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:01:13.846 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_distributor.so 00:01:13.846 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:01:13.846 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:01:13.846 
Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:01:13.846 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_efd.so 00:01:13.846 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:01:13.846 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:01:13.846 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:01:13.846 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:01:13.846 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:01:13.846 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:01:13.846 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:01:13.846 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gro.so 00:01:13.846 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:01:13.846 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gso.so 00:01:13.846 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:01:13.846 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:01:13.846 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:01:13.846 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:01:13.846 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:01:13.846 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:01:13.846 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:01:13.846 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:01:13.846 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:01:13.846 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:01:13.846 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:01:13.846 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:01:13.846 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:01:13.846 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:01:13.846 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:01:13.846 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:01:13.846 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:01:13.846 Installing symlink pointing to librte_latencystats.so.24 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:01:13.846 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:01:13.846 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_lpm.so 00:01:13.846 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_member.so.24 00:01:13.846 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_member.so 00:01:13.846 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:01:13.846 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:01:13.846 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_power.so.24 00:01:13.846 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_power.so 00:01:13.846 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:01:13.846 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:01:13.846 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:01:13.846 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:01:13.846 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:01:13.846 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mldev.so 00:01:13.846 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:01:13.846 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rib.so 00:01:13.846 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:01:13.846 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_reorder.so 00:01:13.846 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:01:13.846 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_sched.so 00:01:13.846 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_security.so.24 00:01:13.846 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_security.so 00:01:13.846 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:01:13.846 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_stack.so 00:01:13.846 Installing symlink pointing to librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:01:13.846 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_vhost.so 00:01:13.846 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:01:13.846 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:01:13.846 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:01:13.846 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:01:13.846 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:01:13.846 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_fib.so 00:01:13.846 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_port.so.24 00:01:13.846 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_port.so 00:01:13.846 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:01:13.846 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdump.so 00:01:13.846 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_table.so.24 00:01:13.846 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_table.so 00:01:13.846 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:01:13.847 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:01:13.847 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:01:13.847 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_graph.so 00:01:13.847 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_node.so.24 00:01:13.847 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_node.so 00:01:13.847 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:01:13.847 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:01:13.847 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:01:13.847 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:01:13.847 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 
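Each library above is installed as a three-name chain: the real object carries the full version (librte_X.so.24.0), the ABI-major name (librte_X.so.24) is the ELF soname the runtime loader resolves, and the bare .so is the name used at link time. The './librte_bus_pci.so' -> 'dpdk/pmds-24.0/...' moves give the driver PMDs the same chain under the plugin directory that the symlink-drivers-solibs.sh step just below finishes wiring up. A sketch of checking one chain by hand; the readelf output in the comment is illustrative:

    cd /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
    ls -l librte_eal.so librte_eal.so.24        # both should be symlinks
    readelf -d librte_eal.so.24.0 | grep SONAME # expected: Library soname: [librte_eal.so.24]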
00:01:13.847 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:01:13.847 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:01:13.847 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:01:13.847 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:01:13.847 11:24:43 -- common/autobuild_common.sh@189 -- $ uname -s 00:01:13.847 11:24:43 -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:01:13.847 11:24:43 -- common/autobuild_common.sh@200 -- $ cat 00:01:13.847 11:24:43 -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:13.847 00:01:13.847 real 0m27.324s 00:01:13.847 user 8m3.090s 00:01:13.847 sys 2m38.391s 00:01:13.847 11:24:43 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:13.847 11:24:43 -- common/autotest_common.sh@10 -- $ set +x 00:01:13.847 ************************************ 00:01:13.847 END TEST build_native_dpdk 00:01:13.847 ************************************ 00:01:14.148 11:24:43 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:14.148 11:24:43 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:14.148 11:24:43 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:14.148 11:24:43 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:14.148 11:24:43 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:14.148 11:24:43 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:14.148 11:24:43 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:14.148 11:24:43 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build --with-shared 00:01:14.148 Using /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:01:14.407 DPDK libraries: /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:14.407 DPDK includes: //var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:14.407 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:01:14.666 Using 'verbs' RDMA provider 00:01:30.141 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:01:42.402 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:42.402 Creating mk/config.mk...done. 00:01:42.402 Creating mk/cc.flags.mk...done. 00:01:42.402 Type 'make' to build. 00:01:42.402 11:25:11 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:01:42.402 11:25:11 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:42.402 11:25:11 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:42.402 11:25:11 -- common/autotest_common.sh@10 -- $ set +x 00:01:42.402 ************************************ 00:01:42.402 START TEST make 00:01:42.402 ************************************ 00:01:42.402 11:25:11 -- common/autotest_common.sh@1104 -- $ make -j112 00:01:42.402 make[1]: Nothing to be done for 'all'. 
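This block closes out the DPDK build (END TEST build_native_dpdk) and hands off to SPDK's own configure and make. Condensed into the equivalent manual steps, with every flag copied verbatim from the log; the -j112 job count simply matches this runner and is not significant:

    cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk \
        --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build --with-shared
    make -j112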
00:01:52.385 CC lib/log/log.o 00:01:52.385 CC lib/log/log_flags.o 00:01:52.385 CC lib/log/log_deprecated.o 00:01:52.385 CC lib/ut_mock/mock.o 00:01:52.385 CC lib/ut/ut.o 00:01:52.385 LIB libspdk_ut_mock.a 00:01:52.385 LIB libspdk_log.a 00:01:52.385 SO libspdk_ut_mock.so.5.0 00:01:52.385 LIB libspdk_ut.a 00:01:52.385 SO libspdk_log.so.6.1 00:01:52.385 SO libspdk_ut.so.1.0 00:01:52.385 SYMLINK libspdk_ut_mock.so 00:01:52.385 SYMLINK libspdk_log.so 00:01:52.385 SYMLINK libspdk_ut.so 00:01:52.385 CXX lib/trace_parser/trace.o 00:01:52.385 CC lib/dma/dma.o 00:01:52.385 CC lib/ioat/ioat.o 00:01:52.385 CC lib/util/base64.o 00:01:52.385 CC lib/util/bit_array.o 00:01:52.385 CC lib/util/cpuset.o 00:01:52.385 CC lib/util/crc16.o 00:01:52.385 CC lib/util/crc32.o 00:01:52.385 CC lib/util/crc32c.o 00:01:52.385 CC lib/util/crc32_ieee.o 00:01:52.385 CC lib/util/crc64.o 00:01:52.385 CC lib/util/dif.o 00:01:52.385 CC lib/util/hexlify.o 00:01:52.385 CC lib/util/fd.o 00:01:52.385 CC lib/util/file.o 00:01:52.385 CC lib/util/iov.o 00:01:52.385 CC lib/util/math.o 00:01:52.385 CC lib/util/pipe.o 00:01:52.385 CC lib/util/strerror_tls.o 00:01:52.385 CC lib/util/string.o 00:01:52.385 CC lib/util/uuid.o 00:01:52.385 CC lib/util/fd_group.o 00:01:52.385 CC lib/util/xor.o 00:01:52.385 CC lib/util/zipf.o 00:01:52.658 CC lib/vfio_user/host/vfio_user_pci.o 00:01:52.658 CC lib/vfio_user/host/vfio_user.o 00:01:52.658 LIB libspdk_dma.a 00:01:52.658 SO libspdk_dma.so.3.0 00:01:52.658 SYMLINK libspdk_dma.so 00:01:52.658 LIB libspdk_ioat.a 00:01:52.658 SO libspdk_ioat.so.6.0 00:01:52.658 LIB libspdk_vfio_user.a 00:01:52.658 SYMLINK libspdk_ioat.so 00:01:52.658 SO libspdk_vfio_user.so.4.0 00:01:52.916 SYMLINK libspdk_vfio_user.so 00:01:52.916 LIB libspdk_util.a 00:01:52.916 SO libspdk_util.so.8.0 00:01:53.175 SYMLINK libspdk_util.so 00:01:53.175 LIB libspdk_trace_parser.a 00:01:53.175 SO libspdk_trace_parser.so.4.0 00:01:53.175 SYMLINK libspdk_trace_parser.so 00:01:53.175 CC lib/rdma/common.o 00:01:53.175 CC lib/json/json_parse.o 00:01:53.175 CC lib/idxd/idxd.o 00:01:53.175 CC lib/rdma/rdma_verbs.o 00:01:53.175 CC lib/json/json_util.o 00:01:53.175 CC lib/json/json_write.o 00:01:53.175 CC lib/idxd/idxd_user.o 00:01:53.175 CC lib/idxd/idxd_kernel.o 00:01:53.433 CC lib/vmd/vmd.o 00:01:53.433 CC lib/conf/conf.o 00:01:53.433 CC lib/vmd/led.o 00:01:53.433 CC lib/env_dpdk/env.o 00:01:53.433 CC lib/env_dpdk/memory.o 00:01:53.433 CC lib/env_dpdk/pci.o 00:01:53.433 CC lib/env_dpdk/init.o 00:01:53.433 CC lib/env_dpdk/threads.o 00:01:53.433 CC lib/env_dpdk/pci_vmd.o 00:01:53.433 CC lib/env_dpdk/pci_ioat.o 00:01:53.433 CC lib/env_dpdk/pci_virtio.o 00:01:53.433 CC lib/env_dpdk/pci_idxd.o 00:01:53.433 CC lib/env_dpdk/pci_event.o 00:01:53.433 CC lib/env_dpdk/sigbus_handler.o 00:01:53.433 CC lib/env_dpdk/pci_dpdk.o 00:01:53.433 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:53.433 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:53.433 LIB libspdk_rdma.a 00:01:53.433 LIB libspdk_conf.a 00:01:53.692 LIB libspdk_json.a 00:01:53.692 SO libspdk_rdma.so.5.0 00:01:53.692 SO libspdk_conf.so.5.0 00:01:53.692 SO libspdk_json.so.5.1 00:01:53.692 SYMLINK libspdk_rdma.so 00:01:53.692 SYMLINK libspdk_conf.so 00:01:53.692 SYMLINK libspdk_json.so 00:01:53.692 LIB libspdk_idxd.a 00:01:53.692 SO libspdk_idxd.so.11.0 00:01:53.692 LIB libspdk_vmd.a 00:01:53.953 SYMLINK libspdk_idxd.so 00:01:53.953 SO libspdk_vmd.so.5.0 00:01:53.953 SYMLINK libspdk_vmd.so 00:01:53.953 CC lib/jsonrpc/jsonrpc_server.o 00:01:53.953 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:53.953 CC 
lib/jsonrpc/jsonrpc_server_tcp.o 00:01:53.953 CC lib/jsonrpc/jsonrpc_client.o 00:01:54.241 LIB libspdk_jsonrpc.a 00:01:54.241 SO libspdk_jsonrpc.so.5.1 00:01:54.241 LIB libspdk_env_dpdk.a 00:01:54.241 SYMLINK libspdk_jsonrpc.so 00:01:54.241 SO libspdk_env_dpdk.so.13.0 00:01:54.499 SYMLINK libspdk_env_dpdk.so 00:01:54.499 CC lib/rpc/rpc.o 00:01:54.758 LIB libspdk_rpc.a 00:01:54.758 SO libspdk_rpc.so.5.0 00:01:54.758 SYMLINK libspdk_rpc.so 00:01:55.015 CC lib/trace/trace.o 00:01:55.015 CC lib/notify/notify_rpc.o 00:01:55.015 CC lib/notify/notify.o 00:01:55.015 CC lib/trace/trace_flags.o 00:01:55.015 CC lib/trace/trace_rpc.o 00:01:55.015 CC lib/sock/sock.o 00:01:55.015 CC lib/sock/sock_rpc.o 00:01:55.273 LIB libspdk_notify.a 00:01:55.273 SO libspdk_notify.so.5.0 00:01:55.273 LIB libspdk_trace.a 00:01:55.273 SO libspdk_trace.so.9.0 00:01:55.273 SYMLINK libspdk_notify.so 00:01:55.273 SYMLINK libspdk_trace.so 00:01:55.273 LIB libspdk_sock.a 00:01:55.273 SO libspdk_sock.so.8.0 00:01:55.531 SYMLINK libspdk_sock.so 00:01:55.531 CC lib/thread/thread.o 00:01:55.531 CC lib/thread/iobuf.o 00:01:55.789 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:55.790 CC lib/nvme/nvme_ctrlr.o 00:01:55.790 CC lib/nvme/nvme_fabric.o 00:01:55.790 CC lib/nvme/nvme_ns_cmd.o 00:01:55.790 CC lib/nvme/nvme_pcie.o 00:01:55.790 CC lib/nvme/nvme_ns.o 00:01:55.790 CC lib/nvme/nvme_qpair.o 00:01:55.790 CC lib/nvme/nvme_pcie_common.o 00:01:55.790 CC lib/nvme/nvme.o 00:01:55.790 CC lib/nvme/nvme_quirks.o 00:01:55.790 CC lib/nvme/nvme_transport.o 00:01:55.790 CC lib/nvme/nvme_discovery.o 00:01:55.790 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:55.790 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:55.790 CC lib/nvme/nvme_tcp.o 00:01:55.790 CC lib/nvme/nvme_opal.o 00:01:55.790 CC lib/nvme/nvme_io_msg.o 00:01:55.790 CC lib/nvme/nvme_poll_group.o 00:01:55.790 CC lib/nvme/nvme_zns.o 00:01:55.790 CC lib/nvme/nvme_cuse.o 00:01:55.790 CC lib/nvme/nvme_vfio_user.o 00:01:55.790 CC lib/nvme/nvme_rdma.o 00:01:56.721 LIB libspdk_thread.a 00:01:56.721 SO libspdk_thread.so.9.0 00:01:56.721 SYMLINK libspdk_thread.so 00:01:56.978 CC lib/blob/blobstore.o 00:01:56.978 CC lib/blob/request.o 00:01:56.978 CC lib/blob/zeroes.o 00:01:56.978 CC lib/blob/blob_bs_dev.o 00:01:56.978 CC lib/accel/accel_rpc.o 00:01:56.978 CC lib/accel/accel.o 00:01:56.978 CC lib/accel/accel_sw.o 00:01:56.978 CC lib/virtio/virtio.o 00:01:56.978 CC lib/virtio/virtio_vfio_user.o 00:01:56.978 CC lib/virtio/virtio_vhost_user.o 00:01:56.978 CC lib/virtio/virtio_pci.o 00:01:56.978 CC lib/init/subsystem_rpc.o 00:01:56.978 CC lib/init/json_config.o 00:01:56.978 CC lib/init/subsystem.o 00:01:56.978 CC lib/init/rpc.o 00:01:57.235 LIB libspdk_nvme.a 00:01:57.235 LIB libspdk_init.a 00:01:57.235 LIB libspdk_virtio.a 00:01:57.235 SO libspdk_init.so.4.0 00:01:57.235 SO libspdk_nvme.so.12.0 00:01:57.235 SO libspdk_virtio.so.6.0 00:01:57.235 SYMLINK libspdk_init.so 00:01:57.492 SYMLINK libspdk_virtio.so 00:01:57.492 SYMLINK libspdk_nvme.so 00:01:57.492 CC lib/event/app.o 00:01:57.492 CC lib/event/reactor.o 00:01:57.492 CC lib/event/log_rpc.o 00:01:57.492 CC lib/event/scheduler_static.o 00:01:57.492 CC lib/event/app_rpc.o 00:01:57.750 LIB libspdk_accel.a 00:01:57.750 SO libspdk_accel.so.14.0 00:01:57.750 SYMLINK libspdk_accel.so 00:01:58.008 LIB libspdk_event.a 00:01:58.008 SO libspdk_event.so.12.0 00:01:58.008 CC lib/bdev/bdev_rpc.o 00:01:58.008 CC lib/bdev/bdev_zone.o 00:01:58.008 CC lib/bdev/part.o 00:01:58.008 CC lib/bdev/bdev.o 00:01:58.008 CC lib/bdev/scsi_nvme.o 00:01:58.008 SYMLINK libspdk_event.so 00:01:58.943 
LIB libspdk_blob.a 00:01:58.943 SO libspdk_blob.so.10.1 00:01:58.943 SYMLINK libspdk_blob.so 00:01:59.201 CC lib/blobfs/blobfs.o 00:01:59.201 CC lib/blobfs/tree.o 00:01:59.201 CC lib/lvol/lvol.o 00:01:59.769 LIB libspdk_bdev.a 00:01:59.769 LIB libspdk_blobfs.a 00:01:59.769 SO libspdk_blobfs.so.9.0 00:01:59.769 SO libspdk_bdev.so.14.0 00:01:59.769 LIB libspdk_lvol.a 00:02:00.028 SO libspdk_lvol.so.9.1 00:02:00.028 SYMLINK libspdk_blobfs.so 00:02:00.028 SYMLINK libspdk_bdev.so 00:02:00.028 SYMLINK libspdk_lvol.so 00:02:00.288 CC lib/nbd/nbd.o 00:02:00.288 CC lib/ublk/ublk.o 00:02:00.288 CC lib/nbd/nbd_rpc.o 00:02:00.288 CC lib/ublk/ublk_rpc.o 00:02:00.288 CC lib/nvmf/ctrlr.o 00:02:00.288 CC lib/nvmf/ctrlr_discovery.o 00:02:00.288 CC lib/nvmf/ctrlr_bdev.o 00:02:00.288 CC lib/nvmf/nvmf.o 00:02:00.288 CC lib/nvmf/subsystem.o 00:02:00.288 CC lib/nvmf/nvmf_rpc.o 00:02:00.288 CC lib/nvmf/rdma.o 00:02:00.288 CC lib/nvmf/transport.o 00:02:00.288 CC lib/nvmf/tcp.o 00:02:00.288 CC lib/ftl/ftl_core.o 00:02:00.288 CC lib/ftl/ftl_layout.o 00:02:00.288 CC lib/scsi/dev.o 00:02:00.288 CC lib/scsi/scsi.o 00:02:00.288 CC lib/ftl/ftl_init.o 00:02:00.288 CC lib/scsi/lun.o 00:02:00.288 CC lib/scsi/scsi_bdev.o 00:02:00.288 CC lib/scsi/port.o 00:02:00.288 CC lib/scsi/scsi_pr.o 00:02:00.288 CC lib/ftl/ftl_debug.o 00:02:00.288 CC lib/ftl/ftl_io.o 00:02:00.288 CC lib/ftl/ftl_l2p.o 00:02:00.288 CC lib/scsi/task.o 00:02:00.288 CC lib/ftl/ftl_sb.o 00:02:00.288 CC lib/scsi/scsi_rpc.o 00:02:00.288 CC lib/ftl/ftl_l2p_flat.o 00:02:00.288 CC lib/ftl/ftl_nv_cache.o 00:02:00.288 CC lib/ftl/ftl_band.o 00:02:00.288 CC lib/ftl/ftl_band_ops.o 00:02:00.288 CC lib/ftl/ftl_writer.o 00:02:00.288 CC lib/ftl/ftl_rq.o 00:02:00.288 CC lib/ftl/ftl_reloc.o 00:02:00.288 CC lib/ftl/ftl_l2p_cache.o 00:02:00.288 CC lib/ftl/ftl_p2l.o 00:02:00.288 CC lib/ftl/mngt/ftl_mngt.o 00:02:00.288 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:00.288 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:00.288 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:00.288 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:00.288 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:00.288 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:00.288 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:00.288 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:00.288 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:00.288 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:00.288 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:00.288 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:00.288 CC lib/ftl/utils/ftl_conf.o 00:02:00.288 CC lib/ftl/utils/ftl_md.o 00:02:00.288 CC lib/ftl/utils/ftl_mempool.o 00:02:00.288 CC lib/ftl/utils/ftl_bitmap.o 00:02:00.288 CC lib/ftl/utils/ftl_property.o 00:02:00.288 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:00.288 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:00.288 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:00.288 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:00.288 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:00.288 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:00.288 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:00.288 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:00.288 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:00.288 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:00.288 CC lib/ftl/base/ftl_base_dev.o 00:02:00.288 CC lib/ftl/ftl_trace.o 00:02:00.288 CC lib/ftl/base/ftl_base_bdev.o 00:02:00.546 LIB libspdk_nbd.a 00:02:00.546 SO libspdk_nbd.so.6.0 00:02:00.805 SYMLINK libspdk_nbd.so 00:02:00.805 LIB libspdk_scsi.a 00:02:00.805 SO libspdk_scsi.so.8.0 00:02:00.805 LIB libspdk_ublk.a 00:02:00.805 SO libspdk_ublk.so.2.0 00:02:00.805 SYMLINK libspdk_scsi.so 00:02:00.805 SYMLINK libspdk_ublk.so 00:02:01.064 
CC lib/iscsi/init_grp.o 00:02:01.064 CC lib/iscsi/conn.o 00:02:01.064 CC lib/iscsi/iscsi.o 00:02:01.064 CC lib/iscsi/portal_grp.o 00:02:01.064 CC lib/iscsi/md5.o 00:02:01.064 CC lib/iscsi/tgt_node.o 00:02:01.064 CC lib/iscsi/iscsi_subsystem.o 00:02:01.064 CC lib/iscsi/param.o 00:02:01.064 CC lib/iscsi/iscsi_rpc.o 00:02:01.064 CC lib/vhost/vhost.o 00:02:01.064 CC lib/iscsi/task.o 00:02:01.064 CC lib/vhost/vhost_rpc.o 00:02:01.064 CC lib/vhost/vhost_scsi.o 00:02:01.064 CC lib/vhost/vhost_blk.o 00:02:01.064 CC lib/vhost/rte_vhost_user.o 00:02:01.064 LIB libspdk_ftl.a 00:02:01.322 SO libspdk_ftl.so.8.0 00:02:01.579 SYMLINK libspdk_ftl.so 00:02:01.836 LIB libspdk_nvmf.a 00:02:01.836 LIB libspdk_vhost.a 00:02:01.836 SO libspdk_nvmf.so.17.0 00:02:01.836 SO libspdk_vhost.so.7.1 00:02:02.093 SYMLINK libspdk_vhost.so 00:02:02.093 LIB libspdk_iscsi.a 00:02:02.093 SYMLINK libspdk_nvmf.so 00:02:02.093 SO libspdk_iscsi.so.7.0 00:02:02.350 SYMLINK libspdk_iscsi.so 00:02:02.608 CC module/env_dpdk/env_dpdk_rpc.o 00:02:02.608 CC module/accel/dsa/accel_dsa.o 00:02:02.608 CC module/accel/error/accel_error.o 00:02:02.608 CC module/accel/dsa/accel_dsa_rpc.o 00:02:02.608 CC module/accel/error/accel_error_rpc.o 00:02:02.608 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:02.608 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:02.608 CC module/scheduler/gscheduler/gscheduler.o 00:02:02.608 CC module/accel/iaa/accel_iaa.o 00:02:02.608 CC module/accel/iaa/accel_iaa_rpc.o 00:02:02.608 CC module/accel/ioat/accel_ioat.o 00:02:02.608 CC module/accel/ioat/accel_ioat_rpc.o 00:02:02.865 CC module/blob/bdev/blob_bdev.o 00:02:02.865 CC module/sock/posix/posix.o 00:02:02.865 LIB libspdk_env_dpdk_rpc.a 00:02:02.865 SO libspdk_env_dpdk_rpc.so.5.0 00:02:02.865 SYMLINK libspdk_env_dpdk_rpc.so 00:02:02.865 LIB libspdk_scheduler_gscheduler.a 00:02:02.865 LIB libspdk_accel_error.a 00:02:02.865 LIB libspdk_scheduler_dpdk_governor.a 00:02:02.865 LIB libspdk_scheduler_dynamic.a 00:02:02.865 LIB libspdk_accel_dsa.a 00:02:02.865 SO libspdk_scheduler_gscheduler.so.3.0 00:02:02.865 LIB libspdk_accel_iaa.a 00:02:02.865 LIB libspdk_accel_ioat.a 00:02:02.865 SO libspdk_accel_error.so.1.0 00:02:02.865 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:02.865 SO libspdk_accel_dsa.so.4.0 00:02:02.865 SO libspdk_scheduler_dynamic.so.3.0 00:02:02.865 SO libspdk_accel_iaa.so.2.0 00:02:02.865 SO libspdk_accel_ioat.so.5.0 00:02:02.865 SYMLINK libspdk_scheduler_gscheduler.so 00:02:02.865 LIB libspdk_blob_bdev.a 00:02:02.865 SYMLINK libspdk_accel_error.so 00:02:02.865 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:03.124 SYMLINK libspdk_accel_dsa.so 00:02:03.124 SYMLINK libspdk_scheduler_dynamic.so 00:02:03.124 SYMLINK libspdk_accel_ioat.so 00:02:03.124 SYMLINK libspdk_accel_iaa.so 00:02:03.124 SO libspdk_blob_bdev.so.10.1 00:02:03.124 SYMLINK libspdk_blob_bdev.so 00:02:03.381 LIB libspdk_sock_posix.a 00:02:03.381 SO libspdk_sock_posix.so.5.0 00:02:03.381 SYMLINK libspdk_sock_posix.so 00:02:03.381 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:03.381 CC module/blobfs/bdev/blobfs_bdev.o 00:02:03.381 CC module/bdev/raid/bdev_raid.o 00:02:03.381 CC module/bdev/raid/bdev_raid_sb.o 00:02:03.381 CC module/bdev/raid/raid0.o 00:02:03.381 CC module/bdev/raid/raid1.o 00:02:03.381 CC module/bdev/raid/bdev_raid_rpc.o 00:02:03.381 CC module/bdev/raid/concat.o 00:02:03.381 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:03.381 CC module/bdev/malloc/bdev_malloc.o 00:02:03.381 CC module/bdev/ftl/bdev_ftl.o 00:02:03.381 CC module/bdev/ftl/bdev_ftl_rpc.o 
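Reading the make output above: CC lines are per-object compiles, LIB lines archive them into static libspdk_*.a files, SO lines link the versioned shared objects (--with-shared was passed to configure), and SYMLINK lines create the unversioned name pointing at each one. A hedged sketch of inspecting one finished library; the build/lib output directory is assumed, and the grep result is illustrative:

    cd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib  # assumed SPDK output dir
    readelf -d libspdk_log.so.6.1 | grep SONAME                 # versioned object seen in the log
    ls -l libspdk_log.so                                        # the SYMLINK-created, unversioned name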
00:02:03.381 CC module/bdev/nvme/bdev_nvme.o 00:02:03.381 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:03.381 CC module/bdev/nvme/nvme_rpc.o 00:02:03.381 CC module/bdev/nvme/bdev_mdns_client.o 00:02:03.381 CC module/bdev/nvme/vbdev_opal.o 00:02:03.381 CC module/bdev/gpt/vbdev_gpt.o 00:02:03.381 CC module/bdev/gpt/gpt.o 00:02:03.381 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:03.381 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:03.381 CC module/bdev/lvol/vbdev_lvol.o 00:02:03.381 CC module/bdev/error/vbdev_error_rpc.o 00:02:03.381 CC module/bdev/error/vbdev_error.o 00:02:03.381 CC module/bdev/null/bdev_null.o 00:02:03.381 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:03.381 CC module/bdev/iscsi/bdev_iscsi.o 00:02:03.381 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:03.382 CC module/bdev/null/bdev_null_rpc.o 00:02:03.382 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:03.382 CC module/bdev/passthru/vbdev_passthru.o 00:02:03.382 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:03.382 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:03.382 CC module/bdev/aio/bdev_aio.o 00:02:03.382 CC module/bdev/aio/bdev_aio_rpc.o 00:02:03.382 CC module/bdev/split/vbdev_split_rpc.o 00:02:03.382 CC module/bdev/split/vbdev_split.o 00:02:03.382 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:03.382 CC module/bdev/delay/vbdev_delay.o 00:02:03.382 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:03.382 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:03.382 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:03.639 LIB libspdk_blobfs_bdev.a 00:02:03.639 SO libspdk_blobfs_bdev.so.5.0 00:02:03.639 LIB libspdk_bdev_ftl.a 00:02:03.639 LIB libspdk_bdev_null.a 00:02:03.639 LIB libspdk_bdev_split.a 00:02:03.639 LIB libspdk_bdev_gpt.a 00:02:03.639 SYMLINK libspdk_blobfs_bdev.so 00:02:03.639 LIB libspdk_bdev_error.a 00:02:03.639 SO libspdk_bdev_ftl.so.5.0 00:02:03.895 SO libspdk_bdev_null.so.5.0 00:02:03.895 SO libspdk_bdev_gpt.so.5.0 00:02:03.895 LIB libspdk_bdev_passthru.a 00:02:03.895 LIB libspdk_bdev_aio.a 00:02:03.895 SO libspdk_bdev_split.so.5.0 00:02:03.895 LIB libspdk_bdev_malloc.a 00:02:03.895 SO libspdk_bdev_error.so.5.0 00:02:03.895 LIB libspdk_bdev_zone_block.a 00:02:03.895 SO libspdk_bdev_passthru.so.5.0 00:02:03.895 SO libspdk_bdev_aio.so.5.0 00:02:03.895 LIB libspdk_bdev_iscsi.a 00:02:03.895 SYMLINK libspdk_bdev_ftl.so 00:02:03.895 SO libspdk_bdev_malloc.so.5.0 00:02:03.895 SYMLINK libspdk_bdev_null.so 00:02:03.895 LIB libspdk_bdev_delay.a 00:02:03.895 SYMLINK libspdk_bdev_gpt.so 00:02:03.895 SO libspdk_bdev_zone_block.so.5.0 00:02:03.895 SYMLINK libspdk_bdev_error.so 00:02:03.895 SYMLINK libspdk_bdev_split.so 00:02:03.895 SO libspdk_bdev_iscsi.so.5.0 00:02:03.895 SYMLINK libspdk_bdev_passthru.so 00:02:03.895 SO libspdk_bdev_delay.so.5.0 00:02:03.895 SYMLINK libspdk_bdev_aio.so 00:02:03.895 SYMLINK libspdk_bdev_malloc.so 00:02:03.895 SYMLINK libspdk_bdev_zone_block.so 00:02:03.895 LIB libspdk_bdev_lvol.a 00:02:03.895 SYMLINK libspdk_bdev_iscsi.so 00:02:03.895 LIB libspdk_bdev_virtio.a 00:02:03.895 SYMLINK libspdk_bdev_delay.so 00:02:03.895 SO libspdk_bdev_lvol.so.5.0 00:02:03.895 SO libspdk_bdev_virtio.so.5.0 00:02:04.153 SYMLINK libspdk_bdev_lvol.so 00:02:04.153 SYMLINK libspdk_bdev_virtio.so 00:02:04.153 LIB libspdk_bdev_raid.a 00:02:04.153 SO libspdk_bdev_raid.so.5.0 00:02:04.410 SYMLINK libspdk_bdev_raid.so 00:02:04.975 LIB libspdk_bdev_nvme.a 00:02:04.975 SO libspdk_bdev_nvme.so.6.0 00:02:05.234 SYMLINK libspdk_bdev_nvme.so 00:02:05.492 CC module/event/subsystems/sock/sock.o 00:02:05.492 CC 
module/event/subsystems/vhost_blk/vhost_blk.o 00:02:05.750 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:05.750 CC module/event/subsystems/iobuf/iobuf.o 00:02:05.750 CC module/event/subsystems/vmd/vmd.o 00:02:05.750 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:05.750 CC module/event/subsystems/scheduler/scheduler.o 00:02:05.750 LIB libspdk_event_vhost_blk.a 00:02:05.750 LIB libspdk_event_sock.a 00:02:05.750 LIB libspdk_event_scheduler.a 00:02:05.750 LIB libspdk_event_vmd.a 00:02:05.750 SO libspdk_event_vhost_blk.so.2.0 00:02:05.750 LIB libspdk_event_iobuf.a 00:02:05.750 SO libspdk_event_sock.so.4.0 00:02:05.750 SO libspdk_event_scheduler.so.3.0 00:02:05.750 SO libspdk_event_vmd.so.5.0 00:02:05.750 SO libspdk_event_iobuf.so.2.0 00:02:05.750 SYMLINK libspdk_event_vhost_blk.so 00:02:05.750 SYMLINK libspdk_event_sock.so 00:02:06.007 SYMLINK libspdk_event_scheduler.so 00:02:06.008 SYMLINK libspdk_event_vmd.so 00:02:06.008 SYMLINK libspdk_event_iobuf.so 00:02:06.265 CC module/event/subsystems/accel/accel.o 00:02:06.265 LIB libspdk_event_accel.a 00:02:06.265 SO libspdk_event_accel.so.5.0 00:02:06.524 SYMLINK libspdk_event_accel.so 00:02:06.782 CC module/event/subsystems/bdev/bdev.o 00:02:06.782 LIB libspdk_event_bdev.a 00:02:06.782 SO libspdk_event_bdev.so.5.0 00:02:07.113 SYMLINK libspdk_event_bdev.so 00:02:07.113 CC module/event/subsystems/nbd/nbd.o 00:02:07.113 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:07.113 CC module/event/subsystems/ublk/ublk.o 00:02:07.113 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:07.113 CC module/event/subsystems/scsi/scsi.o 00:02:07.371 LIB libspdk_event_nbd.a 00:02:07.371 LIB libspdk_event_ublk.a 00:02:07.371 SO libspdk_event_nbd.so.5.0 00:02:07.371 LIB libspdk_event_scsi.a 00:02:07.371 SO libspdk_event_scsi.so.5.0 00:02:07.371 LIB libspdk_event_nvmf.a 00:02:07.371 SO libspdk_event_ublk.so.2.0 00:02:07.371 SYMLINK libspdk_event_nbd.so 00:02:07.371 SO libspdk_event_nvmf.so.5.0 00:02:07.371 SYMLINK libspdk_event_scsi.so 00:02:07.371 SYMLINK libspdk_event_ublk.so 00:02:07.629 SYMLINK libspdk_event_nvmf.so 00:02:07.629 CC module/event/subsystems/iscsi/iscsi.o 00:02:07.629 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:07.887 LIB libspdk_event_iscsi.a 00:02:07.887 LIB libspdk_event_vhost_scsi.a 00:02:07.887 SO libspdk_event_iscsi.so.5.0 00:02:07.887 SO libspdk_event_vhost_scsi.so.2.0 00:02:07.887 SYMLINK libspdk_event_iscsi.so 00:02:07.887 SYMLINK libspdk_event_vhost_scsi.so 00:02:08.144 SO libspdk.so.5.0 00:02:08.144 SYMLINK libspdk.so 00:02:08.416 CXX app/trace/trace.o 00:02:08.416 CC app/spdk_top/spdk_top.o 00:02:08.416 CC app/spdk_lspci/spdk_lspci.o 00:02:08.416 CC app/trace_record/trace_record.o 00:02:08.416 CC app/spdk_nvme_identify/identify.o 00:02:08.416 CC app/spdk_nvme_perf/perf.o 00:02:08.416 CC test/rpc_client/rpc_client_test.o 00:02:08.416 CC app/spdk_nvme_discover/discovery_aer.o 00:02:08.416 TEST_HEADER include/spdk/accel_module.h 00:02:08.416 TEST_HEADER include/spdk/accel.h 00:02:08.416 TEST_HEADER include/spdk/assert.h 00:02:08.416 TEST_HEADER include/spdk/barrier.h 00:02:08.416 TEST_HEADER include/spdk/base64.h 00:02:08.416 CC app/spdk_dd/spdk_dd.o 00:02:08.416 TEST_HEADER include/spdk/bdev.h 00:02:08.416 TEST_HEADER include/spdk/bdev_module.h 00:02:08.416 TEST_HEADER include/spdk/bdev_zone.h 00:02:08.416 TEST_HEADER include/spdk/bit_array.h 00:02:08.416 TEST_HEADER include/spdk/bit_pool.h 00:02:08.416 TEST_HEADER include/spdk/blob_bdev.h 00:02:08.416 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:08.416 TEST_HEADER 
include/spdk/blobfs.h 00:02:08.416 TEST_HEADER include/spdk/blob.h 00:02:08.416 TEST_HEADER include/spdk/conf.h 00:02:08.416 CC app/nvmf_tgt/nvmf_main.o 00:02:08.416 TEST_HEADER include/spdk/config.h 00:02:08.416 TEST_HEADER include/spdk/cpuset.h 00:02:08.416 TEST_HEADER include/spdk/crc16.h 00:02:08.416 TEST_HEADER include/spdk/crc32.h 00:02:08.416 TEST_HEADER include/spdk/crc64.h 00:02:08.416 TEST_HEADER include/spdk/dif.h 00:02:08.416 TEST_HEADER include/spdk/dma.h 00:02:08.416 TEST_HEADER include/spdk/env_dpdk.h 00:02:08.416 TEST_HEADER include/spdk/endian.h 00:02:08.416 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:08.416 TEST_HEADER include/spdk/env.h 00:02:08.416 TEST_HEADER include/spdk/event.h 00:02:08.416 TEST_HEADER include/spdk/fd_group.h 00:02:08.416 TEST_HEADER include/spdk/fd.h 00:02:08.416 TEST_HEADER include/spdk/file.h 00:02:08.416 TEST_HEADER include/spdk/gpt_spec.h 00:02:08.416 CC app/vhost/vhost.o 00:02:08.416 TEST_HEADER include/spdk/ftl.h 00:02:08.416 TEST_HEADER include/spdk/hexlify.h 00:02:08.416 TEST_HEADER include/spdk/histogram_data.h 00:02:08.416 TEST_HEADER include/spdk/idxd.h 00:02:08.416 TEST_HEADER include/spdk/idxd_spec.h 00:02:08.416 TEST_HEADER include/spdk/init.h 00:02:08.416 TEST_HEADER include/spdk/ioat.h 00:02:08.416 TEST_HEADER include/spdk/ioat_spec.h 00:02:08.416 TEST_HEADER include/spdk/iscsi_spec.h 00:02:08.416 TEST_HEADER include/spdk/json.h 00:02:08.416 TEST_HEADER include/spdk/jsonrpc.h 00:02:08.416 TEST_HEADER include/spdk/likely.h 00:02:08.416 TEST_HEADER include/spdk/log.h 00:02:08.416 CC app/iscsi_tgt/iscsi_tgt.o 00:02:08.416 TEST_HEADER include/spdk/lvol.h 00:02:08.416 TEST_HEADER include/spdk/mmio.h 00:02:08.416 TEST_HEADER include/spdk/memory.h 00:02:08.416 TEST_HEADER include/spdk/notify.h 00:02:08.416 TEST_HEADER include/spdk/nbd.h 00:02:08.416 TEST_HEADER include/spdk/nvme.h 00:02:08.416 TEST_HEADER include/spdk/nvme_intel.h 00:02:08.416 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:08.416 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:08.416 TEST_HEADER include/spdk/nvme_spec.h 00:02:08.416 TEST_HEADER include/spdk/nvme_zns.h 00:02:08.416 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:08.416 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:08.416 TEST_HEADER include/spdk/nvmf.h 00:02:08.416 TEST_HEADER include/spdk/nvmf_transport.h 00:02:08.416 TEST_HEADER include/spdk/nvmf_spec.h 00:02:08.416 TEST_HEADER include/spdk/opal.h 00:02:08.416 TEST_HEADER include/spdk/opal_spec.h 00:02:08.416 TEST_HEADER include/spdk/pci_ids.h 00:02:08.416 TEST_HEADER include/spdk/pipe.h 00:02:08.416 CC app/spdk_tgt/spdk_tgt.o 00:02:08.416 TEST_HEADER include/spdk/queue.h 00:02:08.416 TEST_HEADER include/spdk/reduce.h 00:02:08.416 TEST_HEADER include/spdk/rpc.h 00:02:08.416 TEST_HEADER include/spdk/scheduler.h 00:02:08.416 TEST_HEADER include/spdk/scsi.h 00:02:08.416 TEST_HEADER include/spdk/scsi_spec.h 00:02:08.416 TEST_HEADER include/spdk/sock.h 00:02:08.416 TEST_HEADER include/spdk/stdinc.h 00:02:08.416 TEST_HEADER include/spdk/string.h 00:02:08.416 TEST_HEADER include/spdk/trace.h 00:02:08.416 TEST_HEADER include/spdk/thread.h 00:02:08.416 TEST_HEADER include/spdk/trace_parser.h 00:02:08.416 TEST_HEADER include/spdk/tree.h 00:02:08.416 TEST_HEADER include/spdk/uuid.h 00:02:08.416 TEST_HEADER include/spdk/util.h 00:02:08.416 TEST_HEADER include/spdk/ublk.h 00:02:08.416 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:08.416 TEST_HEADER include/spdk/version.h 00:02:08.416 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:08.416 TEST_HEADER include/spdk/vhost.h 
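The TEST_HEADER entries above and below enumerate SPDK's public headers, and the matching CXX test/cpp_headers/*.o lines compile each of them in its own C++ translation unit, so any header that is not self-contained or not C++-safe fails right here. A minimal sketch of that pattern, assuming illustrative file names and a plain g++ invocation rather than the project's actual build rules:

    # Generate a one-line C++ TU per public header and compile it; a failure
    # flags a header that cannot stand alone.
    for hdr in include/spdk/*.h; do
        name=$(basename "$hdr" .h)
        printf '#include <spdk/%s.h>\n' "$name" > "test_${name}.cpp"
        g++ -I include -std=c++11 -c "test_${name}.cpp" -o "test_${name}.o" \
            || echo "not self-contained: $hdr"
    done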
00:02:08.416 TEST_HEADER include/spdk/vmd.h 00:02:08.416 TEST_HEADER include/spdk/xor.h 00:02:08.416 TEST_HEADER include/spdk/zipf.h 00:02:08.416 CXX test/cpp_headers/accel_module.o 00:02:08.416 CXX test/cpp_headers/accel.o 00:02:08.416 CXX test/cpp_headers/assert.o 00:02:08.416 CC test/event/reactor/reactor.o 00:02:08.416 CXX test/cpp_headers/barrier.o 00:02:08.416 CXX test/cpp_headers/base64.o 00:02:08.416 CXX test/cpp_headers/bdev.o 00:02:08.416 CXX test/cpp_headers/bdev_module.o 00:02:08.416 CXX test/cpp_headers/bdev_zone.o 00:02:08.416 CC test/event/reactor_perf/reactor_perf.o 00:02:08.416 CC examples/util/zipf/zipf.o 00:02:08.416 CXX test/cpp_headers/bit_array.o 00:02:08.416 CXX test/cpp_headers/bit_pool.o 00:02:08.416 CXX test/cpp_headers/blob_bdev.o 00:02:08.416 CC test/event/event_perf/event_perf.o 00:02:08.416 CXX test/cpp_headers/blobfs_bdev.o 00:02:08.416 CC examples/accel/perf/accel_perf.o 00:02:08.416 CC examples/nvme/hotplug/hotplug.o 00:02:08.417 CXX test/cpp_headers/blobfs.o 00:02:08.417 CC examples/nvme/hello_world/hello_world.o 00:02:08.417 CC test/nvme/compliance/nvme_compliance.o 00:02:08.417 CXX test/cpp_headers/blob.o 00:02:08.417 CC examples/nvme/reconnect/reconnect.o 00:02:08.417 CXX test/cpp_headers/conf.o 00:02:08.417 CC examples/nvme/abort/abort.o 00:02:08.417 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:08.417 CXX test/cpp_headers/crc16.o 00:02:08.417 CXX test/cpp_headers/config.o 00:02:08.417 CXX test/cpp_headers/cpuset.o 00:02:08.417 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:08.417 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:08.417 CXX test/cpp_headers/crc32.o 00:02:08.417 CXX test/cpp_headers/dif.o 00:02:08.417 CXX test/cpp_headers/crc64.o 00:02:08.417 CC examples/idxd/perf/perf.o 00:02:08.417 CXX test/cpp_headers/dma.o 00:02:08.417 CC test/app/jsoncat/jsoncat.o 00:02:08.417 CXX test/cpp_headers/endian.o 00:02:08.417 CC test/nvme/aer/aer.o 00:02:08.417 CXX test/cpp_headers/env_dpdk.o 00:02:08.417 CC test/nvme/overhead/overhead.o 00:02:08.417 CC test/nvme/reset/reset.o 00:02:08.417 CC app/fio/nvme/fio_plugin.o 00:02:08.417 CXX test/cpp_headers/env.o 00:02:08.417 CC test/nvme/err_injection/err_injection.o 00:02:08.417 CXX test/cpp_headers/event.o 00:02:08.417 CXX test/cpp_headers/fd_group.o 00:02:08.417 CC test/nvme/e2edp/nvme_dp.o 00:02:08.417 CC test/nvme/fused_ordering/fused_ordering.o 00:02:08.417 CC examples/nvme/arbitration/arbitration.o 00:02:08.417 CC test/nvme/reserve/reserve.o 00:02:08.417 CXX test/cpp_headers/fd.o 00:02:08.417 CC test/nvme/boot_partition/boot_partition.o 00:02:08.417 CC test/nvme/startup/startup.o 00:02:08.417 CXX test/cpp_headers/file.o 00:02:08.417 CC test/app/histogram_perf/histogram_perf.o 00:02:08.417 CXX test/cpp_headers/ftl.o 00:02:08.417 CC test/nvme/cuse/cuse.o 00:02:08.417 CC test/nvme/sgl/sgl.o 00:02:08.417 CC test/nvme/connect_stress/connect_stress.o 00:02:08.417 CC examples/vmd/lsvmd/lsvmd.o 00:02:08.417 CXX test/cpp_headers/gpt_spec.o 00:02:08.417 CC test/nvme/simple_copy/simple_copy.o 00:02:08.417 CC test/app/stub/stub.o 00:02:08.417 CXX test/cpp_headers/histogram_data.o 00:02:08.417 CC examples/sock/hello_world/hello_sock.o 00:02:08.417 CXX test/cpp_headers/idxd.o 00:02:08.417 CXX test/cpp_headers/hexlify.o 00:02:08.417 CXX test/cpp_headers/idxd_spec.o 00:02:08.417 CC test/nvme/fdp/fdp.o 00:02:08.417 CC test/env/pci/pci_ut.o 00:02:08.417 CC test/env/memory/memory_ut.o 00:02:08.417 CC examples/ioat/verify/verify.o 00:02:08.417 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:08.417 CC 
test/nvme/doorbell_aers/doorbell_aers.o 00:02:08.417 CC test/thread/poller_perf/poller_perf.o 00:02:08.417 CC test/env/vtophys/vtophys.o 00:02:08.417 CC examples/vmd/led/led.o 00:02:08.417 CC examples/ioat/perf/perf.o 00:02:08.417 CC test/event/app_repeat/app_repeat.o 00:02:08.690 CC examples/blob/cli/blobcli.o 00:02:08.690 CC test/event/scheduler/scheduler.o 00:02:08.690 CC examples/thread/thread/thread_ex.o 00:02:08.690 CC test/dma/test_dma/test_dma.o 00:02:08.690 CC test/bdev/bdevio/bdevio.o 00:02:08.690 CC examples/blob/hello_world/hello_blob.o 00:02:08.690 CC examples/nvmf/nvmf/nvmf.o 00:02:08.690 CC app/fio/bdev/fio_plugin.o 00:02:08.690 CC test/accel/dif/dif.o 00:02:08.690 CC examples/bdev/hello_world/hello_bdev.o 00:02:08.690 CC test/app/bdev_svc/bdev_svc.o 00:02:08.690 CC examples/bdev/bdevperf/bdevperf.o 00:02:08.690 CC test/blobfs/mkfs/mkfs.o 00:02:08.690 CXX test/cpp_headers/init.o 00:02:08.690 LINK spdk_lspci 00:02:08.690 CC test/lvol/esnap/esnap.o 00:02:08.690 CC test/env/mem_callbacks/mem_callbacks.o 00:02:08.690 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:08.690 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:08.690 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:08.955 LINK rpc_client_test 00:02:08.955 LINK nvmf_tgt 00:02:08.955 LINK interrupt_tgt 00:02:08.955 LINK spdk_nvme_discover 00:02:08.955 LINK vhost 00:02:08.955 LINK jsoncat 00:02:08.955 LINK lsvmd 00:02:08.955 LINK reactor 00:02:08.955 LINK reactor_perf 00:02:08.955 LINK spdk_trace_record 00:02:08.955 LINK zipf 00:02:08.955 LINK iscsi_tgt 00:02:08.955 LINK spdk_tgt 00:02:08.955 LINK pmr_persistence 00:02:08.955 LINK event_perf 00:02:08.955 LINK boot_partition 00:02:09.222 LINK startup 00:02:09.222 LINK histogram_perf 00:02:09.222 LINK connect_stress 00:02:09.222 LINK vtophys 00:02:09.222 LINK app_repeat 00:02:09.222 LINK stub 00:02:09.222 LINK poller_perf 00:02:09.222 LINK fused_ordering 00:02:09.222 LINK led 00:02:09.222 LINK err_injection 00:02:09.222 LINK cmb_copy 00:02:09.222 LINK env_dpdk_post_init 00:02:09.222 LINK reserve 00:02:09.222 LINK doorbell_aers 00:02:09.222 LINK hello_world 00:02:09.222 LINK bdev_svc 00:02:09.222 LINK mkfs 00:02:09.222 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:09.222 CXX test/cpp_headers/ioat.o 00:02:09.222 LINK hotplug 00:02:09.222 LINK hello_sock 00:02:09.222 CXX test/cpp_headers/ioat_spec.o 00:02:09.222 CXX test/cpp_headers/iscsi_spec.o 00:02:09.222 LINK scheduler 00:02:09.222 CXX test/cpp_headers/json.o 00:02:09.222 CXX test/cpp_headers/jsonrpc.o 00:02:09.222 LINK sgl 00:02:09.222 LINK ioat_perf 00:02:09.222 CXX test/cpp_headers/likely.o 00:02:09.222 CXX test/cpp_headers/log.o 00:02:09.222 LINK verify 00:02:09.222 CXX test/cpp_headers/lvol.o 00:02:09.222 CXX test/cpp_headers/memory.o 00:02:09.222 LINK simple_copy 00:02:09.222 CXX test/cpp_headers/mmio.o 00:02:09.222 LINK reset 00:02:09.222 CXX test/cpp_headers/nbd.o 00:02:09.222 LINK hello_blob 00:02:09.222 CXX test/cpp_headers/notify.o 00:02:09.222 CXX test/cpp_headers/nvme.o 00:02:09.222 CXX test/cpp_headers/nvme_intel.o 00:02:09.222 CXX test/cpp_headers/nvme_ocssd.o 00:02:09.222 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:09.222 CXX test/cpp_headers/nvme_spec.o 00:02:09.222 CXX test/cpp_headers/nvme_zns.o 00:02:09.222 CXX test/cpp_headers/nvmf_cmd.o 00:02:09.222 LINK nvme_dp 00:02:09.222 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:09.222 CXX test/cpp_headers/nvmf.o 00:02:09.222 CXX test/cpp_headers/nvmf_spec.o 00:02:09.222 CXX test/cpp_headers/nvmf_transport.o 00:02:09.222 CXX test/cpp_headers/opal.o 00:02:09.222 CXX 
test/cpp_headers/opal_spec.o 00:02:09.222 CXX test/cpp_headers/pci_ids.o 00:02:09.222 CXX test/cpp_headers/pipe.o 00:02:09.222 CXX test/cpp_headers/queue.o 00:02:09.222 LINK thread 00:02:09.222 CXX test/cpp_headers/reduce.o 00:02:09.222 CXX test/cpp_headers/rpc.o 00:02:09.222 LINK spdk_dd 00:02:09.222 CXX test/cpp_headers/scheduler.o 00:02:09.222 CXX test/cpp_headers/scsi.o 00:02:09.222 CXX test/cpp_headers/scsi_spec.o 00:02:09.222 LINK overhead 00:02:09.222 CXX test/cpp_headers/sock.o 00:02:09.222 CXX test/cpp_headers/stdinc.o 00:02:09.222 CXX test/cpp_headers/string.o 00:02:09.222 CXX test/cpp_headers/thread.o 00:02:09.222 LINK hello_bdev 00:02:09.222 CXX test/cpp_headers/trace.o 00:02:09.222 CXX test/cpp_headers/trace_parser.o 00:02:09.480 LINK idxd_perf 00:02:09.480 LINK reconnect 00:02:09.480 LINK nvme_compliance 00:02:09.480 LINK fdp 00:02:09.480 LINK nvmf 00:02:09.480 LINK spdk_trace 00:02:09.480 LINK aer 00:02:09.480 CXX test/cpp_headers/tree.o 00:02:09.480 LINK arbitration 00:02:09.480 CXX test/cpp_headers/ublk.o 00:02:09.480 CXX test/cpp_headers/util.o 00:02:09.480 CXX test/cpp_headers/uuid.o 00:02:09.480 CXX test/cpp_headers/version.o 00:02:09.480 CXX test/cpp_headers/vfio_user_spec.o 00:02:09.480 CXX test/cpp_headers/vfio_user_pci.o 00:02:09.480 LINK test_dma 00:02:09.480 LINK abort 00:02:09.480 LINK bdevio 00:02:09.480 LINK dif 00:02:09.480 CXX test/cpp_headers/vhost.o 00:02:09.480 CXX test/cpp_headers/vmd.o 00:02:09.480 CXX test/cpp_headers/xor.o 00:02:09.480 CXX test/cpp_headers/zipf.o 00:02:09.480 LINK pci_ut 00:02:09.737 LINK accel_perf 00:02:09.737 LINK blobcli 00:02:09.737 LINK nvme_manage 00:02:09.737 LINK spdk_nvme 00:02:09.737 LINK spdk_bdev 00:02:09.737 LINK nvme_fuzz 00:02:09.737 LINK spdk_nvme_identify 00:02:09.996 LINK vhost_fuzz 00:02:09.996 LINK mem_callbacks 00:02:09.996 LINK spdk_nvme_perf 00:02:09.996 LINK spdk_top 00:02:09.996 LINK bdevperf 00:02:09.996 LINK memory_ut 00:02:09.996 LINK cuse 00:02:10.563 LINK iscsi_fuzz 00:02:12.517 LINK esnap 00:02:12.776 00:02:12.776 real 0m30.671s 00:02:12.776 user 4m51.491s 00:02:12.776 sys 2m45.051s 00:02:12.776 11:25:41 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:12.776 11:25:41 -- common/autotest_common.sh@10 -- $ set +x 00:02:12.776 ************************************ 00:02:12.776 END TEST make 00:02:12.776 ************************************ 00:02:12.776 11:25:42 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:02:12.776 11:25:42 -- nvmf/common.sh@7 -- # uname -s 00:02:12.776 11:25:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:12.776 11:25:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:12.776 11:25:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:12.776 11:25:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:12.776 11:25:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:12.776 11:25:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:12.776 11:25:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:12.776 11:25:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:12.776 11:25:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:12.776 11:25:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:12.776 11:25:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:02:12.776 11:25:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:02:12.776 11:25:42 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:12.776 11:25:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:12.776 11:25:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:12.776 11:25:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:02:12.776 11:25:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:12.776 11:25:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:12.776 11:25:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:12.776 11:25:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.776 11:25:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.776 11:25:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.776 11:25:42 -- paths/export.sh@5 -- # export PATH 00:02:12.776 11:25:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.776 11:25:42 -- nvmf/common.sh@46 -- # : 0 00:02:12.776 11:25:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:12.776 11:25:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:12.776 11:25:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:12.776 11:25:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:12.776 11:25:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:12.776 11:25:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:12.776 11:25:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:12.776 11:25:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:12.776 11:25:42 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:12.776 11:25:42 -- spdk/autotest.sh@32 -- # uname -s 00:02:12.777 11:25:42 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:12.777 11:25:42 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:12.777 11:25:42 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:12.777 11:25:42 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:12.777 11:25:42 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:12.777 11:25:42 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:12.777 11:25:42 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:12.777 11:25:42 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:12.777 11:25:42 -- spdk/autotest.sh@48 -- # udevadm_pid=2098038 00:02:12.777 11:25:42 -- spdk/autotest.sh@51 -- 
# mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:02:12.777 11:25:42 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:12.777 11:25:42 -- spdk/autotest.sh@54 -- # echo 2098040 00:02:12.777 11:25:42 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:02:12.777 11:25:42 -- spdk/autotest.sh@56 -- # echo 2098041 00:02:12.777 11:25:42 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:02:12.777 11:25:42 -- spdk/autotest.sh@58 -- # [[ ............................... != QEMU ]] 00:02:12.777 11:25:42 -- spdk/autotest.sh@60 -- # echo 2098042 00:02:12.777 11:25:42 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l 00:02:12.777 11:25:42 -- spdk/autotest.sh@62 -- # echo 2098043 00:02:12.777 11:25:42 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l 00:02:12.777 11:25:42 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:12.777 11:25:42 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:12.777 11:25:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:12.777 11:25:42 -- common/autotest_common.sh@10 -- # set +x 00:02:12.777 11:25:42 -- spdk/autotest.sh@70 -- # create_test_list 00:02:12.777 11:25:42 -- common/autotest_common.sh@736 -- # xtrace_disable 00:02:12.777 11:25:42 -- common/autotest_common.sh@10 -- # set +x 00:02:12.777 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:02:12.777 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:02:12.777 11:25:42 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:02:12.777 11:25:42 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:13.035 11:25:42 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:13.035 11:25:42 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:02:13.035 11:25:42 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:13.035 11:25:42 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:13.035 11:25:42 -- common/autotest_common.sh@1440 -- # uname 00:02:13.035 11:25:42 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:02:13.035 11:25:42 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:13.035 11:25:42 -- common/autotest_common.sh@1460 -- # uname 00:02:13.035 11:25:42 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:02:13.035 11:25:42 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:02:13.035 11:25:42 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:02:13.035 11:25:42 -- spdk/autotest.sh@83 -- # hash lcov 00:02:13.035 11:25:42 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:13.035 11:25:42 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:02:13.035 --rc lcov_branch_coverage=1 00:02:13.035 --rc lcov_function_coverage=1 00:02:13.035 --rc genhtml_branch_coverage=1 00:02:13.035 --rc 
genhtml_function_coverage=1 00:02:13.035 --rc genhtml_legend=1 00:02:13.035 --rc geninfo_all_blocks=1 00:02:13.035 ' 00:02:13.035 11:25:42 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:02:13.035 --rc lcov_branch_coverage=1 00:02:13.035 --rc lcov_function_coverage=1 00:02:13.035 --rc genhtml_branch_coverage=1 00:02:13.035 --rc genhtml_function_coverage=1 00:02:13.035 --rc genhtml_legend=1 00:02:13.035 --rc geninfo_all_blocks=1 00:02:13.035 ' 00:02:13.035 11:25:42 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:02:13.035 --rc lcov_branch_coverage=1 00:02:13.035 --rc lcov_function_coverage=1 00:02:13.035 --rc genhtml_branch_coverage=1 00:02:13.035 --rc genhtml_function_coverage=1 00:02:13.035 --rc genhtml_legend=1 00:02:13.035 --rc geninfo_all_blocks=1 00:02:13.035 --no-external' 00:02:13.035 11:25:42 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:02:13.035 --rc lcov_branch_coverage=1 00:02:13.035 --rc lcov_function_coverage=1 00:02:13.035 --rc genhtml_branch_coverage=1 00:02:13.035 --rc genhtml_function_coverage=1 00:02:13.035 --rc genhtml_legend=1 00:02:13.035 --rc geninfo_all_blocks=1 00:02:13.035 --no-external' 00:02:13.035 11:25:42 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:13.035 lcov: LCOV version 1.14 00:02:13.036 11:25:42 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:02:14.411 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:14.411 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:14.411 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:14.411 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:14.411 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:14.411 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:14.411 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:14.411 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:14.411 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:14.411 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:14.411 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:14.411 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:14.411 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:14.411 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:14.411 
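The lcov command above captures a zero-count coverage baseline (-c -i) across the whole tree before any test binary runs, and the geninfo warnings that follow are expected for these objects: each test/cpp_headers translation unit is a bare #include that defines no functions, so GCOV has nothing to report for it. A sketch of the usual lcov flow these flags imply; only the baseline step appears in this log, so the post-run capture and merge lines are assumptions:

    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
               --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
               --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external"
    # Baseline: record every instrumented line with a zero hit count (-i).
    lcov $LCOV_OPTS -q -c -i -t Baseline -d ./spdk -o cov_base.info
    # ... run the test suite ...
    # Assumed post-run capture and merge (not shown in this part of the log):
    lcov $LCOV_OPTS -q -c -t Tests -d ./spdk -o cov_test.info
    lcov $LCOV_OPTS -a cov_base.info -a cov_test.info -o cov_total.info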
[identical 'no functions found' / 'geninfo: WARNING: GCOV did not produce any data for ...' pairs repeat here for every remaining test/cpp_headers/*.gcno baseline object, bit_array.gcno through util.gcno (00:02:14.412 - 00:02:14.932)]
00:02:14.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:14.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:14.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:14.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:14.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:14.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:14.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:14.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:14.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:14.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:14.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:14.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:14.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:14.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:14.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:27.135 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:27.135 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:27.135 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:27.135 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:27.135 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:27.135 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:37.109 11:26:05 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:02:37.109 11:26:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:37.109 11:26:05 -- common/autotest_common.sh@10 -- # set +x 00:02:37.109 11:26:05 -- spdk/autotest.sh@102 -- # rm -f 00:02:37.109 11:26:05 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:02:40.399 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:40.399 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:40.399 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:40.399 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:40.399 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:40.399 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:40.658 0000:00:04.1 (8086 2021): Already using 
the ioatdma driver 00:02:40.658 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:40.658 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:40.658 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:40.658 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:40.658 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:40.658 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:40.658 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:40.658 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:40.917 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:40.917 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:02:40.917 11:26:10 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:02:40.917 11:26:10 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:02:40.917 11:26:10 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:02:40.917 11:26:10 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:02:40.917 11:26:10 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:02:40.917 11:26:10 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:02:40.917 11:26:10 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:02:40.917 11:26:10 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:40.917 11:26:10 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:02:40.917 11:26:10 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:02:40.917 11:26:10 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:02:40.917 11:26:10 -- spdk/autotest.sh@121 -- # grep -v p 00:02:40.917 11:26:10 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:40.917 11:26:10 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:02:40.917 11:26:10 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:02:40.917 11:26:10 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:02:40.917 11:26:10 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:40.917 No valid GPT data, bailing 00:02:40.917 11:26:10 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:40.917 11:26:10 -- scripts/common.sh@393 -- # pt= 00:02:40.917 11:26:10 -- scripts/common.sh@394 -- # return 1 00:02:40.917 11:26:10 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:40.917 1+0 records in 00:02:40.917 1+0 records out 00:02:40.917 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00440882 s, 238 MB/s 00:02:40.917 11:26:10 -- spdk/autotest.sh@129 -- # sync 00:02:40.917 11:26:10 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:40.917 11:26:10 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:40.917 11:26:10 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:47.484 11:26:16 -- spdk/autotest.sh@135 -- # uname -s 00:02:47.484 11:26:16 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:02:47.484 11:26:16 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:02:47.484 11:26:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:47.484 11:26:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:47.484 11:26:16 -- common/autotest_common.sh@10 -- # set +x 00:02:47.484 ************************************ 00:02:47.484 START TEST setup.sh 00:02:47.484 ************************************ 
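Before this setup suite starts, the pre-cleanup trace above resets the ioatdma/nvme bindings, skips any namespace whose queue reports a zoned model, probes the rest with spdk-gpt.py, and zeroes the first megabyte of each drive that turned out to be free ("No valid GPT data, bailing" means no partition table was found, so the namespace is treated as unused and safe to wipe). A condensed sketch of that wipe logic; the zoned check mirrors the trace, while the loop and the elided GPT/in-use guards are illustrative assumptions:

    # Skip zoned namespaces; zero the first MiB of every free one.
    for dev in /dev/nvme*n*; do
        name=$(basename "$dev")
        zoned=$(cat "/sys/block/$name/queue/zoned" 2>/dev/null || echo none)
        [[ $zoned != none ]] && continue          # zoned: leave untouched
        # (the real script also bails out if a valid GPT or mounted FS exists)
        dd if=/dev/zero of="$dev" bs=1M count=1   # clobber stale metadata
    done
    sync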
00:02:47.484 11:26:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:02:47.484 * Looking for test storage... 00:02:47.484 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:02:47.484 11:26:16 -- setup/test-setup.sh@10 -- # uname -s 00:02:47.484 11:26:16 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:47.484 11:26:16 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:02:47.484 11:26:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:47.484 11:26:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:47.484 11:26:16 -- common/autotest_common.sh@10 -- # set +x 00:02:47.484 ************************************ 00:02:47.484 START TEST acl 00:02:47.484 ************************************ 00:02:47.484 11:26:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:02:47.484 * Looking for test storage... 00:02:47.484 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:02:47.484 11:26:16 -- setup/acl.sh@10 -- # get_zoned_devs 00:02:47.484 11:26:16 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:02:47.484 11:26:16 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:02:47.484 11:26:16 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:02:47.484 11:26:16 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:02:47.484 11:26:16 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:02:47.484 11:26:16 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:02:47.484 11:26:16 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:47.484 11:26:16 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:02:47.484 11:26:16 -- setup/acl.sh@12 -- # devs=() 00:02:47.484 11:26:16 -- setup/acl.sh@12 -- # declare -a devs 00:02:47.484 11:26:16 -- setup/acl.sh@13 -- # drivers=() 00:02:47.484 11:26:16 -- setup/acl.sh@13 -- # declare -A drivers 00:02:47.484 11:26:16 -- setup/acl.sh@51 -- # setup reset 00:02:47.484 11:26:16 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:47.484 11:26:16 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:02:52.752 11:26:21 -- setup/acl.sh@52 -- # collect_setup_devs 00:02:52.752 11:26:21 -- setup/acl.sh@16 -- # local dev driver 00:02:52.752 11:26:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:52.752 11:26:21 -- setup/acl.sh@15 -- # setup output status 00:02:52.752 11:26:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:52.752 11:26:21 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:02:56.057 Hugepages 00:02:56.057 node hugesize free / total 00:02:56.057 11:26:25 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:56.057 11:26:25 -- setup/acl.sh@19 -- # continue 00:02:56.057 11:26:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.057 11:26:25 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:56.057 11:26:25 -- setup/acl.sh@19 -- # continue 00:02:56.057 11:26:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.057 11:26:25 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:56.057 11:26:25 -- setup/acl.sh@19 -- # continue 00:02:56.057 11:26:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.057 00:02:56.057 Type BDF Vendor Device NUMA Driver Device 
Block devices 00:02:56.057 11:26:25 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:56.057 11:26:25 -- setup/acl.sh@19 -- # continue 00:02:56.057 11:26:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.057 11:26:25 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:56.057 11:26:25 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.057 11:26:25 -- setup/acl.sh@20 -- # continue 00:02:56.057 11:26:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.057 11:26:25 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:56.057 11:26:25 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.057 11:26:25 -- setup/acl.sh@20 -- # continue 00:02:56.057 11:26:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.057 11:26:25 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:56.057 11:26:25 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.057 11:26:25 -- setup/acl.sh@20 -- # continue 00:02:56.057 11:26:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.057 11:26:25 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:56.057 11:26:25 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.057 11:26:25 -- setup/acl.sh@20 -- # continue 00:02:56.057 11:26:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.057 11:26:25 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:56.057 11:26:25 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.057 11:26:25 -- setup/acl.sh@20 -- # continue 00:02:56.057 11:26:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.057 11:26:25 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:56.057 11:26:25 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.057 11:26:25 -- setup/acl.sh@20 -- # continue 00:02:56.057 11:26:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.057 11:26:25 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:56.057 11:26:25 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.057 11:26:25 -- setup/acl.sh@20 -- # continue 00:02:56.057 11:26:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.057 11:26:25 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:56.057 11:26:25 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.057 11:26:25 -- setup/acl.sh@20 -- # continue 00:02:56.057 11:26:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.057 11:26:25 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:56.057 11:26:25 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.057 11:26:25 -- setup/acl.sh@20 -- # continue 00:02:56.057 11:26:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.057 11:26:25 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:56.057 11:26:25 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.057 11:26:25 -- setup/acl.sh@20 -- # continue 00:02:56.057 11:26:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.057 11:26:25 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:56.057 11:26:25 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.057 11:26:25 -- setup/acl.sh@20 -- # continue 00:02:56.057 11:26:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.057 11:26:25 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:56.057 11:26:25 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.057 11:26:25 -- setup/acl.sh@20 -- # continue 00:02:56.057 11:26:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.057 11:26:25 -- setup/acl.sh@19 -- # [[ 
0000:80:04.4 == *:*:*.* ]] 00:02:56.057 11:26:25 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.057 11:26:25 -- setup/acl.sh@20 -- # continue 00:02:56.057 11:26:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.057 11:26:25 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:56.057 11:26:25 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.057 11:26:25 -- setup/acl.sh@20 -- # continue 00:02:56.057 11:26:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.057 11:26:25 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:56.057 11:26:25 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.057 11:26:25 -- setup/acl.sh@20 -- # continue 00:02:56.057 11:26:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.057 11:26:25 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:56.057 11:26:25 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.057 11:26:25 -- setup/acl.sh@20 -- # continue 00:02:56.057 11:26:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.057 11:26:25 -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:02:56.057 11:26:25 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:56.057 11:26:25 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:02:56.057 11:26:25 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:56.057 11:26:25 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:56.057 11:26:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.057 11:26:25 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:56.057 11:26:25 -- setup/acl.sh@54 -- # run_test denied denied 00:02:56.057 11:26:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:56.057 11:26:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:56.057 11:26:25 -- common/autotest_common.sh@10 -- # set +x 00:02:56.057 ************************************ 00:02:56.057 START TEST denied 00:02:56.057 ************************************ 00:02:56.057 11:26:25 -- common/autotest_common.sh@1104 -- # denied 00:02:56.057 11:26:25 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:02:56.057 11:26:25 -- setup/acl.sh@38 -- # setup output config 00:02:56.057 11:26:25 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:02:56.057 11:26:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:56.057 11:26:25 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:01.414 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:03:01.414 11:26:29 -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:03:01.414 11:26:29 -- setup/acl.sh@28 -- # local dev driver 00:03:01.414 11:26:29 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:01.414 11:26:29 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:03:01.414 11:26:29 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:03:01.414 11:26:29 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:01.414 11:26:29 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:01.414 11:26:29 -- setup/acl.sh@41 -- # setup reset 00:03:01.414 11:26:29 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:01.414 11:26:29 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:06.692 00:03:06.692 real 0m9.635s 00:03:06.692 user 0m3.162s 00:03:06.692 sys 0m5.878s 00:03:06.692 11:26:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:06.692 11:26:35 -- common/autotest_common.sh@10 -- # set +x 00:03:06.692 
************************************ 00:03:06.692 END TEST denied 00:03:06.692 ************************************ 00:03:06.692 11:26:35 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:06.692 11:26:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:06.692 11:26:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:06.692 11:26:35 -- common/autotest_common.sh@10 -- # set +x 00:03:06.692 ************************************ 00:03:06.692 START TEST allowed 00:03:06.692 ************************************ 00:03:06.692 11:26:35 -- common/autotest_common.sh@1104 -- # allowed 00:03:06.692 11:26:35 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:03:06.692 11:26:35 -- setup/acl.sh@45 -- # setup output config 00:03:06.692 11:26:35 -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:03:06.692 11:26:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:06.692 11:26:35 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:11.963 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:11.963 11:26:40 -- setup/acl.sh@47 -- # verify 00:03:11.963 11:26:40 -- setup/acl.sh@28 -- # local dev driver 00:03:11.963 11:26:40 -- setup/acl.sh@48 -- # setup reset 00:03:11.963 11:26:40 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:11.963 11:26:40 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:16.145 00:03:16.145 real 0m9.747s 00:03:16.145 user 0m2.412s 00:03:16.145 sys 0m5.174s 00:03:16.145 11:26:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:16.145 11:26:44 -- common/autotest_common.sh@10 -- # set +x 00:03:16.145 ************************************ 00:03:16.145 END TEST allowed 00:03:16.145 ************************************ 00:03:16.145 00:03:16.145 real 0m28.199s 00:03:16.145 user 0m8.685s 00:03:16.145 sys 0m17.035s 00:03:16.146 11:26:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:16.146 11:26:44 -- common/autotest_common.sh@10 -- # set +x 00:03:16.146 ************************************ 00:03:16.146 END TEST acl 00:03:16.146 ************************************ 00:03:16.146 11:26:44 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:03:16.146 11:26:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:16.146 11:26:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:16.146 11:26:44 -- common/autotest_common.sh@10 -- # set +x 00:03:16.146 ************************************ 00:03:16.146 START TEST hugepages 00:03:16.146 ************************************ 00:03:16.146 11:26:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:03:16.146 * Looking for test storage... 
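The denied test's verify step (setup/acl.sh lines 28-33, visible in the trace) resolves the device's sysfs driver symlink and compares its basename against the expected driver; the allowed test instead greps the config output for the nvme -> vfio-pci rebind line. A minimal standalone sketch of the sysfs check, with the BDF taken straight from the log:

  bdf=0000:d8:00.0
  if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
      driver=$(readlink -f "/sys/bus/pci/devices/$bdf/driver")
      [[ ${driver##*/} == nvme ]] && echo "$bdf is bound to nvme"
  fi

After the rebind, the same check with vfio-pci as the expected name would pass instead.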
00:03:16.146 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:16.146 11:26:45 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:16.146 11:26:45 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:16.146 11:26:45 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:16.146 11:26:45 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:16.146 11:26:45 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:16.146 11:26:45 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:16.146 11:26:45 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:16.146 11:26:45 -- setup/common.sh@18 -- # local node= 00:03:16.146 11:26:45 -- setup/common.sh@19 -- # local var val 00:03:16.146 11:26:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:16.146 11:26:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.146 11:26:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.146 11:26:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.146 11:26:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.146 11:26:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 35932596 kB' 'MemAvailable: 39582892 kB' 'Buffers: 4096 kB' 'Cached: 16043668 kB' 'SwapCached: 0 kB' 'Active: 13037580 kB' 'Inactive: 3524076 kB' 'Active(anon): 12560184 kB' 'Inactive(anon): 0 kB' 'Active(file): 477396 kB' 'Inactive(file): 3524076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517420 kB' 'Mapped: 179076 kB' 'Shmem: 12046292 kB' 'KReclaimable: 291492 kB' 'Slab: 965064 kB' 'SReclaimable: 291492 kB' 'SUnreclaim: 673572 kB' 'KernelStack: 22576 kB' 'PageTables: 8672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439048 kB' 'Committed_AS: 14029468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220176 kB' 'VmallocChunk: 0 kB' 'Percpu: 92288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4085108 kB' 'DirectMap2M: 36495360 kB' 'DirectMap1G: 28311552 kB' 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e 
]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- 
# [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.146 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.146 11:26:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.147 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.147 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.147 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.147 11:26:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.147 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.147 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.147 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.147 11:26:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.147 11:26:45 -- setup/common.sh@32 -- # continue 00:03:16.147 11:26:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.147 11:26:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.147 11:26:45 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.147 11:26:45 -- setup/common.sh@33 -- # echo 2048 00:03:16.147 11:26:45 -- setup/common.sh@33 -- # return 0 00:03:16.147 11:26:45 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:16.147 11:26:45 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:16.147 11:26:45 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:16.147 11:26:45 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:16.147 11:26:45 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:16.147 11:26:45 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
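The long field-by-field scan above is get_meminfo (setup/common.sh) answering one question: what is Hugepagesize? A standalone sketch of the same technique, not a verbatim copy of common.sh: slurp the meminfo file, strip the "Node N " prefix that only the per-node files carry, split each line on ': ', and print the value of the requested field.

  shopt -s extglob                      # for the +([0-9]) pattern below
  get_meminfo() {
      local get=$1 node=${2:-} var val _ line
      local mem_f=/proc/meminfo
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")  # per-node lines are "Node 0 key: val"
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }
  get_meminfo Hugepagesize              # -> 2048 (kB), as echoed in the trace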
00:03:16.147 11:26:45 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:16.147 11:26:45 -- setup/hugepages.sh@207 -- # get_nodes 00:03:16.147 11:26:45 -- setup/hugepages.sh@27 -- # local node 00:03:16.147 11:26:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:16.147 11:26:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:16.147 11:26:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:16.147 11:26:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:16.147 11:26:45 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:16.147 11:26:45 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:16.147 11:26:45 -- setup/hugepages.sh@208 -- # clear_hp 00:03:16.147 11:26:45 -- setup/hugepages.sh@37 -- # local node hp 00:03:16.147 11:26:45 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:16.147 11:26:45 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:16.147 11:26:45 -- setup/hugepages.sh@41 -- # echo 0 00:03:16.147 11:26:45 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:16.147 11:26:45 -- setup/hugepages.sh@41 -- # echo 0 00:03:16.147 11:26:45 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:16.147 11:26:45 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:16.147 11:26:45 -- setup/hugepages.sh@41 -- # echo 0 00:03:16.147 11:26:45 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:16.147 11:26:45 -- setup/hugepages.sh@41 -- # echo 0 00:03:16.147 11:26:45 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:16.147 11:26:45 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:16.147 11:26:45 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:16.147 11:26:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:16.147 11:26:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:16.147 11:26:45 -- common/autotest_common.sh@10 -- # set +x 00:03:16.147 ************************************ 00:03:16.147 START TEST default_setup 00:03:16.147 ************************************ 00:03:16.147 11:26:45 -- common/autotest_common.sh@1104 -- # default_setup 00:03:16.147 11:26:45 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:16.147 11:26:45 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:16.147 11:26:45 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:16.147 11:26:45 -- setup/hugepages.sh@51 -- # shift 00:03:16.147 11:26:45 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:16.147 11:26:45 -- setup/hugepages.sh@52 -- # local node_ids 00:03:16.147 11:26:45 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:16.147 11:26:45 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:16.147 11:26:45 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:16.147 11:26:45 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:16.147 11:26:45 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:16.147 11:26:45 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:16.147 11:26:45 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:16.147 11:26:45 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:16.147 11:26:45 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:16.147 11:26:45 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:16.147 11:26:45 -- 
setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:16.147 11:26:45 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:16.147 11:26:45 -- setup/hugepages.sh@73 -- # return 0 00:03:16.147 11:26:45 -- setup/hugepages.sh@137 -- # setup output 00:03:16.147 11:26:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:16.147 11:26:45 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:20.330 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:20.330 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:20.330 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:20.330 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:20.330 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:20.330 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:20.330 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:20.330 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:20.330 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:20.330 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:20.330 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:20.330 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:20.330 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:20.330 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:20.330 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:20.330 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:22.228 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:22.228 11:26:51 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:22.228 11:26:51 -- setup/hugepages.sh@89 -- # local node 00:03:22.228 11:26:51 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:22.228 11:26:51 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:22.228 11:26:51 -- setup/hugepages.sh@92 -- # local surp 00:03:22.228 11:26:51 -- setup/hugepages.sh@93 -- # local resv 00:03:22.228 11:26:51 -- setup/hugepages.sh@94 -- # local anon 00:03:22.228 11:26:51 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:22.228 11:26:51 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:22.228 11:26:51 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:22.228 11:26:51 -- setup/common.sh@18 -- # local node= 00:03:22.228 11:26:51 -- setup/common.sh@19 -- # local var val 00:03:22.228 11:26:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.228 11:26:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.228 11:26:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.228 11:26:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.228 11:26:51 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.228 11:26:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.228 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.228 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.228 11:26:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38088128 kB' 'MemAvailable: 41738424 kB' 'Buffers: 4096 kB' 'Cached: 16043804 kB' 'SwapCached: 0 kB' 'Active: 13052784 kB' 'Inactive: 3524076 kB' 'Active(anon): 12575388 kB' 'Inactive(anon): 0 kB' 'Active(file): 477396 kB' 'Inactive(file): 3524076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532944 kB' 'Mapped: 179140 kB' 'Shmem: 12046428 kB' 'KReclaimable: 291492 kB' 'Slab: 963132 kB' 'SReclaimable: 291492 kB' 'SUnreclaim: 671640 kB' 'KernelStack: 22512 kB' 'PageTables: 9008 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14042572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220176 kB' 'VmallocChunk: 0 kB' 'Percpu: 92288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4085108 kB' 'DirectMap2M: 36495360 kB' 'DirectMap1G: 28311552 kB' 00:03:22.228 11:26:51 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.228 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.228 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.228 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.228 11:26:51 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.228 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.228 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.228 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.228 11:26:51 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.228 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.228 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.228 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.228 11:26:51 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.228 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.228 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.228 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.228 11:26:51 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.228 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.228 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.228 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.228 11:26:51 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.228 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.228 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.228 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.228 11:26:51 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.228 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.228 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.228 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.228 11:26:51 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.228 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.228 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.228 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.228 11:26:51 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.228 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.228 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.228 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.228 11:26:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.228 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.228 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.228 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.228 11:26:51 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.228 11:26:51 -- setup/common.sh@32 
-- # continue 00:03:22.228 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.228 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.228 11:26:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.228 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.228 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.228 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.228 11:26:51 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.228 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.228 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.228 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.228 11:26:51 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.228 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.228 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.228 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.228 11:26:51 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.228 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.228 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.228 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.228 11:26:51 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.228 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.228 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.228 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.228 11:26:51 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.228 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.228 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.228 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.228 11:26:51 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.228 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.228 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.228 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ KReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.229 11:26:51 -- setup/common.sh@33 -- # echo 0 00:03:22.229 11:26:51 -- setup/common.sh@33 -- # return 0 00:03:22.229 11:26:51 -- setup/hugepages.sh@97 -- # anon=0 00:03:22.229 11:26:51 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:22.229 11:26:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.229 11:26:51 -- setup/common.sh@18 -- # local node= 00:03:22.229 11:26:51 -- setup/common.sh@19 -- # local var val 00:03:22.229 11:26:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.229 11:26:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.229 11:26:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.229 11:26:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.229 11:26:51 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.229 11:26:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38088928 kB' 'MemAvailable: 41739224 kB' 'Buffers: 4096 kB' 'Cached: 16043804 kB' 'SwapCached: 0 kB' 'Active: 13053604 kB' 'Inactive: 3524076 kB' 'Active(anon): 12576208 kB' 'Inactive(anon): 0 kB' 'Active(file): 477396 kB' 'Inactive(file): 3524076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533228 kB' 'Mapped: 179276 kB' 'Shmem: 12046428 kB' 'KReclaimable: 291492 kB' 'Slab: 963212 kB' 'SReclaimable: 291492 kB' 'SUnreclaim: 671720 kB' 'KernelStack: 22672 kB' 'PageTables: 9104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14042584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220144 kB' 'VmallocChunk: 0 kB' 'Percpu: 92288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4085108 kB' 'DirectMap2M: 36495360 kB' 
'DirectMap1G: 28311552 kB' 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 
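Each of these scans walks the whole meminfo snapshot to pull out a single number; the anon/surp/resv probes of verify_nr_hugepages are three such walks in a row. Outside the harness, one awk call per field yields the same values. A compact equivalent, not what common.sh itself does:

  anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # kB; 0 in the trace
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)  # pages; 0 in the trace
  resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)  # pages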
00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.229 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.229 11:26:51 -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.230 11:26:51 
-- setup/common.sh@31 -- # read -r var val _ 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
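The "Node +([0-9]) " strip applied after every mapfile call looks redundant in these traces, since node= is empty and /proc/meminfo carries no such prefix; it matters when a node id is passed, because the per-node files prefix every line with "Node 0 ". A hedged sketch of that path:

  # Per-node lines look like "Node 0 HugePages_Total:  1024"; the
  # extglob strip reduces them to the same "key: value" shape as
  # /proc/meminfo, so one field loop serves both files.
  shopt -s extglob
  node=0
  mapfile -t mem < "/sys/devices/system/node/node$node/meminfo"
  mem=("${mem[@]#Node +([0-9]) }")
  printf '%s\n' "${mem[@]}" | awk '/^HugePages_Total:/ {print $2}'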
00:03:22.230 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.230 11:26:51 -- setup/common.sh@33 -- # echo 0 00:03:22.230 11:26:51 -- setup/common.sh@33 -- # return 0 00:03:22.230 11:26:51 -- setup/hugepages.sh@99 -- # surp=0 00:03:22.230 11:26:51 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:22.230 11:26:51 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:22.230 11:26:51 -- setup/common.sh@18 -- # local node= 00:03:22.230 11:26:51 -- setup/common.sh@19 -- # local var val 00:03:22.230 11:26:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.230 11:26:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.230 11:26:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.230 11:26:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.230 11:26:51 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.230 11:26:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.230 11:26:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38089768 kB' 'MemAvailable: 41740064 kB' 'Buffers: 4096 kB' 'Cached: 16043804 kB' 'SwapCached: 0 kB' 'Active: 13053092 kB' 'Inactive: 3524076 kB' 'Active(anon): 12575696 kB' 'Inactive(anon): 0 kB' 'Active(file): 477396 kB' 'Inactive(file): 3524076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532704 kB' 'Mapped: 179276 kB' 'Shmem: 12046428 kB' 'KReclaimable: 291492 kB' 'Slab: 963212 kB' 'SReclaimable: 291492 kB' 'SUnreclaim: 671720 kB' 'KernelStack: 22736 kB' 'PageTables: 9084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14042596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220096 kB' 'VmallocChunk: 0 kB' 'Percpu: 92288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4085108 kB' 'DirectMap2M: 36495360 kB' 'DirectMap1G: 28311552 kB' 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # continue 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.230 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.230 11:26:51 -- setup/common.sh@32 -- # [[ Buffers == 
00:03:22.230 11:26:51 -- setup/common.sh@32 -- # [xtrace condensed: each key in the snapshot above is compared against HugePages_Rsvd and skipped via continue until the match] 00:03:22.231 11:26:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.231 11:26:51 -- setup/common.sh@33 -- # echo 0 00:03:22.231 11:26:51 -- setup/common.sh@33 -- # return 0 00:03:22.231 11:26:51 -- setup/hugepages.sh@100 -- # resv=0 00:03:22.231 11:26:51 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:22.231 nr_hugepages=1024 00:03:22.232 11:26:51 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:22.232 resv_hugepages=0 00:03:22.232 11:26:51 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:22.232 surplus_hugepages=0 00:03:22.232 11:26:51 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:22.232 anon_hugepages=0 00:03:22.232 11:26:51 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:22.232 11:26:51 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
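The two arithmetic tests are the pass criterion for default_setup: the kernel's HugePages_Total must equal the requested count plus whatever is reported as surplus and reserved (all zero in this run, so 1024 == 1024 + 0 + 0). An equivalent one-off check, not the harness's code, could read:

  nr_hugepages=1024    # what the test requested
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
  resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"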
00:03:22.232 11:26:51 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:22.232 11:26:51 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:22.232 11:26:51 -- setup/common.sh@18 -- # local node= 00:03:22.232 11:26:51 -- setup/common.sh@19 -- # local var val 00:03:22.232 11:26:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.232 11:26:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.232 11:26:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.232 11:26:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.232 11:26:51 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.232 11:26:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.232 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.232 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.232 11:26:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38088852 kB' 'MemAvailable: 41739148 kB' 'Buffers: 4096 kB' 'Cached: 16043804 kB' 'SwapCached: 0 kB' 'Active: 13053260 kB' 'Inactive: 3524076 kB' 'Active(anon): 12575864 kB' 'Inactive(anon): 0 kB' 'Active(file): 477396 kB' 'Inactive(file): 3524076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532836 kB' 'Mapped: 179192 kB' 'Shmem: 12046428 kB' 'KReclaimable: 291492 kB' 'Slab: 963168 kB' 'SReclaimable: 291492 kB' 'SUnreclaim: 671676 kB' 'KernelStack: 22736 kB' 'PageTables: 9208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14042612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220192 kB' 'VmallocChunk: 0 kB' 'Percpu: 92288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4085108 kB' 'DirectMap2M: 36495360 kB' 'DirectMap1G: 28311552 kB' 00:03:22.232 11:26:51 -- setup/common.sh@32 -- # [xtrace condensed: key-by-key scan of the snapshot above for HugePages_Total]
00:03:22.233 11:26:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.233 11:26:51 -- setup/common.sh@33 -- # echo 1024 00:03:22.233 11:26:51 -- setup/common.sh@33 -- # return 0 00:03:22.233 11:26:51 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:22.233 11:26:51 -- setup/hugepages.sh@112 -- # get_nodes 00:03:22.233 11:26:51 -- setup/hugepages.sh@27 -- # local node 00:03:22.233 11:26:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:22.233 11:26:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:22.233 11:26:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:22.233 11:26:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:22.233 11:26:51 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:22.233 11:26:51 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:22.233 11:26:51 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:22.233 11:26:51 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:22.233 11:26:51 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:22.233 11:26:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.233 11:26:51 -- setup/common.sh@18 -- # local node=0 00:03:22.233 11:26:51 -- setup/common.sh@19 -- # local var val 00:03:22.233 11:26:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.233 11:26:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.233 11:26:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:22.233 11:26:51 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:22.233 11:26:51 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.233 11:26:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.233 11:26:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.233 11:26:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.233 11:26:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 20734796 kB' 'MemUsed: 11857288 kB' 'SwapCached: 0 kB' 'Active: 7902096 kB' 'Inactive: 269776 kB' 'Active(anon): 7624788 kB' 'Inactive(anon): 0 kB' 'Active(file): 277308 kB' 'Inactive(file): 269776 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7828976 kB' 'Mapped: 91544 kB' 'AnonPages: 346040 kB' 'Shmem: 7281892 kB' 'KernelStack: 12200 kB' 'PageTables: 6172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 160156 kB' 'Slab: 501432 kB' 'SReclaimable: 160156 kB' 'SUnreclaim: 341276 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
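Note the difference from the earlier calls: get_meminfo was invoked as get_meminfo HugePages_Surp 0, so node=0, the @23 existence test succeeds, and @24 retargets mem_f at the per-node sysfs file; the @29 expansion then strips the 'Node 0 ' prefix the kernel prints on every per-node line. A sketch of that mechanism, assuming a two-socket box like this one:

  node=0
  mem_f=/proc/meminfo
  # Per-node counters live in sysfs; each line there reads "Node 0 MemTotal: ...".
  [[ -e /sys/devices/system/node/node${node}/meminfo ]] &&
      mem_f=/sys/devices/system/node/node${node}/meminfo
  shopt -s extglob                     # needed for the +([0-9]) pattern below
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")     # drop the "Node N " prefix
  printf '%s\n' "${mem[@]}" | grep '^HugePages'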
00:03:22.233 11:26:51 -- setup/common.sh@32 -- # [xtrace condensed: key-by-key scan of the node0 snapshot above for HugePages_Surp] 00:03:22.234 11:26:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.234 11:26:51 -- setup/common.sh@33 -- # echo 0 00:03:22.234 11:26:51 -- setup/common.sh@33 -- # return 0 00:03:22.234 11:26:51 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:22.234 11:26:51 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:22.234 11:26:51 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:22.234 11:26:51 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:22.234 11:26:51 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:22.234 node0=1024 expecting 1024 00:03:22.234 11:26:51 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:22.234 00:03:22.234 real 0m6.482s 00:03:22.234 user 0m1.641s 00:03:22.234 sys 0m2.952s 00:03:22.234 11:26:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:22.234 11:26:51 -- common/autotest_common.sh@10 -- # set +x 00:03:22.234 ************************************ 00:03:22.234 END TEST default_setup 00:03:22.234 ************************************ 00:03:22.491 11:26:51 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:22.491 11:26:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:22.491 11:26:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:22.491 11:26:51 -- common/autotest_common.sh@10 -- # set +x 00:03:22.491 ************************************ 00:03:22.491 START TEST per_node_1G_alloc 00:03:22.491 ************************************ 00:03:22.492 11:26:51 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:03:22.492 11:26:51 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:22.492 11:26:51 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:22.492 11:26:51 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:22.492 11:26:51 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:22.492 11:26:51 -- setup/hugepages.sh@51 -- # shift 00:03:22.492 11:26:51 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:22.492 11:26:51 -- setup/hugepages.sh@52 -- # local node_ids 00:03:22.492 11:26:51 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:22.492 11:26:51 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:22.492 11:26:51 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:22.492 11:26:51 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:22.492 11:26:51 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:22.492 11:26:51 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:22.492 11:26:51 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:22.492 11:26:51 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:22.492 11:26:51 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:22.492 11:26:51 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:22.492 11:26:51 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:22.492 11:26:51 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:22.492 11:26:51 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:22.492 11:26:51 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:22.492 11:26:51 -- setup/hugepages.sh@73 -- # return 0
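per_node_1G_alloc requests one gigabyte of hugepage memory on each NUMA node, and the nr_hugepages=512 in the trace is just that size divided by the default hugepage size: 1048576 kB / 2048 kB = 512 pages per node. Spelled out (variable names here are illustrative, not the script's):

  size_kb=1048576                                                     # 1 GiB per node, in kB
  hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 on this box
  nr_hugepages=$(( size_kb / hugepagesize_kb ))                       # 512
  echo "NRHUGE=$nr_hugepages HUGENODE=0,1"   # the request handed to scripts/setup.sh below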
00:03:22.492 11:26:51 -- setup/hugepages.sh@146 -- # NRHUGE=512 11:26:51 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:22.492 11:26:51 -- setup/hugepages.sh@146 -- # setup output 00:03:22.492 11:26:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:22.492 11:26:51 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:26.677 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:26.677 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:26.677 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:26.677 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:26.677 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:26.677 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:26.677 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:26.677 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:26.677 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:26.677 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:26.677 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:26.677 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:26.677 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:26.677 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:26.677 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:26.677 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:26.677 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:26.677 11:26:55 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:26.677 11:26:55 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:26.677 11:26:55 -- setup/hugepages.sh@89 -- # local node 00:03:26.677 11:26:55 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:26.677 11:26:55 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:26.677 11:26:55 -- setup/hugepages.sh@92 -- # local surp 00:03:26.677 11:26:55 -- setup/hugepages.sh@93 -- # local resv 00:03:26.677 11:26:55 -- setup/hugepages.sh@94 -- # local anon 00:03:26.677 11:26:55 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:26.677 11:26:55 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:26.677 11:26:55 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:26.677 11:26:55 -- setup/common.sh@18 -- # local node= 00:03:26.677 11:26:55 -- setup/common.sh@19 -- # local var val 00:03:26.677 11:26:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.677 11:26:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.677 11:26:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.677 11:26:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.677 11:26:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.677 11:26:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.677 11:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.677 11:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 11:26:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38126364 kB' 'MemAvailable: 41776660 kB' 'Buffers: 4096 kB' 'Cached: 16043940 kB' 'SwapCached: 0 kB' 'Active: 13051132 kB' 'Inactive: 3524076 kB' 'Active(anon): 12573736 kB' 'Inactive(anon): 0 kB' 'Active(file): 477396 kB' 'Inactive(file): 3524076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530436 kB' 'Mapped: 178116 kB'
'Shmem: 12046564 kB' 'KReclaimable: 291492 kB' 'Slab: 963420 kB' 'SReclaimable: 291492 kB' 'SUnreclaim: 671928 kB' 'KernelStack: 22448 kB' 'PageTables: 8352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14032272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220128 kB' 'VmallocChunk: 0 kB' 'Percpu: 92288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4085108 kB' 'DirectMap2M: 36495360 kB' 'DirectMap1G: 28311552 kB' 00:03:26.678 11:26:55 -- setup/common.sh@32 -- # [xtrace condensed: key-by-key scan of the snapshot above for AnonHugePages]
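The @96 test above, [[ always [madvise] never != *\[\n\e\v\e\r\]* ]], reads the kernel's transparent-hugepage mode string, where the bracketed word is the active mode; only when the active mode is not [never] does verify_nr_hugepages bother fetching AnonHugePages, since THP is what would make it nonzero. A sketch of that guard using the standard sysfs path:

  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
  if [[ $thp != *"[never]"* ]]; then
      # THP can hand out anonymous hugepages, so they are accounted separately.
      anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # 0 in this run
  fi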
setup/common.sh@32 -- # continue 00:03:26.678 11:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 11:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 11:26:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.678 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.678 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 11:26:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.678 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.678 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 11:26:56 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.678 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.678 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 11:26:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.678 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.678 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 11:26:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.678 11:26:56 -- setup/common.sh@33 -- # echo 0 00:03:26.678 11:26:56 -- setup/common.sh@33 -- # return 0 00:03:26.678 11:26:56 -- setup/hugepages.sh@97 -- # anon=0 00:03:26.678 11:26:56 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:26.678 11:26:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.678 11:26:56 -- setup/common.sh@18 -- # local node= 00:03:26.678 11:26:56 -- setup/common.sh@19 -- # local var val 00:03:26.679 11:26:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.679 11:26:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.679 11:26:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.679 11:26:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.679 11:26:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.679 11:26:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.679 11:26:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38127480 kB' 'MemAvailable: 41777776 kB' 'Buffers: 4096 kB' 'Cached: 16043944 kB' 'SwapCached: 0 kB' 'Active: 13051280 kB' 'Inactive: 3524076 kB' 'Active(anon): 12573884 kB' 'Inactive(anon): 0 kB' 'Active(file): 477396 kB' 'Inactive(file): 3524076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530604 kB' 'Mapped: 178116 kB' 'Shmem: 12046568 kB' 'KReclaimable: 291492 kB' 'Slab: 963444 kB' 'SReclaimable: 291492 kB' 'SUnreclaim: 671952 kB' 'KernelStack: 22448 kB' 'PageTables: 8376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14032284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220112 kB' 'VmallocChunk: 0 kB' 'Percpu: 92288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4085108 kB' 
'DirectMap2M: 36495360 kB' 'DirectMap1G: 28311552 kB' 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 
00:03:26.679 11:26:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 
00:03:26.679 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:26.680 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.680 11:26:56 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.680 11:26:56 -- setup/common.sh@33 -- # echo 0 00:03:26.680 11:26:56 -- setup/common.sh@33 -- # return 0 00:03:26.680 11:26:56 -- setup/hugepages.sh@99 -- # surp=0 00:03:26.680 11:26:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:26.680 11:26:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:26.680 11:26:56 -- setup/common.sh@18 -- # local node= 00:03:26.680 11:26:56 -- setup/common.sh@19 -- # local var val 00:03:26.680 11:26:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.680 11:26:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.680 11:26:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.680 11:26:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.680 11:26:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.680 11:26:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.680 11:26:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38127732 kB' 'MemAvailable: 41778028 kB' 'Buffers: 4096 kB' 'Cached: 16043956 kB' 'SwapCached: 0 kB' 'Active: 13052040 kB' 'Inactive: 3524076 kB' 'Active(anon): 12574644 kB' 'Inactive(anon): 0 kB' 'Active(file): 477396 kB' 'Inactive(file): 3524076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531396 kB' 'Mapped: 178116 kB' 'Shmem: 12046580 kB' 'KReclaimable: 291492 kB' 'Slab: 963444 kB' 'SReclaimable: 291492 kB' 'SUnreclaim: 671952 kB' 'KernelStack: 22496 kB' 'PageTables: 8536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14036848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220112 kB' 'VmallocChunk: 0 kB' 'Percpu: 92288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4085108 kB' 'DirectMap2M: 36495360 kB' 'DirectMap1G: 28311552 kB' 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 
00:03:26.680 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.680 11:26:56 
-- setup/common.sh@32 -- # continue 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.680 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.680 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.681 11:26:56 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.681 11:26:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.681 11:26:56 -- setup/common.sh@33 -- # echo 0 00:03:26.681 11:26:56 -- setup/common.sh@33 -- # return 0 00:03:26.681 11:26:56 -- setup/hugepages.sh@100 -- # resv=0 00:03:26.681 11:26:56 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:26.681 nr_hugepages=1024 00:03:26.681 11:26:56 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:26.681 resv_hugepages=0 00:03:26.681 11:26:56 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:26.681 surplus_hugepages=0 00:03:26.681 11:26:56 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:26.681 anon_hugepages=0 00:03:26.681 11:26:56 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:26.681 11:26:56 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:26.681 11:26:56 -- setup/hugepages.sh@110 -- # 
get_meminfo HugePages_Total 00:03:26.681 11:26:56 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:26.681 11:26:56 -- setup/common.sh@18 -- # local node= 00:03:26.681 11:26:56 -- setup/common.sh@19 -- # local var val 00:03:26.681 11:26:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.681 11:26:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.681 11:26:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.681 11:26:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.681 11:26:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.681 11:26:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.681 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.682 11:26:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38127460 kB' 'MemAvailable: 41777756 kB' 'Buffers: 4096 kB' 'Cached: 16043956 kB' 'SwapCached: 0 kB' 'Active: 13051872 kB' 'Inactive: 3524076 kB' 'Active(anon): 12574476 kB' 'Inactive(anon): 0 kB' 'Active(file): 477396 kB' 'Inactive(file): 3524076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531164 kB' 'Mapped: 178116 kB' 'Shmem: 12046580 kB' 'KReclaimable: 291492 kB' 'Slab: 963436 kB' 'SReclaimable: 291492 kB' 'SUnreclaim: 671944 kB' 'KernelStack: 22448 kB' 'PageTables: 8380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14036656 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220128 kB' 'VmallocChunk: 0 kB' 'Percpu: 92288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4085108 kB' 'DirectMap2M: 36495360 kB' 'DirectMap1G: 28311552 kB' 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.682 11:26:56 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.682 11:26:56 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.682 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.682 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.683 11:26:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.683 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.683 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.683 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.683 11:26:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.683 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.683 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.683 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.683 11:26:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.683 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.683 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.683 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.683 11:26:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.683 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.683 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.683 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.683 11:26:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.683 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.683 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.683 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.683 11:26:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.683 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.683 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.683 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.683 11:26:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.683 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.683 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.683 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.683 11:26:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.683 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.683 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.683 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.683 11:26:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.683 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.683 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.683 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.683 11:26:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.683 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.683 11:26:56 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:26.683 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.683 11:26:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.683 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.683 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.683 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.683 11:26:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.683 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.683 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.683 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.683 11:26:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.683 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.683 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.683 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.683 11:26:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.683 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.683 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.683 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.683 11:26:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.683 11:26:56 -- setup/common.sh@32 -- # continue 00:03:26.683 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.683 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.683 11:26:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.683 11:26:56 -- setup/common.sh@33 -- # echo 1024 00:03:26.683 11:26:56 -- setup/common.sh@33 -- # return 0 00:03:26.683 11:26:56 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:26.683 11:26:56 -- setup/hugepages.sh@112 -- # get_nodes 00:03:26.683 11:26:56 -- setup/hugepages.sh@27 -- # local node 00:03:26.942 11:26:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.942 11:26:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:26.942 11:26:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.942 11:26:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:26.942 11:26:56 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:26.942 11:26:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:26.942 11:26:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:26.942 11:26:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:26.942 11:26:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:26.942 11:26:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.942 11:26:56 -- setup/common.sh@18 -- # local node=0 00:03:26.942 11:26:56 -- setup/common.sh@19 -- # local var val 00:03:26.942 11:26:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.942 11:26:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.943 11:26:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:26.943 11:26:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:26.943 11:26:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.943 11:26:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.943 11:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.943 11:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.943 11:26:56 -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 32592084 kB' 'MemFree: 21821296 kB' 'MemUsed: 10770788 kB' 'SwapCached: 0 kB' 'Active: 7903216 kB' 'Inactive: 269776 kB' 'Active(anon): 7625908 kB' 'Inactive(anon): 0 kB' 'Active(file): 277308 kB' 'Inactive(file): 269776 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7829044 kB' 'Mapped: 91036 kB' 'AnonPages: 347256 kB' 'Shmem: 7281960 kB' 'KernelStack: 12040 kB' 'PageTables: 5656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 160156 kB' 'Slab: 501828 kB' 'SReclaimable: 160156 kB' 'SUnreclaim: 341672 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:26.943 11:26:56 [xtrace elided: setup/common.sh@31-32 compares each node0 key above against HugePages_Surp, one compare-and-continue per field]
00:03:26.944 11:26:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.944 11:26:56 -- setup/common.sh@33 -- # echo 0
00:03:26.944 11:26:56 -- setup/common.sh@33 -- # return 0
00:03:26.944 11:26:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:26.944 11:26:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:26.944 11:26:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:26.944 11:26:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:26.944 11:26:56 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:26.944 11:26:56 -- setup/common.sh@18 -- # local node=1
00:03:26.944 11:26:56 -- setup/common.sh@19 -- # local var val
00:03:26.944 11:26:56 -- setup/common.sh@20 -- # local mem_f mem
00:03:26.944 11:26:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.944 11:26:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:26.944 11:26:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:26.944 11:26:56 -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.944 11:26:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.944 11:26:56 -- setup/common.sh@31 -- # IFS=': '
00:03:26.944 11:26:56 -- setup/common.sh@31 -- # read -r var val _
00:03:26.944 11:26:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 16308460 kB' 'MemUsed: 11394648 kB' 'SwapCached: 0 kB' 'Active: 5149012 kB' 'Inactive: 3254300 kB' 'Active(anon): 4948924 kB' 'Inactive(anon): 0 kB' 'Active(file): 200088 kB' 'Inactive(file): 3254300 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8219052 kB' 'Mapped: 87080 kB' 'AnonPages: 184300 kB' 'Shmem: 4764664 kB' 'KernelStack: 10504 kB' 'PageTables: 2660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131336 kB' 'Slab: 461640 kB' 'SReclaimable: 131336 kB' 'SUnreclaim: 330304 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:26.944 11:26:56 [xtrace elided: same setup/common.sh@31-32 key scan over the node1 snapshot]
00:03:26.945 11:26:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.945 11:26:56 -- setup/common.sh@33 -- # echo 0
00:03:26.945 11:26:56 -- setup/common.sh@33 -- # return 0
00:03:26.945 11:26:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:26.945 11:26:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:26.945 11:26:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:26.945 11:26:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:26.945 11:26:56 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:26.945 node0=512 expecting 512
00:03:26.945 11:26:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:26.945 11:26:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:26.945 11:26:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:26.945 11:26:56 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:26.945 node1=512 expecting 512
00:03:26.945 11:26:56 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:26.945
00:03:26.945 real 0m4.466s
00:03:26.945 user 0m1.631s
00:03:26.945 sys 0m2.919s
00:03:26.945 11:26:56 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:26.945 11:26:56 -- common/autotest_common.sh@10 -- # set +x
00:03:26.945 ************************************
00:03:26.945 END TEST per_node_1G_alloc
00:03:26.945 ************************************
00:03:26.945 11:26:56 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
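The per_node_1G_alloc trace above is dominated by one small parsing routine: setup/common.sh reads a (per-node) meminfo file into an array, strips any "Node N " prefix, and scans key/value pairs until the requested key matches. A minimal standalone sketch of that pattern, assuming the visible xtrace reflects the whole routine (the helper name below is illustrative, not SPDK's):

    #!/usr/bin/env bash
    shopt -s extglob  # needed for the +([0-9]) pattern that strips "Node N " prefixes

    # meminfo_get KEY [NODE] -- sketch of the lookup traced above (illustrative name).
    # Prints the value of KEY from /proc/meminfo, or from
    # /sys/devices/system/node/nodeN/meminfo when NODE is given.
    meminfo_get() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        local -a mem
        local line var val _
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # per-node lines start with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue  # the compare-and-continue seen in the xtrace
            echo "$val"
            return 0
        done
        return 1
    }

    meminfo_get HugePages_Surp 0  # e.g. prints 0, matching the node0 snapshot above

Because xtrace prints every iteration of that loop, a single lookup emits one compare-and-continue pair per meminfo field, which is why each snapshot above is followed by such a long run of identical statements.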
11:26:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:26.945 11:26:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:26.945 11:26:56 -- common/autotest_common.sh@10 -- # set +x 00:03:26.945 ************************************ 00:03:26.945 START TEST even_2G_alloc 00:03:26.945 ************************************ 00:03:26.945 11:26:56 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:03:26.945 11:26:56 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:26.945 11:26:56 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:26.945 11:26:56 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:26.945 11:26:56 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:26.945 11:26:56 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:26.945 11:26:56 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:26.945 11:26:56 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:26.945 11:26:56 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:26.945 11:26:56 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:26.945 11:26:56 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:26.945 11:26:56 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:26.945 11:26:56 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:26.945 11:26:56 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:26.945 11:26:56 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:26.945 11:26:56 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:26.945 11:26:56 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:26.945 11:26:56 -- setup/hugepages.sh@83 -- # : 512 00:03:26.945 11:26:56 -- setup/hugepages.sh@84 -- # : 1 00:03:26.945 11:26:56 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:26.945 11:26:56 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:26.945 11:26:56 -- setup/hugepages.sh@83 -- # : 0 00:03:26.945 11:26:56 -- setup/hugepages.sh@84 -- # : 0 00:03:26.945 11:26:56 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:26.945 11:26:56 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:26.945 11:26:56 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:26.945 11:26:56 -- setup/hugepages.sh@153 -- # setup output 00:03:26.945 11:26:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:26.945 11:26:56 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:31.145 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:31.145 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:31.145 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:31.145 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:31.145 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:31.145 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:31.145 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:31.145 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:31.145 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:31.145 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:31.145 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:31.145 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:31.145 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:31.145 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:31.145 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:31.145 0000:80:04.0 (8086 2021): 
Already using the vfio-pci driver 00:03:31.145 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:31.145 11:27:00 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:31.145 11:27:00 -- setup/hugepages.sh@89 -- # local node
00:03:31.145 11:27:00 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:31.145 11:27:00 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:31.145 11:27:00 -- setup/hugepages.sh@92 -- # local surp
00:03:31.145 11:27:00 -- setup/hugepages.sh@93 -- # local resv
00:03:31.145 11:27:00 -- setup/hugepages.sh@94 -- # local anon
00:03:31.145 11:27:00 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:31.145 11:27:00 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:31.145 11:27:00 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:31.145 11:27:00 -- setup/common.sh@18 -- # local node=
00:03:31.145 11:27:00 -- setup/common.sh@19 -- # local var val
00:03:31.145 11:27:00 -- setup/common.sh@20 -- # local mem_f mem
00:03:31.145 11:27:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:31.145 11:27:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:31.145 11:27:00 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:31.145 11:27:00 -- setup/common.sh@28 -- # mapfile -t mem
00:03:31.145 11:27:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:31.145 11:27:00 -- setup/common.sh@31 -- # IFS=': '
00:03:31.145 11:27:00 -- setup/common.sh@31 -- # read -r var val _
00:03:31.145 11:27:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38129768 kB' 'MemAvailable: 41780064 kB' 'Buffers: 4096 kB' 'Cached: 16044080 kB' 'SwapCached: 0 kB' 'Active: 13054924 kB' 'Inactive: 3524076 kB' 'Active(anon): 12577528 kB' 'Inactive(anon): 0 kB' 'Active(file): 477396 kB' 'Inactive(file): 3524076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534544 kB' 'Mapped: 178468 kB' 'Shmem: 12046704 kB' 'KReclaimable: 291492 kB' 'Slab: 962844 kB' 'SReclaimable: 291492 kB' 'SUnreclaim: 671352 kB' 'KernelStack: 22608 kB' 'PageTables: 8868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14037348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220192 kB' 'VmallocChunk: 0 kB' 'Percpu: 92288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4085108 kB' 'DirectMap2M: 36495360 kB' 'DirectMap1G: 28311552 kB'
00:03:31.145 11:27:00 [xtrace elided: setup/common.sh@31-32 scans every key above for AnonHugePages, one compare-and-continue per field]
00:03:31.146 11:27:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:31.146 11:27:00 -- setup/common.sh@33 -- # echo 0
00:03:31.146 11:27:00 -- setup/common.sh@33 -- # return 0
00:03:31.146 11:27:00 -- setup/hugepages.sh@97 -- # anon=0
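The anon=0 result just traced comes from the guard at hugepages.sh@96-97: if transparent hugepages are not pinned to [never], verify_nr_hugepages fetches AnonHugePages so THP-backed memory can be accounted for separately from the hugetlb pool. A short sketch of that check, assuming only what the trace shows (the sysfs/procfs paths are standard; the surrounding logic is inferred):

    # Sketch of the THP guard traced at hugepages.sh@96-97 (logic inferred from the trace).
    anon=0
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        # THP may be in use; read the kB of anonymous THP currently mapped.
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    fi
    echo "anon=$anon"  # 0 kB in the snapshot above, so nothing to discount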
00:03:31.146 11:27:00 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:31.146 11:27:00 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:31.146 11:27:00 -- setup/common.sh@18 -- # local node=
00:03:31.146 11:27:00 -- setup/common.sh@19 -- # local var val
00:03:31.146 11:27:00 -- setup/common.sh@20 -- # local mem_f mem
00:03:31.146 11:27:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:31.146 11:27:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:31.146 11:27:00 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:31.146 11:27:00 -- setup/common.sh@28 -- # mapfile -t mem
00:03:31.146 11:27:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:31.146 11:27:00 -- setup/common.sh@31 -- # IFS=': '
00:03:31.146 11:27:00 -- setup/common.sh@31 -- # read -r var val _
00:03:31.146 11:27:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38129352 kB' 'MemAvailable: 41779648 kB' 'Buffers: 4096 kB' 'Cached: 16044084 kB' 'SwapCached: 0 kB' 'Active: 13056636 kB' 'Inactive: 3524076 kB' 'Active(anon): 12579240 kB' 'Inactive(anon): 0 kB' 'Active(file): 477396 kB' 'Inactive(file): 3524076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536112 kB' 'Mapped: 178624 kB' 'Shmem: 12046708 kB' 'KReclaimable: 291492 kB' 'Slab: 962952 kB' 'SReclaimable: 291492 kB' 'SUnreclaim: 671460 kB' 'KernelStack: 22560 kB' 'PageTables: 8712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14040396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220160 kB' 'VmallocChunk: 0 kB' 'Percpu: 92288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4085108 kB' 'DirectMap2M: 36495360 kB' 'DirectMap1G: 28311552 kB'
00:03:31.146 11:27:00 [xtrace elided: setup/common.sh@31-32 scans every key above for HugePages_Surp, one compare-and-continue per field]
00:03:31.147 11:27:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:31.147 11:27:00 -- setup/common.sh@33 -- # echo 0
00:03:31.147 11:27:00 -- setup/common.sh@33 -- # return 0
00:03:31.147 11:27:00 -- setup/hugepages.sh@99 -- # surp=0
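With surp=0 recorded, the verifier reads HugePages_Rsvd next. All of these global counters come straight from /proc/meminfo, so they can also be collected in one pass; a tiny sketch over the same fields shown in the snapshots:

    # Read the hugetlb counters the verifier compares (same keys as the snapshots above).
    read -r total free rsvd surp < <(awk '
        /^HugePages_Total:/ {t=$2} /^HugePages_Free:/ {f=$2}
        /^HugePages_Rsvd:/  {r=$2} /^HugePages_Surp:/ {s=$2}
        END {print t, f, r, s}' /proc/meminfo)
    echo "total=$total free=$free rsvd=$rsvd surp=$surp"
    # In this run: total=1024 free=1024 rsvd=0 surp=0, i.e. the whole pool is idle.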
00:03:31.147 11:27:00 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:31.147 11:27:00 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:31.147 11:27:00 -- setup/common.sh@18 -- # local node=
00:03:31.147 11:27:00 -- setup/common.sh@19 -- # local var val
00:03:31.147 11:27:00 -- setup/common.sh@20 -- # local mem_f mem
00:03:31.147 11:27:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:31.147 11:27:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:31.147 11:27:00 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:31.147 11:27:00 -- setup/common.sh@28 -- # mapfile -t mem
00:03:31.147 11:27:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:31.147 11:27:00 -- setup/common.sh@31 -- # IFS=': '
00:03:31.147 11:27:00 -- setup/common.sh@31 -- # read -r var val _
00:03:31.148 11:27:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38128100 kB' 'MemAvailable: 41778396 kB' 'Buffers: 4096 kB' 'Cached: 16044084 kB' 'SwapCached: 0 kB' 'Active: 13060068 kB' 'Inactive: 3524076 kB' 'Active(anon): 12582672 kB' 'Inactive(anon): 0 kB' 'Active(file): 477396 kB' 'Inactive(file): 3524076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539544 kB' 'Mapped: 178624 kB' 'Shmem: 12046708 kB' 'KReclaimable: 291492 kB' 'Slab: 962952 kB' 'SReclaimable: 291492 kB' 'SUnreclaim: 671460 kB' 'KernelStack: 22560 kB' 'PageTables: 8712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14043852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220164 kB' 'VmallocChunk: 0 kB' 'Percpu: 92288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4085108 kB' 'DirectMap2M: 36495360 kB' 'DirectMap1G: 28311552 kB'
00:03:31.148 11:27:00 [xtrace elided: setup/common.sh@31-32 key scan for HugePages_Rsvd over the snapshot above]
11:27:00 -- setup/common.sh@32 -- # continue 00:03:31.148 11:27:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.148 11:27:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.148 11:27:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.148 11:27:00 -- setup/common.sh@32 -- # continue 00:03:31.148 11:27:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.148 11:27:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.148 11:27:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.148 11:27:00 -- setup/common.sh@32 -- # continue 00:03:31.148 11:27:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.148 11:27:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.148 11:27:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.148 11:27:00 -- setup/common.sh@32 -- # continue 00:03:31.148 11:27:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.148 11:27:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.148 11:27:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.148 11:27:00 -- setup/common.sh@32 -- # continue 00:03:31.148 11:27:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.148 11:27:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.148 11:27:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.148 11:27:00 -- setup/common.sh@32 -- # continue 00:03:31.148 11:27:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.148 11:27:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.148 11:27:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.148 11:27:00 -- setup/common.sh@32 -- # continue 00:03:31.148 11:27:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.148 11:27:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.148 11:27:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.148 11:27:00 -- setup/common.sh@32 -- # continue 00:03:31.148 11:27:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.148 11:27:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.148 11:27:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.149 11:27:00 -- setup/common.sh@32 -- # continue 00:03:31.149 11:27:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.149 11:27:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.149 11:27:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.149 11:27:00 -- setup/common.sh@32 -- # continue 00:03:31.149 11:27:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.149 11:27:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.149 11:27:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.149 11:27:00 -- setup/common.sh@32 -- # continue 00:03:31.149 11:27:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.149 11:27:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.149 11:27:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.149 11:27:00 -- setup/common.sh@32 -- # continue 00:03:31.149 11:27:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.149 11:27:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.149 11:27:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.149 11:27:00 -- setup/common.sh@32 -- # continue 00:03:31.149 11:27:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.149 11:27:00 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:31.149 11:27:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.149 11:27:00 -- setup/common.sh@32 -- # continue 00:03:31.149 11:27:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.149 11:27:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.149 11:27:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.149 11:27:00 -- setup/common.sh@32 -- # continue 00:03:31.149 11:27:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.149 11:27:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.149 11:27:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.149 11:27:00 -- setup/common.sh@32 -- # continue 00:03:31.149 11:27:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.149 11:27:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.149 11:27:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.149 11:27:00 -- setup/common.sh@32 -- # continue 00:03:31.149 11:27:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.149 11:27:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.149 11:27:00 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.149 11:27:00 -- setup/common.sh@33 -- # echo 0 00:03:31.149 11:27:00 -- setup/common.sh@33 -- # return 0 00:03:31.149 11:27:00 -- setup/hugepages.sh@100 -- # resv=0 00:03:31.149 11:27:00 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:31.149 nr_hugepages=1024 00:03:31.149 11:27:00 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:31.149 resv_hugepages=0 00:03:31.149 11:27:00 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:31.149 surplus_hugepages=0 00:03:31.149 11:27:00 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:31.149 anon_hugepages=0 00:03:31.149 11:27:00 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:31.149 11:27:00 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:31.149 11:27:00 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:31.149 11:27:00 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:31.149 11:27:00 -- setup/common.sh@18 -- # local node= 00:03:31.149 11:27:00 -- setup/common.sh@19 -- # local var val 00:03:31.149 11:27:00 -- setup/common.sh@20 -- # local mem_f mem 00:03:31.149 11:27:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.149 11:27:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.149 11:27:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.149 11:27:00 -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.149 11:27:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.149 11:27:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.149 11:27:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.149 11:27:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38128572 kB' 'MemAvailable: 41778868 kB' 'Buffers: 4096 kB' 'Cached: 16044112 kB' 'SwapCached: 0 kB' 'Active: 13055012 kB' 'Inactive: 3524076 kB' 'Active(anon): 12577616 kB' 'Inactive(anon): 0 kB' 'Active(file): 477396 kB' 'Inactive(file): 3524076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534484 kB' 'Mapped: 178624 kB' 'Shmem: 12046736 kB' 'KReclaimable: 291492 kB' 'Slab: 962960 kB' 'SReclaimable: 291492 kB' 'SUnreclaim: 671468 kB' 'KernelStack: 22560 kB' 
'PageTables: 8720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14039104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220176 kB' 'VmallocChunk: 0 kB' 'Percpu: 92288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4085108 kB' 'DirectMap2M: 36495360 kB' 'DirectMap1G: 28311552 kB' 00:03:31.149 11:27:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.149 11:27:00 -- setup/common.sh@32 -- # continue 00:03:31.149 11:27:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.149 11:27:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.149 11:27:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.149 11:27:00 -- setup/common.sh@32 -- # continue 00:03:31.149 11:27:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.149 11:27:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.149 11:27:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.149 11:27:00 -- setup/common.sh@32 -- # continue 00:03:31.149 11:27:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.149 11:27:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.149 11:27:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.149 11:27:00 -- setup/common.sh@32 -- # continue 00:03:31.149 11:27:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.149 11:27:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.149 11:27:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.149 11:27:00 -- setup/common.sh@32 -- # continue 00:03:31.149 11:27:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.149 11:27:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.149 11:27:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.149 11:27:00 -- setup/common.sh@32 -- # continue 00:03:31.149 11:27:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.149 11:27:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.149 11:27:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.149 11:27:00 -- setup/common.sh@32 -- # continue 00:03:31.149 11:27:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.149 11:27:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.149 11:27:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.149 11:27:00 -- setup/common.sh@32 -- # continue 00:03:31.149 11:27:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.149 11:27:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.149 11:27:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.149 11:27:00 -- setup/common.sh@32 -- # continue 00:03:31.149 11:27:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.149 11:27:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.149 11:27:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.149 11:27:00 -- setup/common.sh@32 -- # continue 00:03:31.149 11:27:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.149 11:27:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.149 11:27:00 -- setup/common.sh@32 -- # [[ Active(file) == 
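The scans traced here come from get_meminfo in setup/common.sh, which walks /proc/meminfo (or a node-local copy of it) key by key until it hits the requested field. A minimal reconstruction from the xtrace alone, assuming the loop plumbing below; it is a sketch, not the script verbatim:

    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2
        local var val _ line
        local mem_f=/proc/meminfo mem
        # Per-node queries read the node-local file when it exists (common.sh@23/@24).
        [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Node files prefix each line with "Node N "; strip it as the trace shows.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # On a match, print the value and stop; otherwise continue the scan.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

Against the dump above, get_meminfo HugePages_Total would print 1024, and get_meminfo HugePages_Surp 0 would read /sys/devices/system/node/node0/meminfo and print 0, matching the echo/return values in the trace.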
00:03:31.149 11:27:00 -- setup/common.sh@32 -- # [xtrace condensed: the '[[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue / IFS=': ' / read -r var val _' cycle repeats for every non-matching /proc/meminfo key from MemTotal through Unaccepted] 00:03:31.410 11:27:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.410 11:27:00 -- setup/common.sh@33 -- # echo 1024 00:03:31.410 11:27:00 -- setup/common.sh@33 -- # return 0
00:03:31.410 11:27:00 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:31.410 11:27:00 -- setup/hugepages.sh@112 -- # get_nodes 00:03:31.410 11:27:00 -- setup/hugepages.sh@27 -- # local node 00:03:31.410 11:27:00 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:31.410 11:27:00 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:31.410 11:27:00 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:31.410 11:27:00 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:31.410 11:27:00 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:31.410 11:27:00 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:31.410 11:27:00 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:31.410 11:27:00 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:31.410 11:27:00 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:31.410 11:27:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:31.410 11:27:00 -- setup/common.sh@18 -- # local node=0 00:03:31.410 11:27:00 -- setup/common.sh@19 -- # local var val 00:03:31.410 11:27:00 -- setup/common.sh@20 -- # local mem_f mem 00:03:31.410 11:27:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.410 11:27:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:31.410 11:27:00 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:31.410 11:27:00 -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.410 11:27:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.410 11:27:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.410 11:27:00 -- setup/common.sh@31 -- # read -r var val _
00:03:31.410 11:27:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 21804164 kB' 'MemUsed: 10787920 kB' 'SwapCached: 0 kB' 'Active: 7909456 kB' 'Inactive: 269776 kB' 'Active(anon): 7632148 kB' 'Inactive(anon): 0 kB' 'Active(file): 277308 kB' 'Inactive(file): 269776 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7829092 kB' 'Mapped: 91192 kB' 'AnonPages: 353452 kB' 'Shmem: 7282008 kB' 'KernelStack: 12024 kB' 'PageTables: 5976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 160156 kB' 'Slab: 501492 kB' 'SReclaimable: 160156 kB' 'SUnreclaim: 341336 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
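Two quick consistency checks on the node0 dump above, written as shell arithmetic with values copied straight from the trace: MemUsed should equal MemTotal minus MemFree, and the 512 huge pages held on this node pin exactly half of the test's 2 GiB pool.

    echo $(( 32592084 - 21804164 ))   # 10787920 kB, matches the MemUsed field above
    echo $(( 512 * 2048 ))            # 1048576 kB = 1 GiB of 2048 kB pages on node0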
00:03:31.410 11:27:00 -- setup/common.sh@32 -- # [xtrace condensed: the '[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue / IFS=': ' / read -r var val _' cycle repeats for every non-matching node0 meminfo key from MemTotal through HugePages_Free] 00:03:31.411 11:27:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.411 11:27:00 -- setup/common.sh@33 -- # echo 0 00:03:31.411 11:27:00 -- setup/common.sh@33 -- # return 0
00:03:31.411 11:27:00 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:31.411 11:27:00 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:31.411 11:27:00 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:31.411 11:27:00 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:31.411 11:27:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:31.411 11:27:00 -- setup/common.sh@18 -- # local node=1 00:03:31.411 11:27:00 -- setup/common.sh@19 -- # local var val 00:03:31.411 11:27:00 -- setup/common.sh@20 -- # local mem_f mem 00:03:31.411 11:27:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.411 11:27:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:31.411 11:27:00 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:31.411 11:27:00 -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.411 11:27:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.411 11:27:00 -- setup/common.sh@31 -- # IFS=': '
00:03:31.411 11:27:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 16324752 kB' 'MemUsed: 11378356 kB' 'SwapCached: 0 kB' 'Active: 5150084 kB' 'Inactive: 3254300 kB' 'Active(anon): 4949996 kB' 'Inactive(anon): 0 kB' 'Active(file): 200088 kB' 'Inactive(file): 3254300 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8219144 kB' 'Mapped: 87584 kB' 'AnonPages: 185464 kB' 'Shmem: 4764756 kB' 'KernelStack: 10536 kB' 'PageTables: 2760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131336 kB' 'Slab: 461468 kB' 'SReclaimable: 131336 kB' 'SUnreclaim: 330132 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
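Once the node1 read below completes, the trace folds reserved and surplus pages into each node's expected count and prints the 'node0=512 expecting 512' / 'node1=512 expecting 512' lines. A sketch of that tally, reusing the get_meminfo sketch given earlier; the variable plumbing is inferred from the xtrace, not lifted from hugepages.sh:

    nodes_test=(512 512)   # filled from nodes_sys by get_nodes, per the trace
    nodes_sys=(512 512)
    resv=0
    declare -A sorted_t sorted_s
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                        # hugepages.sh@116
        surp=$(get_meminfo HugePages_Surp "$node")            # hugepages.sh@117
        (( nodes_test[node] += surp ))
    done
    for node in "${!nodes_test[@]}"; do
        sorted_t[${nodes_test[node]}]=1                       # collect distinct counts
        sorted_s[${nodes_sys[node]}]=1
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
    [[ ${!sorted_s[*]} == "${!sorted_t[*]}" ]]                # the final '512 == 512' check

With both nodes at 512 pages and zero surplus, the distinct-count sets on each side collapse to "512", which is exactly the '[[ 512 == \5\1\2 ]]' comparison the trace records before the test passes.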
00:03:31.411 11:27:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.412 11:27:00 -- setup/common.sh@32 -- # [xtrace condensed: the '[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue / IFS=': ' / read -r var val _' cycle repeats for every non-matching node1 meminfo key from MemTotal through HugePages_Free] 00:03:31.412 11:27:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.412 11:27:00 -- setup/common.sh@33 -- # echo 0 00:03:31.412 11:27:00 -- setup/common.sh@33 -- # return 0 00:03:31.412 11:27:00 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:31.412 11:27:00 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:31.412 11:27:00 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:31.412 11:27:00 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:31.412 11:27:00 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:31.412 node0=512 expecting 512 00:03:31.412 11:27:00 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:31.412 11:27:00 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:31.412 11:27:00 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:31.412 11:27:00 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:31.412 node1=512 expecting 512 00:03:31.412 11:27:00 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:31.412 00:03:31.412 real 0m4.434s 00:03:31.412 user 0m1.629s 00:03:31.412 sys 0m2.885s 00:03:31.412 11:27:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:31.412 11:27:00 -- common/autotest_common.sh@10 -- # set +x
00:03:31.412 ************************************ 00:03:31.412 END TEST even_2G_alloc 00:03:31.412 ************************************
00:03:31.412 11:27:00 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:31.412 11:27:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:31.412 11:27:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:31.412 11:27:00 -- common/autotest_common.sh@10 -- # set +x
00:03:31.412 ************************************ 00:03:31.412 START TEST odd_alloc 00:03:31.412 ************************************
00:03:31.412 11:27:00 -- common/autotest_common.sh@1104 -- # odd_alloc 00:03:31.412 11:27:00 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:31.412 11:27:00 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:31.412 11:27:00 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:31.412 11:27:00 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:31.412 11:27:00 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:31.412 11:27:00 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:31.412 11:27:00 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:31.412 11:27:00 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:31.412 11:27:00 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:31.412 11:27:00 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:31.412 11:27:00 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:31.412 11:27:00 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:31.412 11:27:00 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:31.412 11:27:00 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:31.412 11:27:00 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:31.412 11:27:00 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:31.413 11:27:00 -- setup/hugepages.sh@83 -- # : 513 00:03:31.413 11:27:00 -- setup/hugepages.sh@84 -- # : 1 00:03:31.413 11:27:00 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:31.413 11:27:00 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:31.413 11:27:00 -- setup/hugepages.sh@83 -- # : 0 00:03:31.413 11:27:00 -- setup/hugepages.sh@84 -- # : 0 00:03:31.413 11:27:00 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:31.413 11:27:00 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:31.413 11:27:00 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:31.413 11:27:00 -- setup/hugepages.sh@160 -- # setup output 00:03:31.413 11:27:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:31.413 11:27:00 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:03:35.595 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:35.596 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:35.596 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:35.596 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:35.596 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:35.596 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:35.596 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:35.596 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:35.596 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:35.596 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:35.596 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:35.596 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:35.596 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:35.596 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:35.596 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:35.596 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:35.596 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:35.596 11:27:04 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:35.596 11:27:04 -- setup/hugepages.sh@89 -- # local node 00:03:35.596 11:27:04 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:35.596 11:27:04 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:35.596 11:27:04 -- setup/hugepages.sh@92 -- # local surp 00:03:35.596 11:27:04 -- setup/hugepages.sh@93 -- # local resv 00:03:35.596 11:27:04 -- setup/hugepages.sh@94 -- # local anon 00:03:35.596 11:27:04 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:35.596 11:27:04 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:35.596 11:27:04 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:35.596 11:27:04 -- setup/common.sh@18 -- # local node= 00:03:35.596 11:27:04 -- setup/common.sh@19 -- # local var val 00:03:35.596 11:27:04 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.596 11:27:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.596 11:27:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.596 11:27:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.596 11:27:04 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.596 11:27:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.596 11:27:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.596 11:27:04 -- setup/common.sh@31 -- # read -r var val _
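odd_alloc requests size=2098176 kB, which at 2048 kB per page is 1024.5 pages, rounded up to the odd count nr_hugepages=1025 (hence HUGEMEM=2049 MB, with HUGE_EVEN_ALLOC=yes spreading them). The nodes_test assignments above, node1 getting 512 and then node0 getting 513, are consistent with a floor-divide-and-carry split. A sketch under that assumption, not lifted from hugepages.sh; note its ':' no-ops reproduce the ': 513 / : 1 / : 0 / : 0' lines in the trace:

    _nr_hugepages=1025
    _no_nodes=2
    nodes_test=()
    while (( _no_nodes > 0 )); do                                      # hugepages.sh@81
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))     # floor share: 512, then 513
        : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))            # remainder carries to lower nodes
        : $(( _no_nodes -= 1 ))
    done
    echo "${nodes_test[@]}"    # 513 512, i.e. 1025 pages over 2 nodes

The meminfo dump that follows shows HugePages_Total: 1025 and Hugetlb: 2099200 kB (1025 x 2048), confirming the kernel honored the odd total.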
00:03:35.596 11:27:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38075912 kB' 'MemAvailable: 41726208 kB' 'Buffers: 4096 kB' 'Cached: 16044232 kB' 'SwapCached: 0 kB' 'Active: 13057036 kB' 'Inactive: 3524076 kB' 'Active(anon): 12579640 kB' 'Inactive(anon): 0 kB' 'Active(file): 477396 kB' 'Inactive(file): 3524076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535636 kB' 'Mapped: 178296 kB' 'Shmem: 12046856 kB' 'KReclaimable: 291492 kB' 'Slab: 962288 kB' 'SReclaimable: 291492 kB' 'SUnreclaim: 670796 kB' 'KernelStack: 22688 kB' 'PageTables: 8772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 14038704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220208 kB' 'VmallocChunk: 0 kB' 'Percpu: 92288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4085108 kB' 'DirectMap2M: 36495360 kB' 'DirectMap1G: 28311552 kB'
00:03:35.596 11:27:04 -- setup/common.sh@32 -- # [xtrace condensed: the '[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue / IFS=': ' / read -r var val _' cycle repeats for every non-matching /proc/meminfo key from MemTotal through WritebackTmp] 00:03:35.597 11:27:04 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
11:27:04 -- setup/common.sh@32 -- # continue 00:03:35.597 11:27:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.597 11:27:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.597 11:27:04 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.597 11:27:04 -- setup/common.sh@32 -- # continue 00:03:35.597 11:27:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.597 11:27:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.597 11:27:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.597 11:27:04 -- setup/common.sh@32 -- # continue 00:03:35.597 11:27:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.597 11:27:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.597 11:27:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.597 11:27:04 -- setup/common.sh@32 -- # continue 00:03:35.597 11:27:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.597 11:27:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.597 11:27:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.597 11:27:04 -- setup/common.sh@32 -- # continue 00:03:35.597 11:27:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.597 11:27:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.597 11:27:04 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.597 11:27:04 -- setup/common.sh@32 -- # continue 00:03:35.597 11:27:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.597 11:27:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.597 11:27:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.597 11:27:04 -- setup/common.sh@32 -- # continue 00:03:35.597 11:27:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.597 11:27:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.597 11:27:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.597 11:27:04 -- setup/common.sh@33 -- # echo 0 00:03:35.597 11:27:04 -- setup/common.sh@33 -- # return 0 00:03:35.597 11:27:04 -- setup/hugepages.sh@97 -- # anon=0 00:03:35.597 11:27:04 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:35.597 11:27:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.597 11:27:04 -- setup/common.sh@18 -- # local node= 00:03:35.597 11:27:04 -- setup/common.sh@19 -- # local var val 00:03:35.597 11:27:04 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.597 11:27:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.597 11:27:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.597 11:27:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.597 11:27:04 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.597 11:27:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.597 11:27:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.597 11:27:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.597 11:27:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38076520 kB' 'MemAvailable: 41726816 kB' 'Buffers: 4096 kB' 'Cached: 16044232 kB' 'SwapCached: 0 kB' 'Active: 13056736 kB' 'Inactive: 3524076 kB' 'Active(anon): 12579340 kB' 'Inactive(anon): 0 kB' 'Active(file): 477396 kB' 'Inactive(file): 3524076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535884 kB' 'Mapped: 178152 kB' 'Shmem: 12046856 kB' 'KReclaimable: 291492 kB' 
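The trace above is setup/common.sh's get_meminfo walking /proc/meminfo: each line is split on IFS=': ' into key and value, non-matching keys are skipped with continue, and the first match (here AnonHugePages) echoes its value and returns. The escaped \A\n\o\n\H\u\g\e\P\a\g\e\s form is just how xtrace prints the quoted right-hand side of [[ ]]. A minimal standalone sketch of that parsing pattern, reconstructed from the trace rather than copied from common.sh:

```bash
#!/usr/bin/env bash
# Sketch of the /proc/meminfo parsing pattern seen in the trace: split each
# line on ': ', skip keys that don't match, print the value of the first hit.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # "kB" lands in $_ as the third field; $val is the bare number.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

get_meminfo AnonHugePages   # prints e.g. 0
```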
00:03:35.597 11:27:04 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:35.597 11:27:04 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:35.597 11:27:04 -- setup/common.sh@18 -- # local node=
00:03:35.597 11:27:04 -- setup/common.sh@19 -- # local var val
00:03:35.597 11:27:04 -- setup/common.sh@20 -- # local mem_f mem
00:03:35.597 11:27:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:35.597 11:27:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:35.597 11:27:04 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:35.597 11:27:04 -- setup/common.sh@28 -- # mapfile -t mem
00:03:35.597 11:27:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:35.597 11:27:04 -- setup/common.sh@31 -- # IFS=': '
00:03:35.597 11:27:04 -- setup/common.sh@31 -- # read -r var val _
00:03:35.597 11:27:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38076520 kB' 'MemAvailable: 41726816 kB' 'Buffers: 4096 kB' 'Cached: 16044232 kB' 'SwapCached: 0 kB' 'Active: 13056736 kB' 'Inactive: 3524076 kB' 'Active(anon): 12579340 kB' 'Inactive(anon): 0 kB' 'Active(file): 477396 kB' 'Inactive(file): 3524076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535884 kB' 'Mapped: 178152 kB' 'Shmem: 12046856 kB' 'KReclaimable: 291492 kB' 'Slab: 962216 kB' 'SReclaimable: 291492 kB' 'SUnreclaim: 670724 kB' 'KernelStack: 22592 kB' 'PageTables: 8500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 14037332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220208 kB' 'VmallocChunk: 0 kB' 'Percpu: 92288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4085108 kB' 'DirectMap2M: 36495360 kB' 'DirectMap1G: 28311552 kB'
00:03:35.597 11:27:04 -- setup/common.sh@32 -- # [xtrace condensed: per-key checks against HugePages_Surp, continuing past MemTotal through HugePages_Rsvd]
00:03:35.598 11:27:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:35.598 11:27:04 -- setup/common.sh@33 -- # echo 0
00:03:35.598 11:27:04 -- setup/common.sh@33 -- # return 0
00:03:35.598 11:27:04 -- setup/hugepages.sh@99 -- # surp=0
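Each lookup above rescans all of /proc/meminfo, so the trace repeats the same key-by-key walk for AnonHugePages, HugePages_Surp and (next) HugePages_Rsvd in turn. A hypothetical one-pass alternative, not what setup/common.sh actually does, would cache the file in an associative array once and index it:

```bash
#!/usr/bin/env bash
# Hypothetical one-pass variant: parse /proc/meminfo a single time into an
# associative array keyed by field name, then look fields up directly.
declare -A meminfo
while IFS=': ' read -r var val _; do
    meminfo[$var]=$val
done < /proc/meminfo

echo "HugePages_Surp=${meminfo[HugePages_Surp]}"
echo "HugePages_Rsvd=${meminfo[HugePages_Rsvd]}"
```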
00:03:35.598 11:27:04 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:35.598 11:27:04 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:35.598 11:27:04 -- setup/common.sh@18 -- # local node=
00:03:35.598 11:27:04 -- setup/common.sh@19 -- # local var val
00:03:35.598 11:27:04 -- setup/common.sh@20 -- # local mem_f mem
00:03:35.598 11:27:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:35.598 11:27:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:35.598 11:27:04 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:35.598 11:27:04 -- setup/common.sh@28 -- # mapfile -t mem
00:03:35.598 11:27:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:35.598 11:27:04 -- setup/common.sh@31 -- # IFS=': '
00:03:35.598 11:27:04 -- setup/common.sh@31 -- # read -r var val _
00:03:35.598 11:27:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38078668 kB' 'MemAvailable: 41728964 kB' 'Buffers: 4096 kB' 'Cached: 16044244 kB' 'SwapCached: 0 kB' 'Active: 13056492 kB' 'Inactive: 3524076 kB' 'Active(anon): 12579096 kB' 'Inactive(anon): 0 kB' 'Active(file): 477396 kB' 'Inactive(file): 3524076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535568 kB' 'Mapped: 178152 kB' 'Shmem: 12046868 kB' 'KReclaimable: 291492 kB' 'Slab: 962188 kB' 'SReclaimable: 291492 kB' 'SUnreclaim: 670696 kB' 'KernelStack: 22640 kB' 'PageTables: 9116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 14038960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220240 kB' 'VmallocChunk: 0 kB' 'Percpu: 92288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4085108 kB' 'DirectMap2M: 36495360 kB' 'DirectMap1G: 28311552 kB'
00:03:35.599 11:27:04 -- setup/common.sh@32 -- # [xtrace condensed: per-key checks against HugePages_Rsvd, continuing past MemTotal through HugePages_Free]
00:03:35.600 11:27:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:35.600 11:27:04 -- setup/common.sh@33 -- # echo 0
00:03:35.600 11:27:04 -- setup/common.sh@33 -- # return 0
00:03:35.600 11:27:04 -- setup/hugepages.sh@100 -- # resv=0
00:03:35.600 11:27:04 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:35.600 nr_hugepages=1025
00:03:35.600 11:27:04 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:35.600 resv_hugepages=0
00:03:35.600 11:27:04 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:35.600 surplus_hugepages=0
00:03:35.600 11:27:04 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:35.600 anon_hugepages=0
00:03:35.600 11:27:04 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:35.600 11:27:04 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
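The assertions above are the hugepage book-keeping: the requested count (nr_hugepages=1025) must match what the kernel reports once surplus and reserved pages are added in. A sketch of the same consistency check done directly against /proc/meminfo (the 1025 figure is this job's configuration, not a general constant):

```bash
#!/usr/bin/env bash
# Sketch of the accounting check from the trace: the kernel's HugePages_Total
# must equal the requested count plus surplus and reserved pages (1025+0+0 here).
nr_hugepages=1025
surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
(( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2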
00:03:35.600 11:27:04 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:35.600 11:27:04 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:35.600 11:27:04 -- setup/common.sh@18 -- # local node=
00:03:35.600 11:27:04 -- setup/common.sh@19 -- # local var val
00:03:35.600 11:27:04 -- setup/common.sh@20 -- # local mem_f mem
00:03:35.600 11:27:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:35.600 11:27:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:35.600 11:27:04 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:35.600 11:27:04 -- setup/common.sh@28 -- # mapfile -t mem
00:03:35.600 11:27:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:35.600 11:27:04 -- setup/common.sh@31 -- # IFS=': '
00:03:35.600 11:27:04 -- setup/common.sh@31 -- # read -r var val _
00:03:35.600 11:27:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38078736 kB' 'MemAvailable: 41729032 kB' 'Buffers: 4096 kB' 'Cached: 16044244 kB' 'SwapCached: 0 kB' 'Active: 13056476 kB' 'Inactive: 3524076 kB' 'Active(anon): 12579080 kB' 'Inactive(anon): 0 kB' 'Active(file): 477396 kB' 'Inactive(file): 3524076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535432 kB' 'Mapped: 178152 kB' 'Shmem: 12046868 kB' 'KReclaimable: 291492 kB' 'Slab: 962188 kB' 'SReclaimable: 291492 kB' 'SUnreclaim: 670696 kB' 'KernelStack: 22544 kB' 'PageTables: 8820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 14040620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220256 kB' 'VmallocChunk: 0 kB' 'Percpu: 92288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4085108 kB' 'DirectMap2M: 36495360 kB' 'DirectMap1G: 28311552 kB'
00:03:35.600 11:27:04 -- setup/common.sh@32 -- # [xtrace condensed: per-key checks against HugePages_Total, continuing past MemTotal through Unaccepted]
00:03:35.861 11:27:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:35.861 11:27:05 -- setup/common.sh@33 -- # echo 1025
00:03:35.861 11:27:05 -- setup/common.sh@33 -- # return 0
00:03:35.861 11:27:05 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:35.861 11:27:05 -- setup/hugepages.sh@112 -- # get_nodes
00:03:35.861 11:27:05 -- setup/hugepages.sh@27 -- # local node
00:03:35.861 11:27:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:35.861 11:27:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:35.861 11:27:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:35.861 11:27:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:03:35.861 11:27:05 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:35.861 11:27:05 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:35.861 11:27:05 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:35.861 11:27:05 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:35.861 11:27:05 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
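get_nodes above found two NUMA nodes and expects the odd total of 1025 pages to land as 512 on node 0 and 513 on node 1. The per-node query that follows reads /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0" prefix that the mem=("${mem[@]#Node +([0-9]) }") expansion strips before the usual key walk. A sketch of that per-node read (extglob is needed for the +([0-9]) pattern in the expansion):

```bash
#!/usr/bin/env bash
# Sketch of the per-node variant from the trace: node meminfo lines look like
# "Node 0 HugePages_Surp: 0", so strip the "Node N " prefix before parsing.
shopt -s extglob
node=0
mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
mem=("${mem[@]#Node +([0-9]) }")          # drop the leading "Node 0 "
for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == HugePages_Surp ]] && { echo "$val"; break; }
done
```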
00:03:35.861 11:27:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:35.861 11:27:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:35.861 11:27:05 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:35.861 11:27:05 -- setup/common.sh@28 -- # mapfile -t mem
00:03:35.861 11:27:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:35.861 11:27:05 -- setup/common.sh@31 -- # IFS=': '
00:03:35.861 11:27:05 -- setup/common.sh@31 -- # read -r var val _
00:03:35.861 11:27:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 21791772 kB' 'MemUsed: 10800312 kB' 'SwapCached: 0 kB' 'Active: 7905348 kB' 'Inactive: 269776 kB' 'Active(anon): 7628040 kB' 'Inactive(anon): 0 kB' 'Active(file): 277308 kB' 'Inactive(file): 269776 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7829144 kB' 'Mapped: 91576 kB' 'AnonPages: 349260 kB' 'Shmem: 7282060 kB' 'KernelStack: 12088 kB' 'PageTables: 5784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 160156 kB' 'Slab: 500808 kB' 'SReclaimable: 160156 kB' 'SUnreclaim: 340652 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:35.861 11:27:05 -- setup/common.sh@31-32 -- # [xtrace condensed: per-key scan of the node0 dump above; no field before HugePages_Surp matches]
00:03:35.862 11:27:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:35.862 11:27:05 -- setup/common.sh@33 -- # echo 0
00:03:35.862 11:27:05 -- setup/common.sh@33 -- # return 0
00:03:35.862 11:27:05 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:35.862 11:27:05 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:35.862 11:27:05 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:35.862 11:27:05 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:35.862 11:27:05 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:35.862 11:27:05 -- setup/common.sh@18 -- # local node=1
00:03:35.862 11:27:05 -- setup/common.sh@19 -- # local var val
00:03:35.862 11:27:05 -- setup/common.sh@20 -- # local mem_f mem
00:03:35.862 11:27:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:35.862 11:27:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:35.862 11:27:05 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:35.862 11:27:05 -- setup/common.sh@28 -- # mapfile -t mem
00:03:35.862 11:27:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:35.862 11:27:05 -- setup/common.sh@31 -- # IFS=': '
00:03:35.862 11:27:05 -- setup/common.sh@31 -- # read -r var val _
00:03:35.862 11:27:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 16279128 kB' 'MemUsed: 11423980 kB' 'SwapCached: 0 kB' 'Active: 5156036 kB' 'Inactive: 3254300 kB' 'Active(anon): 4955948 kB' 'Inactive(anon): 0 kB' 'Active(file): 200088 kB' 'Inactive(file): 3254300 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8219240 kB' 'Mapped: 87584 kB' 'AnonPages: 191164 kB' 'Shmem: 4764852 kB' 'KernelStack: 10504 kB' 'PageTables: 2808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131336 kB' 'Slab: 461384 kB' 'SReclaimable: 131336 kB' 'SUnreclaim: 330048 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
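For the per-node calls above, the only twist is the source file: lines in /sys/devices/system/node/nodeN/meminfo carry a 'Node N ' prefix that the trace strips with an extglob pattern before the same key scan runs. A sketch of just that selection step (node_meminfo is an illustrative name, not a helper from the repo):

    shopt -s extglob   # needed for the +([0-9]) pattern below
    node_meminfo() {
        # Pick the per-node file when it exists, else fall back to /proc/meminfo.
        local node=$1 mem_f=/proc/meminfo
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node lines read "Node 0 MemTotal: ... kB"; drop the "Node 0 "
        # prefix so they parse exactly like /proc/meminfo lines.
        mem=("${mem[@]#Node +([0-9]) }")
        printf '%s\n' "${mem[@]}"
    }

For example, "node_meminfo 0 | grep HugePages_Total" would print "HugePages_Total: 512" on this box.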
00:03:35.862 11:27:05 -- setup/common.sh@31-32 -- # [xtrace condensed: per-key scan of the node1 dump above, MemTotal through HugePages_Free; none match HugePages_Surp]
00:03:35.863 11:27:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:35.863 11:27:05 -- setup/common.sh@33 -- # echo 0
00:03:35.863 11:27:05 -- setup/common.sh@33 -- # return 0
00:03:35.863 11:27:05 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:35.863 11:27:05 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:35.863 11:27:05 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:35.863 11:27:05 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:35.863 11:27:05 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
node0=512 expecting 513
00:03:35.863 11:27:05 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:35.863 11:27:05 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:35.863 11:27:05 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:35.863 11:27:05 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
node1=513
expecting 512 00:03:35.863 11:27:05 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:35.863 00:03:35.863 real 0m4.407s 00:03:35.863 user 0m1.664s 00:03:35.863 sys 0m2.829s 00:03:35.863 11:27:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:35.863 11:27:05 -- common/autotest_common.sh@10 -- # set +x 00:03:35.863 ************************************ 00:03:35.863 END TEST odd_alloc 00:03:35.863 ************************************ 00:03:35.863 11:27:05 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:35.863 11:27:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:35.863 11:27:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:35.863 11:27:05 -- common/autotest_common.sh@10 -- # set +x 00:03:35.863 ************************************ 00:03:35.863 START TEST custom_alloc 00:03:35.863 ************************************ 00:03:35.863 11:27:05 -- common/autotest_common.sh@1104 -- # custom_alloc 00:03:35.863 11:27:05 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:35.863 11:27:05 -- setup/hugepages.sh@169 -- # local node 00:03:35.863 11:27:05 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:35.863 11:27:05 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:35.863 11:27:05 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:35.863 11:27:05 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:35.863 11:27:05 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:35.863 11:27:05 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:35.863 11:27:05 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:35.863 11:27:05 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:35.863 11:27:05 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:35.863 11:27:05 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:35.863 11:27:05 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:35.863 11:27:05 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:35.863 11:27:05 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:35.863 11:27:05 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:35.863 11:27:05 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:35.863 11:27:05 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:35.863 11:27:05 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:35.863 11:27:05 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:35.863 11:27:05 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:35.863 11:27:05 -- setup/hugepages.sh@83 -- # : 256 00:03:35.863 11:27:05 -- setup/hugepages.sh@84 -- # : 1 00:03:35.863 11:27:05 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:35.863 11:27:05 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:35.863 11:27:05 -- setup/hugepages.sh@83 -- # : 0 00:03:35.863 11:27:05 -- setup/hugepages.sh@84 -- # : 0 00:03:35.863 11:27:05 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:35.863 11:27:05 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:35.863 11:27:05 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:35.863 11:27:05 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:35.863 11:27:05 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:35.863 11:27:05 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:35.863 11:27:05 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:35.863 11:27:05 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:35.863 11:27:05 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:35.863 11:27:05 
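The sizing in the custom_alloc trace above is plain arithmetic: get_test_nr_hugepages receives a size in kB and divides it by the default hugepage size, 2048 kB on this rig per the Hugepagesize field printed later, so 1048576 kB yields 512 pages and 2097152 kB yields 1024; with no explicit node list the count is then spread over the two NUMA nodes. A simplified reconstruction under those assumptions (the real helper also distributes any remainder node by node, visible as the ': 256' / ': 1' xtrace lines above):

    declare -a nodes_test
    get_test_nr_hugepages() {
        # size is in kB; default_hugepages is the default hugepage size in kB.
        local size=$1 default_hugepages=2048
        (( size >= default_hugepages )) || return 1
        nr_hugepages=$((size / default_hugepages))   # 1048576 -> 512, 2097152 -> 1024
        # Even split across nodes when the caller names none (512 over 2 -> 256 each).
        local node no_nodes=2
        for ((node = 0; node < no_nodes; node++)); do
            nodes_test[node]=$((nr_hugepages / no_nodes))
        done
    }

    get_test_nr_hugepages 1048576 && echo "nr_hugepages=$nr_hugepages nodes=${nodes_test[*]}"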
-- setup/hugepages.sh@62 -- # user_nodes=() 00:03:35.864 11:27:05 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:35.864 11:27:05 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:35.864 11:27:05 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:35.864 11:27:05 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:35.864 11:27:05 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:35.864 11:27:05 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:35.864 11:27:05 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:35.864 11:27:05 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:35.864 11:27:05 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:35.864 11:27:05 -- setup/hugepages.sh@78 -- # return 0 00:03:35.864 11:27:05 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:35.864 11:27:05 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:35.864 11:27:05 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:35.864 11:27:05 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:35.864 11:27:05 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:35.864 11:27:05 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:35.864 11:27:05 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:35.864 11:27:05 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:35.864 11:27:05 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:35.864 11:27:05 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:35.864 11:27:05 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:35.864 11:27:05 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:35.864 11:27:05 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:35.864 11:27:05 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:35.864 11:27:05 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:35.864 11:27:05 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:35.864 11:27:05 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:35.864 11:27:05 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:35.864 11:27:05 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:35.864 11:27:05 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:35.864 11:27:05 -- setup/hugepages.sh@78 -- # return 0 00:03:35.864 11:27:05 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:35.864 11:27:05 -- setup/hugepages.sh@187 -- # setup output 00:03:35.864 11:27:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.864 11:27:05 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:40.052 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:40.052 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:40.052 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:40.052 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:40.052 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:40.052 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:40.052 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:40.052 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:40.052 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:40.052 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:40.052 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 
00:03:40.052 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:40.052 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:40.052 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:40.052 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:40.052 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:40.052 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:40.052 11:27:09 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:40.052 11:27:09 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:40.052 11:27:09 -- setup/hugepages.sh@89 -- # local node 00:03:40.052 11:27:09 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:40.052 11:27:09 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:40.052 11:27:09 -- setup/hugepages.sh@92 -- # local surp 00:03:40.052 11:27:09 -- setup/hugepages.sh@93 -- # local resv 00:03:40.052 11:27:09 -- setup/hugepages.sh@94 -- # local anon 00:03:40.052 11:27:09 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:40.052 11:27:09 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:40.052 11:27:09 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:40.052 11:27:09 -- setup/common.sh@18 -- # local node= 00:03:40.052 11:27:09 -- setup/common.sh@19 -- # local var val 00:03:40.052 11:27:09 -- setup/common.sh@20 -- # local mem_f mem 00:03:40.052 11:27:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.052 11:27:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.052 11:27:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.052 11:27:09 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.052 11:27:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.052 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.052 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.052 11:27:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 37017064 kB' 'MemAvailable: 40667360 kB' 'Buffers: 4096 kB' 'Cached: 16044376 kB' 'SwapCached: 0 kB' 'Active: 13054488 kB' 'Inactive: 3524076 kB' 'Active(anon): 12577092 kB' 'Inactive(anon): 0 kB' 'Active(file): 477396 kB' 'Inactive(file): 3524076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533352 kB' 'Mapped: 178172 kB' 'Shmem: 12047000 kB' 'KReclaimable: 291492 kB' 'Slab: 962568 kB' 'SReclaimable: 291492 kB' 'SUnreclaim: 671076 kB' 'KernelStack: 22448 kB' 'PageTables: 8368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 14034788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220096 kB' 'VmallocChunk: 0 kB' 'Percpu: 92288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4085108 kB' 'DirectMap2M: 36495360 kB' 'DirectMap1G: 28311552 kB' 00:03:40.052 11:27:09 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.052 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.052 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.052 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.052 11:27:09 -- 
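The check at setup/hugepages.sh@96 above gates the AnonHugePages accounting on the transparent-hugepage policy: the bracketed word in /sys/kernel/mm/transparent_hugepage/enabled marks the active mode, and anonymous hugepages are only worth counting when that mode is not 'never'. The same gate in isolation (standard sysfs paths; variable names are illustrative, not from the scripts):

    # Read the THP policy, e.g. "always [madvise] never" -> active mode "madvise".
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        # THP may be handing out anonymous hugepages; account for them.
        anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
        echo "AnonHugePages: ${anon_kb} kB"
    fi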
setup/common.sh@31-32 -- # [xtrace condensed: per-key scan of the dump above; nothing from MemFree through HardwareCorrupted matches AnonHugePages]
00:03:40.053 11:27:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:40.053 11:27:09 -- setup/common.sh@33 -- # echo 0
00:03:40.053 11:27:09 -- setup/common.sh@33 -- # return 0
00:03:40.053 11:27:09 -- setup/hugepages.sh@97 -- # anon=0
00:03:40.053 11:27:09 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:40.053 11:27:09 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:40.053 11:27:09 -- setup/common.sh@18 -- # local node=
00:03:40.053 11:27:09 -- setup/common.sh@19 -- # local var val
00:03:40.053 11:27:09 -- setup/common.sh@20 -- # local mem_f mem
00:03:40.053 11:27:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:40.053 11:27:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:40.053 11:27:09 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:40.053 11:27:09 -- setup/common.sh@28 -- # mapfile -t mem
00:03:40.053 11:27:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:40.053 11:27:09 -- setup/common.sh@31 -- # IFS=': '
00:03:40.053 11:27:09 -- setup/common.sh@31 -- # read -r var val _
00:03:40.053 11:27:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 37016560 kB' 'MemAvailable: 40666856 kB' 'Buffers: 4096 kB' 'Cached: 16044380 kB' 'SwapCached: 0 kB' 'Active: 13054032 kB' 'Inactive: 3524076 kB' 'Active(anon): 12576636 kB' 'Inactive(anon): 0 kB' 'Active(file): 477396 kB' 'Inactive(file): 3524076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532956 kB' 'Mapped: 178172 kB' 'Shmem: 12047004 kB' 'KReclaimable: 291492 kB' 'Slab: 962564 kB' 'SReclaimable: 291492 kB' 'SUnreclaim: 671072 kB' 'KernelStack: 22432 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 14034800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220032 kB' 'VmallocChunk: 0 kB' 'Percpu: 92288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4085108 kB' 'DirectMap2M: 36495360 kB' 'DirectMap1G: 28311552 kB'
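Each of these lookups is one linear scan per field; when reproducing a check by hand, a single awk call over the same file gives the value the trace echoes, which makes a quick cross-check (a convenience one-liner, not part of the test scripts):

    awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo    # expect 0 here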
00:03:40.053 11:27:09 -- setup/common.sh@31-32 -- # [xtrace condensed: per-key scan of the dump above, MemTotal through HugePages_Free; none match HugePages_Surp]
00:03:40.054 11:27:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:40.054 11:27:09 -- setup/common.sh@32 -- # continue
00:03:40.054 11:27:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:40.054 11:27:09 -- setup/common.sh@33 -- # echo 0
00:03:40.054 11:27:09 -- setup/common.sh@33 -- # return 0
00:03:40.054 11:27:09 -- setup/hugepages.sh@99 -- # surp=0
00:03:40.054 11:27:09 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:40.054 11:27:09 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:40.054 11:27:09 -- setup/common.sh@18 -- # local node=
00:03:40.054 11:27:09 -- setup/common.sh@19 -- # local var val
00:03:40.054 11:27:09 -- setup/common.sh@20 -- # local mem_f mem
00:03:40.054 11:27:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:40.054 11:27:09 -- setup/common.sh@23 -- # [[ -e
/sys/devices/system/node/node/meminfo ]] 00:03:40.054 11:27:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.054 11:27:09 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.054 11:27:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.055 11:27:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 37013436 kB' 'MemAvailable: 40663732 kB' 'Buffers: 4096 kB' 'Cached: 16044392 kB' 'SwapCached: 0 kB' 'Active: 13056036 kB' 'Inactive: 3524076 kB' 'Active(anon): 12578640 kB' 'Inactive(anon): 0 kB' 'Active(file): 477396 kB' 'Inactive(file): 3524076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535028 kB' 'Mapped: 178676 kB' 'Shmem: 12047016 kB' 'KReclaimable: 291492 kB' 'Slab: 962620 kB' 'SReclaimable: 291492 kB' 'SUnreclaim: 671128 kB' 'KernelStack: 22448 kB' 'PageTables: 8372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 14037884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220000 kB' 'VmallocChunk: 0 kB' 'Percpu: 92288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4085108 kB' 'DirectMap2M: 36495360 kB' 'DirectMap1G: 28311552 kB' 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.055 
11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.055 11:27:09 -- setup/common.sh@31 
-- # read -r var val _ 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.055 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.055 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.056 11:27:09 -- 
setup/common.sh@32 -- # continue 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.056 11:27:09 -- setup/common.sh@33 -- # echo 0 00:03:40.056 11:27:09 -- setup/common.sh@33 -- # return 0 00:03:40.056 11:27:09 -- setup/hugepages.sh@100 -- # resv=0 00:03:40.056 11:27:09 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:40.056 nr_hugepages=1536 00:03:40.056 11:27:09 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:40.056 resv_hugepages=0 00:03:40.056 11:27:09 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:40.056 surplus_hugepages=0 00:03:40.056 11:27:09 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:40.056 anon_hugepages=0 00:03:40.056 11:27:09 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:40.056 11:27:09 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:40.056 11:27:09 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:40.056 11:27:09 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:40.056 11:27:09 -- setup/common.sh@18 -- # local node= 00:03:40.056 11:27:09 -- setup/common.sh@19 -- # local var val 00:03:40.056 11:27:09 -- setup/common.sh@20 -- # local mem_f mem 00:03:40.056 11:27:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.056 11:27:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.056 11:27:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.056 11:27:09 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.056 11:27:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:27:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 37009548 kB' 'MemAvailable: 40659844 kB' 'Buffers: 4096 kB' 'Cached: 16044408 kB' 'SwapCached: 0 kB' 'Active: 13054028 kB' 'Inactive: 3524076 kB' 'Active(anon): 12576632 
kB' 'Inactive(anon): 0 kB' 'Active(file): 477396 kB' 'Inactive(file): 3524076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533008 kB' 'Mapped: 178172 kB' 'Shmem: 12047032 kB' 'KReclaimable: 291492 kB' 'Slab: 962620 kB' 'SReclaimable: 291492 kB' 'SUnreclaim: 671128 kB' 'KernelStack: 22448 kB' 'PageTables: 8372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 14034688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220016 kB' 'VmallocChunk: 0 kB' 'Percpu: 92288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4085108 kB' 'DirectMap2M: 36495360 kB' 'DirectMap1G: 28311552 kB' 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 
11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.056 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.057 
11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.057 11:27:09 -- 
setup/common.sh@32 -- # continue 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:27:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.057 11:27:09 -- setup/common.sh@33 -- # echo 1536 00:03:40.057 11:27:09 -- setup/common.sh@33 -- # return 0 00:03:40.057 11:27:09 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:40.057 11:27:09 -- setup/hugepages.sh@112 -- # get_nodes 00:03:40.057 11:27:09 -- setup/hugepages.sh@27 -- # local node 00:03:40.057 11:27:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:40.057 11:27:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:40.057 11:27:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:40.057 11:27:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:40.057 11:27:09 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:40.057 11:27:09 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:40.057 11:27:09 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:40.057 11:27:09 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:40.057 11:27:09 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:40.057 11:27:09 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.058 11:27:09 -- setup/common.sh@18 -- # local node=0 00:03:40.058 11:27:09 -- setup/common.sh@19 -- # local var val 00:03:40.058 11:27:09 -- setup/common.sh@20 -- # local mem_f mem 00:03:40.058 11:27:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.058 11:27:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:40.058 11:27:09 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:40.058 11:27:09 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.058 11:27:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:27:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 21760532 kB' 'MemUsed: 10831552 kB' 'SwapCached: 0 kB' 'Active: 7908576 kB' 'Inactive: 269776 kB' 'Active(anon): 7631268 kB' 'Inactive(anon): 0 kB' 'Active(file): 277308 kB' 'Inactive(file): 269776 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7829164 kB' 'Mapped: 91244 kB' 'AnonPages: 352408 kB' 'Shmem: 7282080 kB' 'KernelStack: 11896 kB' 'PageTables: 5648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 160156 kB' 'Slab: 501280 kB' 'SReclaimable: 160156 kB' 'SUnreclaim: 341124 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:27:09 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # 
continue 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:27:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:27:09 -- setup/common.sh@33 -- # echo 0 00:03:40.058 11:27:09 -- setup/common.sh@33 -- # return 0 00:03:40.058 11:27:09 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:40.058 11:27:09 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:40.058 11:27:09 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:40.058 11:27:09 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:40.059 11:27:09 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.059 11:27:09 -- setup/common.sh@18 -- # local node=1 00:03:40.059 11:27:09 -- setup/common.sh@19 -- # local var val 00:03:40.059 11:27:09 
-- setup/common.sh@20 -- # local mem_f mem 00:03:40.059 11:27:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.059 11:27:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:40.059 11:27:09 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:40.059 11:27:09 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.059 11:27:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:27:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 15240876 kB' 'MemUsed: 12462232 kB' 'SwapCached: 0 kB' 'Active: 5150944 kB' 'Inactive: 3254300 kB' 'Active(anon): 4950856 kB' 'Inactive(anon): 0 kB' 'Active(file): 200088 kB' 'Inactive(file): 3254300 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8219368 kB' 'Mapped: 87080 kB' 'AnonPages: 186104 kB' 'Shmem: 4764980 kB' 'KernelStack: 10536 kB' 'PageTables: 2756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131336 kB' 'Slab: 461332 kB' 'SReclaimable: 131336 kB' 'SUnreclaim: 329996 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # 
continue 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # [[ 
SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 
00:03:40.059 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # continue 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:27:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:27:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.059 11:27:09 -- setup/common.sh@33 -- # echo 0 00:03:40.059 11:27:09 -- setup/common.sh@33 -- # return 0 00:03:40.059 11:27:09 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:40.059 11:27:09 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:40.060 11:27:09 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:40.060 11:27:09 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:40.060 11:27:09 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:40.060 node0=512 expecting 512 00:03:40.060 11:27:09 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:40.060 11:27:09 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:40.060 11:27:09 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:40.060 11:27:09 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:40.060 node1=1024 expecting 1024 00:03:40.060 11:27:09 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:40.060 00:03:40.060 real 0m4.303s 00:03:40.060 user 0m1.651s 00:03:40.060 sys 0m2.735s 00:03:40.060 11:27:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.060 11:27:09 -- common/autotest_common.sh@10 -- # set +x 00:03:40.060 ************************************ 00:03:40.060 END TEST custom_alloc 00:03:40.060 ************************************ 00:03:40.319 11:27:09 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:40.319 11:27:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:40.319 11:27:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:40.319 11:27:09 -- common/autotest_common.sh@10 -- # set +x 00:03:40.319 ************************************ 00:03:40.319 START TEST no_shrink_alloc 00:03:40.319 ************************************ 00:03:40.319 11:27:09 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:03:40.319 11:27:09 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:40.319 11:27:09 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:40.319 11:27:09 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:40.319 11:27:09 -- setup/hugepages.sh@51 -- # shift 00:03:40.319 11:27:09 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:40.319 11:27:09 -- setup/hugepages.sh@52 -- # local node_ids 00:03:40.319 11:27:09 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:40.319 11:27:09 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:40.319 11:27:09 -- 
setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:40.319 11:27:09 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:40.319 11:27:09 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:40.319 11:27:09 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:40.319 11:27:09 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:40.319 11:27:09 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:40.319 11:27:09 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:40.319 11:27:09 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:40.319 11:27:09 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:40.319 11:27:09 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:40.319 11:27:09 -- setup/hugepages.sh@73 -- # return 0 00:03:40.319 11:27:09 -- setup/hugepages.sh@198 -- # setup output 00:03:40.319 11:27:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.319 11:27:09 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:44.503 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:44.503 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:44.503 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:44.503 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:44.503 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:44.503 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:44.503 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:44.503 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:44.503 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:44.503 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:44.503 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:44.503 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:44.503 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:44.503 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:44.503 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:44.503 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:44.503 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:44.503 11:27:13 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:44.503 11:27:13 -- setup/hugepages.sh@89 -- # local node 00:03:44.503 11:27:13 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:44.503 11:27:13 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:44.503 11:27:13 -- setup/hugepages.sh@92 -- # local surp 00:03:44.503 11:27:13 -- setup/hugepages.sh@93 -- # local resv 00:03:44.503 11:27:13 -- setup/hugepages.sh@94 -- # local anon 00:03:44.503 11:27:13 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:44.503 11:27:13 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:44.503 11:27:13 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:44.503 11:27:13 -- setup/common.sh@18 -- # local node= 00:03:44.503 11:27:13 -- setup/common.sh@19 -- # local var val 00:03:44.503 11:27:13 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.503 11:27:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.503 11:27:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.503 11:27:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.503 11:27:13 -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.503 11:27:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.503 
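The trace above has just finished get_test_nr_hugepages and get_test_nr_hugepages_per_node: a 2097152 kB request (matching the 'Hugetlb: 2097152 kB' line in the meminfo dumps below) became 1024 default-size hugepages, and because node 0 was named explicitly the whole request was pinned there even though the box reports two nodes. A minimal standalone sketch of that accounting, assuming bash 4.2+ and the usual sysfs NUMA layout; want_pages, split_across_nodes and pages_per_node are illustrative names, not the harness's own:

shopt -s extglob                        # the node+([0-9]) glob below needs extglob
default_hugepages_kb=2048               # Hugepagesize, per the dumps below

want_pages() {                          # $1 = requested size in kB
    echo $(( $1 / default_hugepages_kb ))
}

split_across_nodes() {                  # $1 = total pages; remaining args = user-named node ids
    local total=$1 n
    shift
    declare -gA pages_per_node=()
    if (( $# > 0 )); then
        for n in "$@"; do               # pin the whole request to each named node
            pages_per_node[$n]=$total
        done
    else                                # otherwise split evenly across what sysfs reports
        local -a node_dirs=(/sys/devices/system/node/node+([0-9]))
        for (( n = 0; n < ${#node_dirs[@]}; n++ )); do
            pages_per_node[$n]=$(( total / ${#node_dirs[@]} ))
        done
    fi
}

split_across_nodes "$(want_pages 2097152)" 0   # reproduces the trace: node0 -> 1024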
11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38063668 kB' 'MemAvailable: 41713968 kB' 'Buffers: 4096 kB' 'Cached: 16044512 kB' 'SwapCached: 0 kB' 'Active: 13057980 kB' 'Inactive: 3524076 kB' 'Active(anon): 12580584 kB' 'Inactive(anon): 0 kB' 'Active(file): 477396 kB' 'Inactive(file): 3524076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536708 kB' 'Mapped: 178208 kB' 'Shmem: 12047136 kB' 'KReclaimable: 291500 kB' 'Slab: 963328 kB' 'SReclaimable: 291500 kB' 'SUnreclaim: 671828 kB' 'KernelStack: 22576 kB' 'PageTables: 9372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14073720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220256 kB' 'VmallocChunk: 0 kB' 'Percpu: 92288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4085108 kB' 'DirectMap2M: 36495360 kB' 'DirectMap1G: 28311552 kB' 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 
11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- 
setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.503 11:27:13 -- setup/common.sh@33 -- # echo 0 00:03:44.503 11:27:13 -- setup/common.sh@33 -- # return 0 00:03:44.503 11:27:13 -- setup/hugepages.sh@97 -- # anon=0 00:03:44.503 11:27:13 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:44.503 11:27:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.503 11:27:13 -- setup/common.sh@18 -- # local node= 00:03:44.503 11:27:13 -- setup/common.sh@19 -- # local var val 00:03:44.503 11:27:13 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.503 11:27:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.503 11:27:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.503 11:27:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.503 11:27:13 -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.503 11:27:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38067312 kB' 'MemAvailable: 41717612 kB' 'Buffers: 4096 kB' 'Cached: 16044512 kB' 'SwapCached: 0 kB' 'Active: 13058724 kB' 'Inactive: 3524076 kB' 'Active(anon): 12581328 kB' 'Inactive(anon): 0 kB' 'Active(file): 477396 kB' 'Inactive(file): 3524076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 
'AnonPages: 537400 kB' 'Mapped: 178216 kB' 'Shmem: 12047136 kB' 'KReclaimable: 291500 kB' 'Slab: 963328 kB' 'SReclaimable: 291500 kB' 'SUnreclaim: 671828 kB' 'KernelStack: 22576 kB' 'PageTables: 9128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14039800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220224 kB' 'VmallocChunk: 0 kB' 'Percpu: 92288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4085108 kB' 'DirectMap2M: 36495360 kB' 'DirectMap1G: 28311552 kB' 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.503 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.503 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 
00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.504 11:27:13 -- setup/common.sh@33 -- # echo 0 00:03:44.504 11:27:13 -- setup/common.sh@33 -- # return 0 00:03:44.504 11:27:13 -- setup/hugepages.sh@99 -- # surp=0 00:03:44.504 11:27:13 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:44.504 11:27:13 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:44.504 11:27:13 -- setup/common.sh@18 -- # local node= 00:03:44.504 11:27:13 -- setup/common.sh@19 -- # local var val 00:03:44.504 11:27:13 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.504 11:27:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.504 11:27:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.504 11:27:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.504 11:27:13 -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.504 11:27:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38066928 kB' 'MemAvailable: 41717224 kB' 'Buffers: 4096 kB' 'Cached: 16044524 kB' 'SwapCached: 0 kB' 'Active: 13057416 kB' 'Inactive: 3524076 kB' 'Active(anon): 12580020 kB' 'Inactive(anon): 0 kB' 'Active(file): 477396 kB' 'Inactive(file): 3524076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536008 kB' 'Mapped: 178148 kB' 'Shmem: 12047148 kB' 'KReclaimable: 291492 kB' 'Slab: 963440 kB' 'SReclaimable: 291492 kB' 'SUnreclaim: 671948 kB' 'KernelStack: 22608 kB' 'PageTables: 8864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14039696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220304 kB' 'VmallocChunk: 0 kB' 'Percpu: 92288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4085108 kB' 'DirectMap2M: 36495360 kB' 'DirectMap1G: 28311552 kB' 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.504 11:27:13 -- 
setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.504 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.504 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.505 11:27:13 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.505 11:27:13 -- 
setup/common.sh@32 -- # continue 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # continue 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.505 11:27:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.505 11:27:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.505 11:27:13 -- setup/common.sh@33 -- # echo 0 00:03:44.505 11:27:13 -- setup/common.sh@33 -- # return 0 00:03:44.505 11:27:13 -- setup/hugepages.sh@100 -- # resv=0 
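Each [[ Key == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] followed by continue above is one pass of the same scanner that has now produced anon=0, surp=0 and resv=0: get_meminfo slurps /proc/meminfo (or a node's meminfo file), strips any 'Node N ' prefix, and walks key/value pairs with IFS=': ' until the requested field matches, then echoes its value. A self-contained sketch of that pattern, assuming bash 4+ with extglob; meminfo_get is an illustrative stand-in for the helper in setup/common.sh:

shopt -s extglob                        # needed for the +([0-9]) prefix strip below

meminfo_get() {                         # $1 = field name, $2 = optional NUMA node id
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # per-node stats live under sysfs; fall back to the global file otherwise
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # drop the "Node N " prefix, as the trace does
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                 # units column, if any, lands in _
            return 0
        fi
    done
    return 1
}

meminfo_get HugePages_Rsvd              # prints 0 on this box, per the trace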
00:03:44.505 11:27:13 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:44.505 nr_hugepages=1024 11:27:13 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:44.505 resv_hugepages=0 11:27:13 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:44.505 surplus_hugepages=0 11:27:13 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:44.505 anon_hugepages=0 11:27:13 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:44.505 11:27:13 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:44.505 11:27:13 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:44.505 11:27:13 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:44.505 11:27:13 -- setup/common.sh@18 -- # local node=
00:03:44.505 11:27:13 -- setup/common.sh@19 -- # local var val
00:03:44.505 11:27:13 -- setup/common.sh@20 -- # local mem_f mem
00:03:44.505 11:27:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.505 11:27:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:44.505 11:27:13 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:44.505 11:27:13 -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.505 11:27:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.505 11:27:13 -- setup/common.sh@31 -- # IFS=': '
00:03:44.505 11:27:13 -- setup/common.sh@31 -- # read -r var val _
00:03:44.505 11:27:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38066408 kB' 'MemAvailable: 41716704 kB' 'Buffers: 4096 kB' 'Cached: 16044528 kB' 'SwapCached: 0 kB' 'Active: 13057028 kB' 'Inactive: 3524076 kB' 'Active(anon): 12579632 kB' 'Inactive(anon): 0 kB' 'Active(file): 477396 kB' 'Inactive(file): 3524076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535692 kB' 'Mapped: 178140 kB' 'Shmem: 12047152 kB' 'KReclaimable: 291492 kB' 'Slab: 963432 kB' 'SReclaimable: 291492 kB' 'SUnreclaim: 671940 kB' 'KernelStack: 22576 kB' 'PageTables: 8912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14039964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220304 kB' 'VmallocChunk: 0 kB' 'Percpu: 92288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4085108 kB' 'DirectMap2M: 36495360 kB' 'DirectMap1G: 28311552 kB'
[... field-by-field '[[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]' / '# continue' comparison trace over the meminfo rows above elided ...]
00:03:44.506 11:27:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:44.506 11:27:13 -- setup/common.sh@33 -- # echo 1024
00:03:44.506 11:27:13 -- setup/common.sh@33 -- # return 0
00:03:44.506 11:27:13 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:44.506 11:27:13 -- setup/hugepages.sh@112 -- # get_nodes
00:03:44.506 11:27:13 -- setup/hugepages.sh@27 -- # local node
00:03:44.506 11:27:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:44.506 11:27:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:44.506 11:27:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:44.506 11:27:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:44.506 11:27:13 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:44.506 11:27:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
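The get_meminfo trace above reduces to one parsing idiom: read /proc/meminfo (or, when a node is given, /sys/devices/system/node/nodeN/meminfo, whose rows carry a leading "Node N " prefix that is stripped off), split each row on ': ', and print the value of the requested field. A minimal standalone sketch of that idiom follows; the function body is a reconstruction for illustration, not SPDK's exact setup/common.sh code.

#!/usr/bin/env bash
shopt -s extglob  # for the +([0-9]) pattern that strips the per-node prefix

get_meminfo() {
    # get_meminfo <Field> [<node>] -> prints the field's value (kB or pages)
    local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        line=${line#Node +([0-9]) }           # per-node rows start with "Node N "
        IFS=': ' read -r var val _ <<<"$line" # e.g. "HugePages_Total: 1024"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done <"$mem_f"
    return 1
}

# e.g.: get_meminfo HugePages_Total   -> 1024, as in the trace above
#       get_meminfo HugePages_Surp 0  -> surplus hugepages on node0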
00:03:44.506 11:27:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:44.506 11:27:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:44.506 11:27:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:44.506 11:27:13 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:44.506 11:27:13 -- setup/common.sh@18 -- # local node=0
00:03:44.506 11:27:13 -- setup/common.sh@19 -- # local var val
00:03:44.506 11:27:13 -- setup/common.sh@20 -- # local mem_f mem
00:03:44.506 11:27:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.506 11:27:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:44.506 11:27:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:44.506 11:27:13 -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.506 11:27:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.506 11:27:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 20730496 kB' 'MemUsed: 11861588 kB' 'SwapCached: 0 kB' 'Active: 7903948 kB' 'Inactive: 269776 kB' 'Active(anon): 7626640 kB' 'Inactive(anon): 0 kB' 'Active(file): 277308 kB' 'Inactive(file): 269776 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7829208 kB' 'Mapped: 91060 kB' 'AnonPages: 347692 kB' 'Shmem: 7282124 kB' 'KernelStack: 11960 kB' 'PageTables: 5948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 160156 kB' 'Slab: 501744 kB' 'SReclaimable: 160156 kB' 'SUnreclaim: 341588 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:44.506 11:27:13 -- setup/common.sh@31 -- # IFS=': '
00:03:44.506 11:27:13 -- setup/common.sh@31 -- # read -r var val _
[... field-by-field comparison trace against HugePages_Surp over the node0 meminfo rows elided ...]
00:03:44.506 11:27:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:44.506 11:27:13 -- setup/common.sh@33 -- # echo 0
00:03:44.506 11:27:13 -- setup/common.sh@33 -- # return 0
00:03:44.506 11:27:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:44.506 11:27:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:44.506 11:27:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:44.506 11:27:13 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:44.506 11:27:13 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:44.506 node0=1024 expecting 1024 11:27:13 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:44.506 11:27:13 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:44.506 11:27:13 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:44.506 11:27:13 -- setup/hugepages.sh@202 -- # setup output
00:03:44.506 11:27:13 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:44.506 11:27:13 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:03:48.691 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:48.691 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:48.691 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:48.691 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:48.691 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:48.691 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:48.691 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:48.691 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:48.691 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:48.691 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:48.692 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:48.692 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:48.692 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:48.692 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:48.692 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:48.692 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:48.692 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:48.692 INFO: Requested 512 hugepages but 1024 already allocated on node0
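The INFO line above is setup.sh declining to shrink an existing reservation: node0 already holds 1024 hugepages while only NRHUGE=512 were requested. A hedged sketch of such a per-node check against the kernel's standard hugetlb sysfs layout (the loop structure and message wording are illustrative assumptions, not the script's exact code):

NRHUGE=${NRHUGE:-512}  # requested 2048kB hugepages per node
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    nr=$(<"$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
    # leave a larger existing allocation alone rather than shrinking it
    if (( nr >= NRHUGE )); then
        echo "INFO: Requested $NRHUGE hugepages but $nr already allocated on node$node"
    fi
done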
00:03:48.692 11:27:17 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:48.692 11:27:17 -- setup/hugepages.sh@89 -- # local node
00:03:48.692 11:27:17 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:48.692 11:27:17 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:48.692 11:27:17 -- setup/hugepages.sh@92 -- # local surp
00:03:48.692 11:27:17 -- setup/hugepages.sh@93 -- # local resv
00:03:48.692 11:27:17 -- setup/hugepages.sh@94 -- # local anon
00:03:48.692 11:27:17 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:48.692 11:27:17 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:48.692 11:27:17 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:48.692 11:27:17 -- setup/common.sh@18 -- # local node=
00:03:48.692 11:27:17 -- setup/common.sh@19 -- # local var val
00:03:48.692 11:27:17 -- setup/common.sh@20 -- # local mem_f mem
00:03:48.692 11:27:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.692 11:27:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:48.692 11:27:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:48.692 11:27:17 -- setup/common.sh@28 -- # mapfile -t mem
00:03:48.692 11:27:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.692 11:27:17 -- setup/common.sh@31 -- # IFS=': '
00:03:48.692 11:27:17 -- setup/common.sh@31 -- # read -r var val _
00:03:48.692 11:27:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38094436 kB' 'MemAvailable: 41744732 kB' 'Buffers: 4096 kB' 'Cached: 16044636 kB' 'SwapCached: 0 kB' 'Active: 13058968 kB' 'Inactive: 3524076 kB' 'Active(anon): 12581572 kB' 'Inactive(anon): 0 kB' 'Active(file): 477396 kB' 'Inactive(file): 3524076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537840 kB' 'Mapped: 178176 kB' 'Shmem: 12047260 kB' 'KReclaimable: 291492 kB' 'Slab: 964196 kB' 'SReclaimable: 291492 kB' 'SUnreclaim: 672704 kB' 'KernelStack: 22576 kB' 'PageTables: 8744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14041440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220272 kB' 'VmallocChunk: 0 kB' 'Percpu: 92288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4085108 kB' 'DirectMap2M: 36495360 kB' 'DirectMap1G: 28311552 kB'
[... field-by-field comparison trace against AnonHugePages elided ...]
00:03:48.693 11:27:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:48.693 11:27:17 -- setup/common.sh@33 -- # echo 0
00:03:48.693 11:27:17 -- setup/common.sh@33 -- # return 0
00:03:48.693 11:27:17 -- setup/hugepages.sh@97 -- # anon=0
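The '[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]' test above is verify_nr_hugepages checking /sys/kernel/mm/transparent_hugepage/enabled, where the bracketed word is the active THP mode; AnonHugePages is only sampled when that mode is not "never". A small sketch of that branch, reusing the get_meminfo sketch shown earlier (variable names are illustrative):

# active mode is the bracketed word, e.g. "always [madvise] never"
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
anon=0
if [[ $thp != *"[never]"* ]]; then
    # THP not globally disabled: sample anonymous hugepage usage (kB)
    anon=$(get_meminfo AnonHugePages)
fi
echo "anon=$anon"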
00:03:48.693 11:27:17 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:48.693 11:27:17 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:48.693 11:27:17 -- setup/common.sh@18 -- # local node=
00:03:48.693 11:27:17 -- setup/common.sh@19 -- # local var val
00:03:48.693 11:27:17 -- setup/common.sh@20 -- # local mem_f mem
00:03:48.693 11:27:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.693 11:27:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:48.693 11:27:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:48.693 11:27:17 -- setup/common.sh@28 -- # mapfile -t mem
00:03:48.693 11:27:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.693 11:27:17 -- setup/common.sh@31 -- # IFS=': '
00:03:48.693 11:27:17 -- setup/common.sh@31 -- # read -r var val _
00:03:48.693 11:27:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38095308 kB' 'MemAvailable: 41745604 kB' 'Buffers: 4096 kB' 'Cached: 16044636 kB' 'SwapCached: 0 kB' 'Active: 13057324 kB' 'Inactive: 3524076 kB' 'Active(anon): 12579928 kB' 'Inactive(anon): 0 kB' 'Active(file): 477396 kB' 'Inactive(file): 3524076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535924 kB' 'Mapped: 178176 kB' 'Shmem: 12047260 kB' 'KReclaimable: 291492 kB' 'Slab: 964284 kB' 'SReclaimable: 291492 kB' 'SUnreclaim: 672792 kB' 'KernelStack: 22512 kB' 'PageTables: 8172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14041452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220192 kB' 'VmallocChunk: 0 kB' 'Percpu: 92288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4085108 kB' 'DirectMap2M: 36495360 kB' 'DirectMap1G: 28311552 kB'
[... field-by-field comparison trace against HugePages_Surp elided ...]
00:03:48.694 11:27:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.694 11:27:17 -- setup/common.sh@33 -- # echo 0
00:03:48.694 11:27:17 -- setup/common.sh@33 -- # return 0
00:03:48.694 11:27:17 -- setup/hugepages.sh@99 -- # surp=0
00:03:48.694 11:27:17 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:48.694 11:27:17 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:48.694 11:27:17 -- setup/common.sh@18 -- # local node=
00:03:48.694 11:27:17 -- setup/common.sh@19 -- # local var val
00:03:48.694 11:27:17 -- setup/common.sh@20 -- # local mem_f mem
00:03:48.694 11:27:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.694 11:27:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:48.694 11:27:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:48.694 11:27:17 -- setup/common.sh@28 -- # mapfile -t mem
00:03:48.694 11:27:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.694 11:27:17 -- setup/common.sh@31 -- # IFS=': '
00:03:48.694 11:27:17 -- setup/common.sh@31 -- # read -r var val _
00:03:48.694 11:27:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38095408 kB' 'MemAvailable: 41745704 kB' 'Buffers: 4096 kB' 'Cached: 16044636 kB' 'SwapCached: 0 kB' 'Active: 13057580 kB' 'Inactive: 3524076 kB' 'Active(anon): 12580184 kB' 'Inactive(anon): 0 kB' 'Active(file): 477396 kB' 'Inactive(file): 3524076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536180 kB' 'Mapped: 178176 kB' 'Shmem: 12047260 kB' 'KReclaimable: 291492 kB' 'Slab: 964284 kB' 'SReclaimable: 291492 kB' 'SUnreclaim: 672792 kB' 'KernelStack: 22528 kB' 'PageTables: 8368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14041464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220224 kB' 'VmallocChunk: 0 kB' 'Percpu: 92288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4085108 kB' 'DirectMap2M: 36495360 kB' 'DirectMap1G: 28311552 kB'
[... field-by-field comparison trace against HugePages_Rsvd elided ...]
00:03:48.695 11:27:17 -- setup/common.sh@32 -- # [[ Committed_AS ==
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.695 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.695 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.695 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.695 11:27:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.695 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.695 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.695 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.695 11:27:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.695 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.695 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.695 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.695 11:27:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.695 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.695 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.695 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.695 11:27:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.695 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.695 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.695 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.695 11:27:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.695 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.695 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.695 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.695 11:27:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.695 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.695 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.695 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.695 11:27:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.695 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.695 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.695 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.695 11:27:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.695 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.695 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.695 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.695 11:27:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.695 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.695 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.695 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.695 11:27:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.695 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.695 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.695 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.695 11:27:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.695 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.695 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.695 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.695 11:27:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.695 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.695 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.695 
11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.695 11:27:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.695 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.695 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.695 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.695 11:27:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.695 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.695 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.695 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.695 11:27:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.695 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.696 11:27:17 -- setup/common.sh@33 -- # echo 0 00:03:48.696 11:27:17 -- setup/common.sh@33 -- # return 0 00:03:48.696 11:27:17 -- setup/hugepages.sh@100 -- # resv=0 00:03:48.696 11:27:17 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:48.696 nr_hugepages=1024 00:03:48.696 11:27:17 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:48.696 resv_hugepages=0 00:03:48.696 11:27:17 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:48.696 surplus_hugepages=0 00:03:48.696 11:27:17 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:48.696 anon_hugepages=0 00:03:48.696 11:27:17 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.696 11:27:17 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:48.696 11:27:17 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:48.696 11:27:17 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:48.696 11:27:17 -- setup/common.sh@18 -- # local node= 00:03:48.696 11:27:17 -- setup/common.sh@19 -- # local var val 00:03:48.696 11:27:17 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.696 11:27:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.696 11:27:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.696 11:27:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.696 11:27:17 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.696 11:27:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.696 11:27:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38096408 kB' 'MemAvailable: 41746704 kB' 'Buffers: 4096 kB' 'Cached: 16044636 kB' 'SwapCached: 0 kB' 'Active: 13058124 kB' 'Inactive: 3524076 kB' 'Active(anon): 12580728 kB' 'Inactive(anon): 0 kB' 'Active(file): 477396 kB' 'Inactive(file): 3524076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536792 kB' 'Mapped: 178176 kB' 'Shmem: 12047260 kB' 'KReclaimable: 291492 kB' 'Slab: 964316 kB' 'SReclaimable: 291492 kB' 'SUnreclaim: 672824 kB' 'KernelStack: 22448 kB' 'PageTables: 8364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14041480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220192 kB' 'VmallocChunk: 0 kB' 'Percpu: 92288 
kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4085108 kB' 'DirectMap2M: 36495360 kB' 'DirectMap1G: 28311552 kB' 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.696 11:27:17 -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.696 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.696 11:27:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.697 11:27:17 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # continue 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.697 11:27:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.697 11:27:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.697 11:27:17 -- setup/common.sh@33 -- # echo 1024 00:03:48.697 11:27:17 -- setup/common.sh@33 
-- # return 0 00:03:48.697 11:27:17 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.697 11:27:17 -- setup/hugepages.sh@112 -- # get_nodes 00:03:48.697 11:27:17 -- setup/hugepages.sh@27 -- # local node 00:03:48.697 11:27:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.697 11:27:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:48.697 11:27:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.697 11:27:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:48.697 11:27:17 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:48.697 11:27:17 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:48.697 11:27:17 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.697 11:27:17 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.697 11:27:17 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:48.697 11:27:17 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.697 11:27:17 -- setup/common.sh@18 -- # local node=0 00:03:48.697 11:27:17 -- setup/common.sh@19 -- # local var val 00:03:48.697 11:27:17 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.697 11:27:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.697 11:27:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:48.697 11:27:17 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:48.697 11:27:17 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.697 11:27:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.697 11:27:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.697 11:27:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.697 11:27:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 20742168 kB' 'MemUsed: 11849916 kB' 'SwapCached: 0 kB' 'Active: 7906516 kB' 'Inactive: 269776 kB' 'Active(anon): 7629208 kB' 'Inactive(anon): 0 kB' 'Active(file): 277308 kB' 'Inactive(file): 269776 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7829280 kB' 'Mapped: 91096 kB' 'AnonPages: 350284 kB' 'Shmem: 7282196 kB' 'KernelStack: 12008 kB' 'PageTables: 5928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 160156 kB' 'Slab: 502396 kB' 'SReclaimable: 160156 kB' 'SUnreclaim: 342240 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:48.697 11:27:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.697 11:27:18 -- setup/common.sh@32 -- # continue 00:03:48.697 11:27:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.697 11:27:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.697 11:27:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.697 11:27:18 -- setup/common.sh@32 -- # continue 00:03:48.697 11:27:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.697 11:27:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.697 11:27:18 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.697 11:27:18 -- setup/common.sh@32 -- # continue 00:03:48.697 11:27:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.697 11:27:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.697 11:27:18 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.697 11:27:18 -- setup/common.sh@32 -- # continue 00:03:48.697 11:27:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.697 11:27:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.697 11:27:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.697 11:27:18 -- setup/common.sh@32 -- # continue 00:03:48.697 11:27:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.697 11:27:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.697 11:27:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.697 11:27:18 -- setup/common.sh@32 -- # continue 00:03:48.697 11:27:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.697 11:27:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.697 11:27:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.697 11:27:18 -- setup/common.sh@32 -- # continue 00:03:48.697 11:27:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.697 11:27:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.697 11:27:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.697 11:27:18 -- setup/common.sh@32 -- # continue 00:03:48.697 11:27:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # continue 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # continue 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # continue 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # continue 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # continue 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # continue 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # continue 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # continue 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.698 11:27:18 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # continue 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # continue 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # continue 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # continue 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # continue 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # continue 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # continue 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # continue 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # continue 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # continue 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # continue 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # continue 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.698 11:27:18 -- setup/common.sh@32 
-- # continue 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # continue 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # continue 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # continue 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # continue 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # continue 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # continue 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # continue 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.698 11:27:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.698 11:27:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.698 11:27:18 -- setup/common.sh@33 -- # echo 0 00:03:48.698 11:27:18 -- setup/common.sh@33 -- # return 0 00:03:48.698 11:27:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.698 11:27:18 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.698 11:27:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.698 11:27:18 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:48.698 11:27:18 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:48.698 node0=1024 expecting 1024 00:03:48.698 11:27:18 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:48.698 00:03:48.698 real 0m8.517s 00:03:48.698 user 0m3.100s 00:03:48.698 sys 0m5.582s 00:03:48.698 11:27:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.698 11:27:18 -- common/autotest_common.sh@10 -- # set +x 00:03:48.698 ************************************ 00:03:48.698 END TEST no_shrink_alloc 00:03:48.698 ************************************ 00:03:48.698 11:27:18 -- setup/hugepages.sh@217 -- # clear_hp 00:03:48.698 11:27:18 -- setup/hugepages.sh@37 -- # local node hp 00:03:48.698 11:27:18 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:48.698 11:27:18 -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:48.698 11:27:18 -- setup/hugepages.sh@41 -- # echo 0 00:03:48.698 11:27:18 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:48.698 11:27:18 -- setup/hugepages.sh@41 -- # echo 0 00:03:48.698 11:27:18 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:48.698 11:27:18 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:48.698 11:27:18 -- setup/hugepages.sh@41 -- # echo 0 00:03:48.698 11:27:18 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:48.698 11:27:18 -- setup/hugepages.sh@41 -- # echo 0 00:03:48.698 11:27:18 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:48.698 11:27:18 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:48.698 00:03:48.698 real 0m33.074s 00:03:48.698 user 0m11.480s 00:03:48.698 sys 0m20.263s 00:03:48.698 11:27:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.698 11:27:18 -- common/autotest_common.sh@10 -- # set +x 00:03:48.698 ************************************ 00:03:48.698 END TEST hugepages 00:03:48.698 ************************************ 00:03:48.698 11:27:18 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:03:48.698 11:27:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:48.698 11:27:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:48.698 11:27:18 -- common/autotest_common.sh@10 -- # set +x 00:03:48.698 ************************************ 00:03:48.698 START TEST driver 00:03:48.698 ************************************ 00:03:48.698 11:27:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:03:48.958 * Looking for test storage... 
00:03:48.958 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:48.958 11:27:18 -- setup/driver.sh@68 -- # setup reset 00:03:48.958 11:27:18 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:48.958 11:27:18 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:55.587 11:27:23 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:55.587 11:27:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:55.587 11:27:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:55.587 11:27:23 -- common/autotest_common.sh@10 -- # set +x 00:03:55.587 ************************************ 00:03:55.587 START TEST guess_driver 00:03:55.587 ************************************ 00:03:55.587 11:27:23 -- common/autotest_common.sh@1104 -- # guess_driver 00:03:55.587 11:27:23 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:55.587 11:27:23 -- setup/driver.sh@47 -- # local fail=0 00:03:55.587 11:27:23 -- setup/driver.sh@49 -- # pick_driver 00:03:55.587 11:27:23 -- setup/driver.sh@36 -- # vfio 00:03:55.587 11:27:23 -- setup/driver.sh@21 -- # local iommu_groups 00:03:55.587 11:27:23 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:55.587 11:27:23 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:55.587 11:27:23 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:55.587 11:27:23 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:55.587 11:27:23 -- setup/driver.sh@29 -- # (( 256 > 0 )) 00:03:55.587 11:27:23 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:55.587 11:27:23 -- setup/driver.sh@14 -- # mod vfio_pci 00:03:55.587 11:27:23 -- setup/driver.sh@12 -- # dep vfio_pci 00:03:55.587 11:27:23 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:55.587 11:27:23 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:55.587 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:55.587 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:55.587 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:55.587 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:55.587 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:55.587 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:55.587 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:55.587 11:27:23 -- setup/driver.sh@30 -- # return 0 00:03:55.587 11:27:23 -- setup/driver.sh@37 -- # echo vfio-pci 00:03:55.587 11:27:23 -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:55.587 11:27:23 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:55.587 11:27:23 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:55.587 Looking for driver=vfio-pci 00:03:55.587 11:27:23 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:55.587 11:27:23 -- setup/driver.sh@45 -- # setup output config 00:03:55.587 11:27:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.587 11:27:23 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:58.864 11:27:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:58.864 11:27:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
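By this point pick_driver has settled on vfio-pci: the host exposes 256 IOMMU groups, unsafe-noiommu mode is off (unsafe_vfio=N), and modprobe --show-depends resolves the full vfio_pci module chain (irqbypass, iommufd, vfio, vfio_iommu_type1, vfio-pci-core, vfio-pci). A minimal sketch of that decision, with pick_vfio as an invented name for the logic traced at setup/driver.sh@21-37:

    # Minimal sketch of the vfio-pci pick traced above; the checks mirror the log.
    pick_vfio() {
        local iommu_groups
        iommu_groups=(/sys/kernel/iommu_groups/*)
        # Populated IOMMU groups are the prerequisite; this host had 256, hence
        # the "(( 256 > 0 ))" in the trace. (Assumes the glob actually matched.)
        (( ${#iommu_groups[@]} > 0 )) || return 1
        # modprobe --show-depends prints one insmod line per dependency when the
        # module resolves; any ".ko" in the output means vfio_pci is loadable.
        [[ $(modprobe --show-depends vfio_pci 2>/dev/null) == *.ko* ]] || return 1
        echo vfio-pci
    }

Because the probe succeeds, the 'No valid driver found' comparison right after it is a no-op, and the config pass that follows only has to confirm vfio-pci on each marker line.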
00:03:58.864 11:27:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:58.864 11:27:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:58.864 11:27:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:58.864 11:27:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:58.864 11:27:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:58.864 11:27:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:58.864 11:27:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:58.864 11:27:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:58.864 11:27:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:58.864 11:27:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:58.864 11:27:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:58.864 11:27:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:58.864 11:27:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:58.864 11:27:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:58.864 11:27:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:58.864 11:27:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:58.864 11:27:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:58.864 11:27:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:58.864 11:27:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:58.864 11:27:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:58.864 11:27:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:58.864 11:27:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:58.864 11:27:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:58.864 11:27:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:58.864 11:27:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:58.864 11:27:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:58.864 11:27:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:58.864 11:27:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:58.864 11:27:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:58.864 11:27:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:58.864 11:27:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:58.864 11:27:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:58.864 11:27:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:58.864 11:27:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:58.864 11:27:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:58.864 11:27:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:58.864 11:27:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:58.864 11:27:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:58.864 11:27:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:58.864 11:27:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:58.864 11:27:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:58.864 11:27:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:58.864 11:27:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:58.864 11:27:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:58.864 11:27:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:58.864 11:27:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.761 11:27:30 -- setup/driver.sh@58 -- # [[ -> == \-\> 
]] 00:04:00.761 11:27:30 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:00.761 11:27:30 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.761 11:27:30 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:00.761 11:27:30 -- setup/driver.sh@65 -- # setup reset 00:04:00.761 11:27:30 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:00.761 11:27:30 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:07.311 00:04:07.312 real 0m11.744s 00:04:07.312 user 0m3.125s 00:04:07.312 sys 0m5.977s 00:04:07.312 11:27:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.312 11:27:35 -- common/autotest_common.sh@10 -- # set +x 00:04:07.312 ************************************ 00:04:07.312 END TEST guess_driver 00:04:07.312 ************************************ 00:04:07.312 00:04:07.312 real 0m17.512s 00:04:07.312 user 0m4.860s 00:04:07.312 sys 0m9.257s 00:04:07.312 11:27:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.312 11:27:35 -- common/autotest_common.sh@10 -- # set +x 00:04:07.312 ************************************ 00:04:07.312 END TEST driver 00:04:07.312 ************************************ 00:04:07.312 11:27:35 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:07.312 11:27:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:07.312 11:27:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:07.312 11:27:35 -- common/autotest_common.sh@10 -- # set +x 00:04:07.312 ************************************ 00:04:07.312 START TEST devices 00:04:07.312 ************************************ 00:04:07.312 11:27:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:07.312 * Looking for test storage... 
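The devices suite starting here first selects a test disk. The scan in the following lines excludes zoned namespaces, skips disks that already carry a partition table ('No valid GPT data, bailing' is the good case), and requires min_disk_size=3221225472 (3 GiB). A rough sketch of that filter, using plain blkid in place of the spdk-gpt.py helper:

    # Rough sketch of the test-disk filter the devices suite runs below.
    min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472 bytes, as in the trace
    for block in /sys/block/nvme*; do
        dev=${block##*/}
        # Zoned (ZNS) namespaces are skipped; regular disks report "none" here.
        [[ $(< "$block/queue/zoned") == none ]] || continue
        # An empty PTTYPE means no existing partition table, i.e. the disk is free.
        [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]] || continue
        size=$(( $(< "$block/size") * 512 ))    # sysfs size counts 512-byte sectors
        (( size >= min_disk_size )) && echo "candidate: /dev/$dev ($size bytes)"
    done

On this machine the survivor is nvme0n1 at 0000:d8:00.0 with 2000398934016 bytes, which becomes the declared test_disk.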
00:04:07.312 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:07.312 11:27:35 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:07.312 11:27:35 -- setup/devices.sh@192 -- # setup reset 00:04:07.312 11:27:35 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:07.312 11:27:35 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:11.495 11:27:40 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:11.495 11:27:40 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:11.495 11:27:40 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:11.495 11:27:40 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:11.495 11:27:40 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:11.495 11:27:40 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:11.495 11:27:40 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:11.495 11:27:40 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:11.495 11:27:40 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:11.495 11:27:40 -- setup/devices.sh@196 -- # blocks=() 00:04:11.495 11:27:40 -- setup/devices.sh@196 -- # declare -a blocks 00:04:11.495 11:27:40 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:11.495 11:27:40 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:11.495 11:27:40 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:11.495 11:27:40 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:11.495 11:27:40 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:11.495 11:27:40 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:11.495 11:27:40 -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:04:11.495 11:27:40 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:04:11.495 11:27:40 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:11.495 11:27:40 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:11.495 11:27:40 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:11.495 No valid GPT data, bailing 00:04:11.495 11:27:40 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:11.495 11:27:40 -- scripts/common.sh@393 -- # pt= 00:04:11.495 11:27:40 -- scripts/common.sh@394 -- # return 1 00:04:11.495 11:27:40 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:11.495 11:27:40 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:11.495 11:27:40 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:11.495 11:27:40 -- setup/common.sh@80 -- # echo 2000398934016 00:04:11.495 11:27:40 -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:04:11.495 11:27:40 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:11.495 11:27:40 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:04:11.495 11:27:40 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:11.495 11:27:40 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:11.495 11:27:40 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:11.495 11:27:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:11.495 11:27:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:11.495 11:27:40 -- common/autotest_common.sh@10 -- # set +x 00:04:11.495 ************************************ 00:04:11.495 START TEST nvme_mount 00:04:11.495 ************************************ 00:04:11.495 11:27:40 -- 
common/autotest_common.sh@1104 -- # nvme_mount 00:04:11.495 11:27:40 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:11.495 11:27:40 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:11.495 11:27:40 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:11.495 11:27:40 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:11.495 11:27:40 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:11.495 11:27:40 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:11.495 11:27:40 -- setup/common.sh@40 -- # local part_no=1 00:04:11.495 11:27:40 -- setup/common.sh@41 -- # local size=1073741824 00:04:11.495 11:27:40 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:11.495 11:27:40 -- setup/common.sh@44 -- # parts=() 00:04:11.495 11:27:40 -- setup/common.sh@44 -- # local parts 00:04:11.495 11:27:40 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:11.495 11:27:40 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:11.495 11:27:40 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:11.495 11:27:40 -- setup/common.sh@46 -- # (( part++ )) 00:04:11.495 11:27:40 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:11.495 11:27:40 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:11.495 11:27:40 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:11.495 11:27:40 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:12.060 Creating new GPT entries in memory. 00:04:12.060 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:12.060 other utilities. 00:04:12.060 11:27:41 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:12.060 11:27:41 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:12.060 11:27:41 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:12.061 11:27:41 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:12.061 11:27:41 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:12.994 Creating new GPT entries in memory. 00:04:12.994 The operation has completed successfully. 
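The partitioning above is driven by setup/common.sh: the 1073741824-byte size is divided by 512 to get 2097152 sectors, so partition 1 spans sectors 2048 through 2099199, exactly 1 GiB. A hand-run equivalent, assuming the same /dev/nvme0n1 target:

  sgdisk /dev/nvme0n1 --zap-all                                # destroy any existing GPT/MBR structures
  flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199  # 2097152 sectors = 1 GiB partition
  # the suite then blocks on scripts/sync_dev_uevents.sh for the partition
  # uevent before touching /dev/nvme0n1p1, as the trace above shows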
00:04:12.994 11:27:42 -- setup/common.sh@57 -- # (( part++ )) 00:04:12.994 11:27:42 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:12.994 11:27:42 -- setup/common.sh@62 -- # wait 2138063 00:04:12.994 11:27:42 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:12.994 11:27:42 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:12.994 11:27:42 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:12.994 11:27:42 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:12.994 11:27:42 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:12.994 11:27:42 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:12.994 11:27:42 -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:12.994 11:27:42 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:12.994 11:27:42 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:12.994 11:27:42 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:12.994 11:27:42 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:12.994 11:27:42 -- setup/devices.sh@53 -- # local found=0 00:04:12.994 11:27:42 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:12.994 11:27:42 -- setup/devices.sh@56 -- # : 00:04:12.994 11:27:42 -- setup/devices.sh@59 -- # local pci status 00:04:12.994 11:27:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.995 11:27:42 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:12.995 11:27:42 -- setup/devices.sh@47 -- # setup output config 00:04:12.995 11:27:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.995 11:27:42 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:17.176 11:27:46 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:17.176 11:27:46 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:17.176 11:27:46 -- setup/devices.sh@63 -- # found=1 00:04:17.176 11:27:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.176 11:27:46 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:17.176 11:27:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.176 11:27:46 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:17.176 11:27:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.176 11:27:46 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:17.176 11:27:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.176 11:27:46 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:17.177 11:27:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.177 11:27:46 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:17.177 11:27:46 -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:17.177 11:27:46 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:17.177 11:27:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.177 11:27:46 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:17.177 11:27:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.177 11:27:46 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:17.177 11:27:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.177 11:27:46 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:17.177 11:27:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.177 11:27:46 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:17.177 11:27:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.177 11:27:46 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:17.177 11:27:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.177 11:27:46 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:17.177 11:27:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.177 11:27:46 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:17.177 11:27:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.177 11:27:46 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:17.177 11:27:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.177 11:27:46 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:17.177 11:27:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.177 11:27:46 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:17.177 11:27:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.177 11:27:46 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:17.177 11:27:46 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:17.177 11:27:46 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.177 11:27:46 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:17.177 11:27:46 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:17.177 11:27:46 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:17.177 11:27:46 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.177 11:27:46 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.177 11:27:46 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:17.177 11:27:46 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:17.177 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:17.177 11:27:46 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:17.177 11:27:46 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:17.435 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:17.435 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:04:17.435 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:17.435 /dev/nvme0n1: calling ioctl to re-read partition table: Success 
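cleanup_nvme unmounts and then scrubs on-disk signatures with wipefs; the hex bytes in its output are the magics being erased: 53 ef is the ext4 superblock magic at offset 0x438, 45 46 49 20 50 41 52 54 spells "EFI PART" for the primary and backup GPT headers, and 55 aa at offset 0x1fe is the protective-MBR boot signature. A minimal manual equivalent of the cleanup, using the same paths as above:

  umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount
  wipefs --all /dev/nvme0n1p1   # clears the ext4 magic
  wipefs --all /dev/nvme0n1     # clears both GPT headers plus the PMBR signature

With the signatures gone, the suite re-formats the bare disk next (mkfs.ext4 -qF /dev/nvme0n1 1024M, capping the filesystem at 1 GiB of the 2 TB namespace) to verify mounting without a partition table.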
00:04:17.435 11:27:46 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:17.435 11:27:46 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:17.435 11:27:46 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.435 11:27:46 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:17.435 11:27:46 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:17.693 11:27:46 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.693 11:27:46 -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:17.693 11:27:46 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:17.693 11:27:46 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:17.693 11:27:46 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.694 11:27:46 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:17.694 11:27:46 -- setup/devices.sh@53 -- # local found=0 00:04:17.694 11:27:46 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:17.694 11:27:46 -- setup/devices.sh@56 -- # : 00:04:17.694 11:27:46 -- setup/devices.sh@59 -- # local pci status 00:04:17.694 11:27:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.694 11:27:46 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:17.694 11:27:46 -- setup/devices.sh@47 -- # setup output config 00:04:17.694 11:27:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.694 11:27:46 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:21.045 11:27:50 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:21.045 11:27:50 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:21.045 11:27:50 -- setup/devices.sh@63 -- # found=1 00:04:21.045 11:27:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.303 11:27:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:21.303 11:27:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.303 11:27:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:21.303 11:27:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.303 11:27:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:21.303 11:27:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.303 11:27:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:21.303 11:27:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.303 11:27:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:21.303 11:27:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.303 11:27:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:21.303 11:27:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:04:21.303 11:27:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:21.303 11:27:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.303 11:27:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:21.303 11:27:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.303 11:27:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:21.303 11:27:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.303 11:27:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:21.303 11:27:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.303 11:27:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:21.303 11:27:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.303 11:27:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:21.303 11:27:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.303 11:27:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:21.303 11:27:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.303 11:27:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:21.303 11:27:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.303 11:27:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:21.303 11:27:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.303 11:27:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:21.303 11:27:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.303 11:27:50 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:21.303 11:27:50 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:21.303 11:27:50 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.303 11:27:50 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:21.303 11:27:50 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:21.303 11:27:50 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.560 11:27:50 -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:04:21.560 11:27:50 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:21.560 11:27:50 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:21.560 11:27:50 -- setup/devices.sh@50 -- # local mount_point= 00:04:21.560 11:27:50 -- setup/devices.sh@51 -- # local test_file= 00:04:21.560 11:27:50 -- setup/devices.sh@53 -- # local found=0 00:04:21.560 11:27:50 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:21.560 11:27:50 -- setup/devices.sh@59 -- # local pci status 00:04:21.560 11:27:50 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:21.560 11:27:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.560 11:27:50 -- setup/devices.sh@47 -- # setup output config 00:04:21.560 11:27:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.560 11:27:50 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:25.746 11:27:54 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.746 11:27:54 -- setup/devices.sh@62 -- # [[ Active 
devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:25.746 11:27:54 -- setup/devices.sh@63 -- # found=1 00:04:25.746 11:27:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.746 11:27:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.746 11:27:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.746 11:27:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.746 11:27:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.746 11:27:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.746 11:27:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.746 11:27:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.746 11:27:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.746 11:27:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.746 11:27:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.746 11:27:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.746 11:27:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.746 11:27:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.746 11:27:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.746 11:27:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.746 11:27:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.746 11:27:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.746 11:27:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.746 11:27:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.746 11:27:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.746 11:27:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.746 11:27:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.746 11:27:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.746 11:27:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.746 11:27:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.746 11:27:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.746 11:27:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.746 11:27:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.746 11:27:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.746 11:27:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.746 11:27:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:25.746 11:27:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.746 11:27:54 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:25.746 11:27:54 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:25.746 11:27:54 -- setup/devices.sh@68 -- # return 0 00:04:25.746 11:27:54 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:25.746 11:27:54 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.746 11:27:54 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:25.746 11:27:54 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:25.746 11:27:54 -- setup/devices.sh@28 -- # wipefs 
--all /dev/nvme0n1 00:04:25.746 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:25.746 00:04:25.746 real 0m14.736s 00:04:25.746 user 0m4.393s 00:04:25.746 sys 0m8.226s 00:04:25.746 11:27:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.746 11:27:54 -- common/autotest_common.sh@10 -- # set +x 00:04:25.746 ************************************ 00:04:25.746 END TEST nvme_mount 00:04:25.746 ************************************ 00:04:25.746 11:27:54 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:25.746 11:27:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:25.746 11:27:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:25.747 11:27:54 -- common/autotest_common.sh@10 -- # set +x 00:04:25.747 ************************************ 00:04:25.747 START TEST dm_mount 00:04:25.747 ************************************ 00:04:25.747 11:27:54 -- common/autotest_common.sh@1104 -- # dm_mount 00:04:25.747 11:27:54 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:25.747 11:27:54 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:25.747 11:27:54 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:25.747 11:27:54 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:25.747 11:27:54 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:25.747 11:27:54 -- setup/common.sh@40 -- # local part_no=2 00:04:25.747 11:27:54 -- setup/common.sh@41 -- # local size=1073741824 00:04:25.747 11:27:54 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:25.747 11:27:54 -- setup/common.sh@44 -- # parts=() 00:04:25.747 11:27:54 -- setup/common.sh@44 -- # local parts 00:04:25.747 11:27:54 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:25.747 11:27:54 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:25.747 11:27:54 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:25.747 11:27:54 -- setup/common.sh@46 -- # (( part++ )) 00:04:25.747 11:27:54 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:25.747 11:27:54 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:25.747 11:27:54 -- setup/common.sh@46 -- # (( part++ )) 00:04:25.747 11:27:54 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:25.747 11:27:54 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:25.747 11:27:54 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:25.747 11:27:54 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:26.681 Creating new GPT entries in memory. 00:04:26.681 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:26.681 other utilities. 00:04:26.681 11:27:55 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:26.681 11:27:55 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:26.681 11:27:55 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:26.681 11:27:55 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:26.681 11:27:55 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:27.613 Creating new GPT entries in memory. 00:04:27.613 The operation has completed successfully. 00:04:27.613 11:27:57 -- setup/common.sh@57 -- # (( part++ )) 00:04:27.613 11:27:57 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:27.613 11:27:57 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:27.613 11:27:57 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:27.613 11:27:57 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:28.985 The operation has completed successfully. 00:04:28.985 11:27:58 -- setup/common.sh@57 -- # (( part++ )) 00:04:28.985 11:27:58 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:28.985 11:27:58 -- setup/common.sh@62 -- # wait 2143429 00:04:28.985 11:27:58 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:28.985 11:27:58 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:28.985 11:27:58 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:28.985 11:27:58 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:28.985 11:27:58 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:28.985 11:27:58 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:28.985 11:27:58 -- setup/devices.sh@161 -- # break 00:04:28.985 11:27:58 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:28.985 11:27:58 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:28.985 11:27:58 -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:04:28.985 11:27:58 -- setup/devices.sh@166 -- # dm=dm-2 00:04:28.985 11:27:58 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:04:28.985 11:27:58 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:04:28.985 11:27:58 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:28.985 11:27:58 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount size= 00:04:28.985 11:27:58 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:28.985 11:27:58 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:28.985 11:27:58 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:28.985 11:27:58 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:28.985 11:27:58 -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:28.985 11:27:58 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:28.985 11:27:58 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:28.985 11:27:58 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:28.985 11:27:58 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:28.985 11:27:58 -- setup/devices.sh@53 -- # local found=0 00:04:28.985 11:27:58 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:28.985 11:27:58 -- setup/devices.sh@56 -- # : 00:04:28.985 11:27:58 -- setup/devices.sh@59 -- # local pci status 00:04:28.985 11:27:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.985 11:27:58 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:28.985 11:27:58 -- setup/devices.sh@47 -- # setup output config 
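dm_mount carves two 1 GiB partitions (sectors 2048-2099199 and 2099200-4196351) and stacks a device-mapper node over them, then checks the sysfs holders links to confirm dm-2 claims both. dmsetup create reads its mapping table from stdin; the table itself is not echoed in the log, so the linear concatenation below is an assumed sketch:

  dmsetup create nvme_dm_test <<'EOF'
  0 2097152 linear /dev/nvme0n1p1 0
  2097152 2097152 linear /dev/nvme0n1p2 0
  EOF
  readlink -f /dev/mapper/nvme_dm_test    # resolves to /dev/dm-2 in this run
  ls /sys/class/block/nvme0n1p1/holders   # dm-2 is listed for both partitions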
00:04:28.985 11:27:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.986 11:27:58 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:33.163 11:28:01 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.163 11:28:01 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:33.163 11:28:01 -- setup/devices.sh@63 -- # found=1 00:04:33.163 11:28:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.163 11:28:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.163 11:28:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.163 11:28:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.163 11:28:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.163 11:28:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.163 11:28:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.163 11:28:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.163 11:28:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.163 11:28:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.163 11:28:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.163 11:28:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.163 11:28:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.163 11:28:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.163 11:28:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.163 11:28:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.163 11:28:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.163 11:28:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.163 11:28:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.163 11:28:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.163 11:28:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.163 11:28:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.163 11:28:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.163 11:28:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.163 11:28:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.164 11:28:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.164 11:28:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.164 11:28:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.164 11:28:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.164 11:28:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.164 11:28:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.164 11:28:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.164 11:28:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.164 11:28:02 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:33.164 11:28:02 -- setup/devices.sh@68 -- # [[ -n 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:33.164 11:28:02 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:33.164 11:28:02 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:33.164 11:28:02 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:33.164 11:28:02 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:33.164 11:28:02 -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:04:33.164 11:28:02 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:33.164 11:28:02 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:04:33.164 11:28:02 -- setup/devices.sh@50 -- # local mount_point= 00:04:33.164 11:28:02 -- setup/devices.sh@51 -- # local test_file= 00:04:33.164 11:28:02 -- setup/devices.sh@53 -- # local found=0 00:04:33.164 11:28:02 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:33.164 11:28:02 -- setup/devices.sh@59 -- # local pci status 00:04:33.164 11:28:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.164 11:28:02 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:33.164 11:28:02 -- setup/devices.sh@47 -- # setup output config 00:04:33.164 11:28:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.164 11:28:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:37.342 11:28:05 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.342 11:28:05 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:04:37.342 11:28:05 -- setup/devices.sh@63 -- # found=1 00:04:37.342 11:28:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.342 11:28:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.342 11:28:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.342 11:28:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.342 11:28:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.342 11:28:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.342 11:28:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.342 11:28:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.342 11:28:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.342 11:28:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.342 11:28:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.342 11:28:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.342 11:28:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.342 11:28:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.342 11:28:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.342 11:28:06 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.343 11:28:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.343 11:28:06 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.343 11:28:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.343 11:28:06 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.343 11:28:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.343 11:28:06 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.343 11:28:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.343 11:28:06 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.343 11:28:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.343 11:28:06 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.343 11:28:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.343 11:28:06 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.343 11:28:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.343 11:28:06 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.343 11:28:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.343 11:28:06 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.343 11:28:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.343 11:28:06 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:37.343 11:28:06 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:37.343 11:28:06 -- setup/devices.sh@68 -- # return 0 00:04:37.343 11:28:06 -- setup/devices.sh@187 -- # cleanup_dm 00:04:37.343 11:28:06 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:37.343 11:28:06 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:37.343 11:28:06 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:37.343 11:28:06 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:37.343 11:28:06 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:37.343 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:37.343 11:28:06 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:37.343 11:28:06 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:37.343 00:04:37.343 real 0m11.306s 00:04:37.343 user 0m2.968s 00:04:37.343 sys 0m5.474s 00:04:37.343 11:28:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.343 11:28:06 -- common/autotest_common.sh@10 -- # set +x 00:04:37.343 ************************************ 00:04:37.343 END TEST dm_mount 00:04:37.343 ************************************ 00:04:37.343 11:28:06 -- setup/devices.sh@1 -- # cleanup 00:04:37.343 11:28:06 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:37.343 11:28:06 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.343 11:28:06 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:37.343 11:28:06 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:37.343 11:28:06 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:37.343 11:28:06 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:37.343 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:37.343 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:04:37.343 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:37.343 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:37.343 11:28:06 -- setup/devices.sh@12 
-- # cleanup_dm 00:04:37.343 11:28:06 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:37.343 11:28:06 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:37.343 11:28:06 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:37.343 11:28:06 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:37.343 11:28:06 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:37.343 11:28:06 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:37.343 00:04:37.343 real 0m30.941s 00:04:37.343 user 0m8.922s 00:04:37.343 sys 0m16.917s 00:04:37.343 11:28:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.343 11:28:06 -- common/autotest_common.sh@10 -- # set +x 00:04:37.343 ************************************ 00:04:37.343 END TEST devices 00:04:37.343 ************************************ 00:04:37.343 00:04:37.343 real 1m50.029s 00:04:37.343 user 0m34.053s 00:04:37.343 sys 1m3.714s 00:04:37.343 11:28:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.343 11:28:06 -- common/autotest_common.sh@10 -- # set +x 00:04:37.343 ************************************ 00:04:37.343 END TEST setup.sh 00:04:37.343 ************************************ 00:04:37.343 11:28:06 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:04:41.527 Hugepages 00:04:41.527 node hugesize free / total 00:04:41.527 node0 1048576kB 0 / 0 00:04:41.527 node0 2048kB 2048 / 2048 00:04:41.527 node1 1048576kB 0 / 0 00:04:41.527 node1 2048kB 0 / 0 00:04:41.527 00:04:41.527 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:41.527 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:41.527 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:41.527 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:41.527 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:41.527 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:41.527 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:41.527 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:41.527 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:41.527 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:41.527 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:41.527 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:41.527 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:41.527 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:41.527 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:41.527 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:41.527 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:41.527 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:41.527 11:28:10 -- spdk/autotest.sh@141 -- # uname -s 00:04:41.527 11:28:10 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:04:41.527 11:28:10 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:04:41.527 11:28:10 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:45.739 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:45.739 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:45.739 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:45.739 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:45.739 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:45.739 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:45.739 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:45.739 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:45.739 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:45.739 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 
00:04:45.739 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:45.739 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:45.739 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:45.739 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:45.739 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:45.739 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:47.111 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:47.367 11:28:16 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:48.311 11:28:17 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:48.311 11:28:17 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:48.311 11:28:17 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:04:48.311 11:28:17 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:04:48.311 11:28:17 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:48.311 11:28:17 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:48.311 11:28:17 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:48.311 11:28:17 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:48.311 11:28:17 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:48.311 11:28:17 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:48.311 11:28:17 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:04:48.311 11:28:17 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:52.492 Waiting for block devices as requested 00:04:52.492 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:52.492 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:52.492 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:52.492 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:52.492 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:52.492 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:52.492 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:52.492 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:52.750 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:52.750 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:52.750 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:53.008 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:53.008 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:53.008 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:53.267 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:53.267 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:53.267 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:04:53.525 11:28:22 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:04:53.525 11:28:22 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:04:53.525 11:28:22 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:53.525 11:28:22 -- common/autotest_common.sh@1487 -- # grep 0000:d8:00.0/nvme/nvme 00:04:53.525 11:28:22 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:53.525 11:28:22 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:04:53.525 11:28:22 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:53.525 11:28:22 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:53.525 11:28:22 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 
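get_nvme_ctrlr_from_bdf resolves a PCI address to its controller node purely through sysfs; the id-ctrl probes that follow then mask OACS bit 3 (0xe & 0x8 = 8) to confirm the drive supports namespace management before reading unvmcap. The sysfs walk condenses to roughly:

  bdf=0000:d8:00.0
  sys=$(readlink -f /sys/class/nvme/nvme0)   # .../0000:d7:00.0/0000:d8:00.0/nvme/nvme0
  [[ $sys == *"$bdf"/nvme/* ]] && nvme_ctrlr=/dev/$(basename "$sys")   # /dev/nvme0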
00:04:53.525 11:28:22 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:04:53.525 11:28:22 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:04:53.525 11:28:22 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:04:53.525 11:28:22 -- common/autotest_common.sh@1530 -- # grep oacs 00:04:53.525 11:28:22 -- common/autotest_common.sh@1530 -- # oacs=' 0xe' 00:04:53.525 11:28:22 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:04:53.525 11:28:22 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:04:53.525 11:28:22 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:04:53.525 11:28:22 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:04:53.525 11:28:22 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:04:53.525 11:28:22 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:04:53.525 11:28:22 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:04:53.525 11:28:22 -- common/autotest_common.sh@1542 -- # continue 00:04:53.525 11:28:22 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:04:53.525 11:28:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:53.525 11:28:22 -- common/autotest_common.sh@10 -- # set +x 00:04:53.525 11:28:22 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:04:53.525 11:28:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:53.525 11:28:22 -- common/autotest_common.sh@10 -- # set +x 00:04:53.525 11:28:22 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:57.701 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:57.701 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:57.701 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:57.701 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:57.701 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:57.701 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:57.701 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:57.701 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:57.701 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:57.701 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:57.701 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:57.701 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:57.701 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:57.701 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:57.701 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:57.701 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:59.600 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:59.858 11:28:29 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:04:59.858 11:28:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:59.858 11:28:29 -- common/autotest_common.sh@10 -- # set +x 00:04:59.858 11:28:29 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:04:59.858 11:28:29 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:59.858 11:28:29 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:59.858 11:28:29 -- common/autotest_common.sh@1562 -- # bdfs=() 00:04:59.858 11:28:29 -- common/autotest_common.sh@1562 -- # local bdfs 00:04:59.858 11:28:29 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:59.858 11:28:29 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:59.858 11:28:29 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:59.858 11:28:29 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:59.858 11:28:29 -- common/autotest_common.sh@1499 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:59.858 11:28:29 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:59.858 11:28:29 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:59.858 11:28:29 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:04:59.858 11:28:29 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:04:59.858 11:28:29 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:04:59.858 11:28:29 -- common/autotest_common.sh@1565 -- # device=0x0a54 00:04:59.858 11:28:29 -- common/autotest_common.sh@1566 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:59.858 11:28:29 -- common/autotest_common.sh@1567 -- # bdfs+=($bdf) 00:04:59.858 11:28:29 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:d8:00.0 00:04:59.858 11:28:29 -- common/autotest_common.sh@1577 -- # [[ -z 0000:d8:00.0 ]] 00:04:59.858 11:28:29 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=2154861 00:04:59.858 11:28:29 -- common/autotest_common.sh@1583 -- # waitforlisten 2154861 00:04:59.858 11:28:29 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:59.858 11:28:29 -- common/autotest_common.sh@819 -- # '[' -z 2154861 ']' 00:04:59.858 11:28:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.858 11:28:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:59.858 11:28:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.858 11:28:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:59.858 11:28:29 -- common/autotest_common.sh@10 -- # set +x 00:05:00.116 [2024-07-21 11:28:29.319834] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
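get_nvme_bdfs_by_id filters the controllers reported by gen_nvme.sh down to a single PCI device id; 0x0a54 (an Intel datacenter NVMe part) matches 0000:d8:00.0 here, so spdk_tgt is launched against it (the EAL parameter line that follows shows the resulting invocation). The filter amounts to:

  for bdf in $(/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'); do
    [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && echo "$bdf"
  done

Once the target is up, bdev_nvme_opal_revert is issued over JSON-RPC; this drive does not support Opal, so the -32602 error below is expected and tolerated (hence the true at common/autotest_common.sh@1589).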
00:05:00.116 [2024-07-21 11:28:29.319884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2154861 ] 00:05:00.116 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.116 [2024-07-21 11:28:29.405338] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.116 [2024-07-21 11:28:29.442199] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:00.116 [2024-07-21 11:28:29.442321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.683 11:28:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:00.684 11:28:30 -- common/autotest_common.sh@852 -- # return 0 00:05:00.684 11:28:30 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:05:00.976 11:28:30 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:05:00.976 11:28:30 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:05:04.259 nvme0n1 00:05:04.259 11:28:33 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:04.259 [2024-07-21 11:28:33.279842] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:04.259 request: 00:05:04.259 { 00:05:04.259 "nvme_ctrlr_name": "nvme0", 00:05:04.259 "password": "test", 00:05:04.259 "method": "bdev_nvme_opal_revert", 00:05:04.259 "req_id": 1 00:05:04.259 } 00:05:04.259 Got JSON-RPC error response 00:05:04.259 response: 00:05:04.259 { 00:05:04.259 "code": -32602, 00:05:04.259 "message": "Invalid parameters" 00:05:04.259 } 00:05:04.259 11:28:33 -- common/autotest_common.sh@1589 -- # true 00:05:04.259 11:28:33 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:05:04.259 11:28:33 -- common/autotest_common.sh@1593 -- # killprocess 2154861 00:05:04.259 11:28:33 -- common/autotest_common.sh@926 -- # '[' -z 2154861 ']' 00:05:04.259 11:28:33 -- common/autotest_common.sh@930 -- # kill -0 2154861 00:05:04.259 11:28:33 -- common/autotest_common.sh@931 -- # uname 00:05:04.259 11:28:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:04.259 11:28:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2154861 00:05:04.259 11:28:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:04.259 11:28:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:04.259 11:28:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2154861' 00:05:04.259 killing process with pid 2154861 00:05:04.259 11:28:33 -- common/autotest_common.sh@945 -- # kill 2154861 00:05:04.259 11:28:33 -- common/autotest_common.sh@950 -- # wait 2154861 00:05:06.814 11:28:35 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:05:06.814 11:28:35 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:05:06.814 11:28:35 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:06.814 11:28:35 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:06.814 11:28:35 -- spdk/autotest.sh@173 -- # timing_enter lib 00:05:06.814 11:28:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:06.814 11:28:35 -- common/autotest_common.sh@10 -- # set +x 00:05:06.814 11:28:35 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:06.814 11:28:35 -- 
common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:06.814 11:28:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:06.814 11:28:35 -- common/autotest_common.sh@10 -- # set +x 00:05:06.814 ************************************ 00:05:06.814 START TEST env 00:05:06.814 ************************************ 00:05:06.814 11:28:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:06.814 * Looking for test storage... 00:05:06.814 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:05:06.814 11:28:36 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:06.814 11:28:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:06.814 11:28:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:06.814 11:28:36 -- common/autotest_common.sh@10 -- # set +x 00:05:06.814 ************************************ 00:05:06.814 START TEST env_memory 00:05:06.814 ************************************ 00:05:06.814 11:28:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:06.814 00:05:06.814 00:05:06.814 CUnit - A unit testing framework for C - Version 2.1-3 00:05:06.814 http://cunit.sourceforge.net/ 00:05:06.814 00:05:06.814 00:05:06.814 Suite: memory 00:05:06.814 Test: alloc and free memory map ...[2024-07-21 11:28:36.100742] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:06.814 passed 00:05:06.814 Test: mem map translation ...[2024-07-21 11:28:36.119591] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:06.814 [2024-07-21 11:28:36.119607] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:06.814 [2024-07-21 11:28:36.119650] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:06.814 [2024-07-21 11:28:36.119660] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:06.814 passed 00:05:06.814 Test: mem map registration ...[2024-07-21 11:28:36.156464] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:06.814 [2024-07-21 11:28:36.156481] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:06.814 passed 00:05:06.814 Test: mem map adjacent registrations ...passed 00:05:06.814 00:05:06.814 Run Summary: Type Total Ran Passed Failed Inactive 00:05:06.814 suites 1 1 n/a 0 0 00:05:06.814 tests 4 4 4 0 0 00:05:06.814 asserts 152 152 152 0 n/a 00:05:06.814 00:05:06.814 Elapsed time = 0.136 seconds 00:05:06.814 00:05:06.814 real 0m0.150s 00:05:06.814 user 0m0.142s 00:05:06.814 sys 0m0.007s 00:05:06.814 11:28:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.814 11:28:36 -- common/autotest_common.sh@10 -- # set +x 00:05:06.814 ************************************ 
00:05:06.814 END TEST env_memory 00:05:06.814 ************************************ 00:05:07.073 11:28:36 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:07.073 11:28:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:07.073 11:28:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:07.073 11:28:36 -- common/autotest_common.sh@10 -- # set +x 00:05:07.073 ************************************ 00:05:07.073 START TEST env_vtophys 00:05:07.073 ************************************ 00:05:07.073 11:28:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:07.073 EAL: lib.eal log level changed from notice to debug 00:05:07.073 EAL: Detected lcore 0 as core 0 on socket 0 00:05:07.073 EAL: Detected lcore 1 as core 1 on socket 0 00:05:07.073 EAL: Detected lcore 2 as core 2 on socket 0 00:05:07.073 EAL: Detected lcore 3 as core 3 on socket 0 00:05:07.073 EAL: Detected lcore 4 as core 4 on socket 0 00:05:07.073 EAL: Detected lcore 5 as core 5 on socket 0 00:05:07.073 EAL: Detected lcore 6 as core 6 on socket 0 00:05:07.073 EAL: Detected lcore 7 as core 8 on socket 0 00:05:07.073 EAL: Detected lcore 8 as core 9 on socket 0 00:05:07.073 EAL: Detected lcore 9 as core 10 on socket 0 00:05:07.073 EAL: Detected lcore 10 as core 11 on socket 0 00:05:07.073 EAL: Detected lcore 11 as core 12 on socket 0 00:05:07.073 EAL: Detected lcore 12 as core 13 on socket 0 00:05:07.073 EAL: Detected lcore 13 as core 14 on socket 0 00:05:07.073 EAL: Detected lcore 14 as core 16 on socket 0 00:05:07.073 EAL: Detected lcore 15 as core 17 on socket 0 00:05:07.073 EAL: Detected lcore 16 as core 18 on socket 0 00:05:07.073 EAL: Detected lcore 17 as core 19 on socket 0 00:05:07.073 EAL: Detected lcore 18 as core 20 on socket 0 00:05:07.073 EAL: Detected lcore 19 as core 21 on socket 0 00:05:07.073 EAL: Detected lcore 20 as core 22 on socket 0 00:05:07.073 EAL: Detected lcore 21 as core 24 on socket 0 00:05:07.073 EAL: Detected lcore 22 as core 25 on socket 0 00:05:07.073 EAL: Detected lcore 23 as core 26 on socket 0 00:05:07.073 EAL: Detected lcore 24 as core 27 on socket 0 00:05:07.073 EAL: Detected lcore 25 as core 28 on socket 0 00:05:07.073 EAL: Detected lcore 26 as core 29 on socket 0 00:05:07.073 EAL: Detected lcore 27 as core 30 on socket 0 00:05:07.073 EAL: Detected lcore 28 as core 0 on socket 1 00:05:07.073 EAL: Detected lcore 29 as core 1 on socket 1 00:05:07.073 EAL: Detected lcore 30 as core 2 on socket 1 00:05:07.073 EAL: Detected lcore 31 as core 3 on socket 1 00:05:07.073 EAL: Detected lcore 32 as core 4 on socket 1 00:05:07.073 EAL: Detected lcore 33 as core 5 on socket 1 00:05:07.073 EAL: Detected lcore 34 as core 6 on socket 1 00:05:07.073 EAL: Detected lcore 35 as core 8 on socket 1 00:05:07.073 EAL: Detected lcore 36 as core 9 on socket 1 00:05:07.073 EAL: Detected lcore 37 as core 10 on socket 1 00:05:07.073 EAL: Detected lcore 38 as core 11 on socket 1 00:05:07.073 EAL: Detected lcore 39 as core 12 on socket 1 00:05:07.073 EAL: Detected lcore 40 as core 13 on socket 1 00:05:07.073 EAL: Detected lcore 41 as core 14 on socket 1 00:05:07.073 EAL: Detected lcore 42 as core 16 on socket 1 00:05:07.073 EAL: Detected lcore 43 as core 17 on socket 1 00:05:07.073 EAL: Detected lcore 44 as core 18 on socket 1 00:05:07.073 EAL: Detected lcore 45 as core 19 on socket 1 00:05:07.073 EAL: Detected lcore 46 as core 20 on socket 1 00:05:07.073 EAL: Detected lcore 47 as 
core 21 on socket 1 00:05:07.073 EAL: Detected lcore 48 as core 22 on socket 1 00:05:07.073 EAL: Detected lcore 49 as core 24 on socket 1 00:05:07.073 EAL: Detected lcore 50 as core 25 on socket 1 00:05:07.073 EAL: Detected lcore 51 as core 26 on socket 1 00:05:07.073 EAL: Detected lcore 52 as core 27 on socket 1 00:05:07.073 EAL: Detected lcore 53 as core 28 on socket 1 00:05:07.073 EAL: Detected lcore 54 as core 29 on socket 1 00:05:07.073 EAL: Detected lcore 55 as core 30 on socket 1 00:05:07.073 EAL: Detected lcore 56 as core 0 on socket 0 00:05:07.073 EAL: Detected lcore 57 as core 1 on socket 0 00:05:07.073 EAL: Detected lcore 58 as core 2 on socket 0 00:05:07.073 EAL: Detected lcore 59 as core 3 on socket 0 00:05:07.073 EAL: Detected lcore 60 as core 4 on socket 0 00:05:07.073 EAL: Detected lcore 61 as core 5 on socket 0 00:05:07.073 EAL: Detected lcore 62 as core 6 on socket 0 00:05:07.073 EAL: Detected lcore 63 as core 8 on socket 0 00:05:07.073 EAL: Detected lcore 64 as core 9 on socket 0 00:05:07.073 EAL: Detected lcore 65 as core 10 on socket 0 00:05:07.073 EAL: Detected lcore 66 as core 11 on socket 0 00:05:07.073 EAL: Detected lcore 67 as core 12 on socket 0 00:05:07.073 EAL: Detected lcore 68 as core 13 on socket 0 00:05:07.073 EAL: Detected lcore 69 as core 14 on socket 0 00:05:07.073 EAL: Detected lcore 70 as core 16 on socket 0 00:05:07.073 EAL: Detected lcore 71 as core 17 on socket 0 00:05:07.073 EAL: Detected lcore 72 as core 18 on socket 0 00:05:07.073 EAL: Detected lcore 73 as core 19 on socket 0 00:05:07.073 EAL: Detected lcore 74 as core 20 on socket 0 00:05:07.073 EAL: Detected lcore 75 as core 21 on socket 0 00:05:07.073 EAL: Detected lcore 76 as core 22 on socket 0 00:05:07.073 EAL: Detected lcore 77 as core 24 on socket 0 00:05:07.073 EAL: Detected lcore 78 as core 25 on socket 0 00:05:07.073 EAL: Detected lcore 79 as core 26 on socket 0 00:05:07.073 EAL: Detected lcore 80 as core 27 on socket 0 00:05:07.073 EAL: Detected lcore 81 as core 28 on socket 0 00:05:07.073 EAL: Detected lcore 82 as core 29 on socket 0 00:05:07.073 EAL: Detected lcore 83 as core 30 on socket 0 00:05:07.073 EAL: Detected lcore 84 as core 0 on socket 1 00:05:07.073 EAL: Detected lcore 85 as core 1 on socket 1 00:05:07.073 EAL: Detected lcore 86 as core 2 on socket 1 00:05:07.073 EAL: Detected lcore 87 as core 3 on socket 1 00:05:07.073 EAL: Detected lcore 88 as core 4 on socket 1 00:05:07.073 EAL: Detected lcore 89 as core 5 on socket 1 00:05:07.073 EAL: Detected lcore 90 as core 6 on socket 1 00:05:07.073 EAL: Detected lcore 91 as core 8 on socket 1 00:05:07.073 EAL: Detected lcore 92 as core 9 on socket 1 00:05:07.073 EAL: Detected lcore 93 as core 10 on socket 1 00:05:07.073 EAL: Detected lcore 94 as core 11 on socket 1 00:05:07.073 EAL: Detected lcore 95 as core 12 on socket 1 00:05:07.073 EAL: Detected lcore 96 as core 13 on socket 1 00:05:07.073 EAL: Detected lcore 97 as core 14 on socket 1 00:05:07.073 EAL: Detected lcore 98 as core 16 on socket 1 00:05:07.073 EAL: Detected lcore 99 as core 17 on socket 1 00:05:07.073 EAL: Detected lcore 100 as core 18 on socket 1 00:05:07.073 EAL: Detected lcore 101 as core 19 on socket 1 00:05:07.073 EAL: Detected lcore 102 as core 20 on socket 1 00:05:07.073 EAL: Detected lcore 103 as core 21 on socket 1 00:05:07.073 EAL: Detected lcore 104 as core 22 on socket 1 00:05:07.073 EAL: Detected lcore 105 as core 24 on socket 1 00:05:07.073 EAL: Detected lcore 106 as core 25 on socket 1 00:05:07.073 EAL: Detected lcore 107 as core 26 on socket 1 
00:05:07.073 EAL: Detected lcore 108 as core 27 on socket 1 00:05:07.073 EAL: Detected lcore 109 as core 28 on socket 1 00:05:07.073 EAL: Detected lcore 110 as core 29 on socket 1 00:05:07.073 EAL: Detected lcore 111 as core 30 on socket 1 00:05:07.073 EAL: Maximum logical cores by configuration: 128 00:05:07.073 EAL: Detected CPU lcores: 112 00:05:07.073 EAL: Detected NUMA nodes: 2 00:05:07.074 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:07.074 EAL: Detected shared linkage of DPDK 00:05:07.074 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:07.074 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:07.074 EAL: Registered [vdev] bus. 00:05:07.074 EAL: bus.vdev log level changed from disabled to notice 00:05:07.074 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:07.074 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:07.074 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:07.074 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:07.074 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:07.074 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:07.074 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:07.074 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:07.074 EAL: No shared files mode enabled, IPC will be disabled 00:05:07.074 EAL: No shared files mode enabled, IPC is disabled 00:05:07.074 EAL: Bus pci wants IOVA as 'DC' 00:05:07.074 EAL: Bus vdev wants IOVA as 'DC' 00:05:07.074 EAL: Buses did not request a specific IOVA mode. 00:05:07.074 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:07.074 EAL: Selected IOVA mode 'VA' 00:05:07.074 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.074 EAL: Probing VFIO support... 00:05:07.074 EAL: IOMMU type 1 (Type 1) is supported 00:05:07.074 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:07.074 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:07.074 EAL: VFIO support initialized 00:05:07.074 EAL: Ask a virtual area of 0x2e000 bytes 00:05:07.074 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:07.074 EAL: Setting up physically contiguous memory... 
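Note: the EAL probe above reports no free 2048 kB hugepages on node 1, an available IOMMU (hence IOVA selected as 'VA'), and VFIO type 1 support. A minimal shell sketch for inspecting the same hugepage and IOMMU state on a test node, using only standard procfs/sysfs paths (the per-node counts are runner-specific, so the expected output here is an assumption):

    # 2 MB hugepage pools per NUMA node; node 1 showing 0 free matches the log
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages
    grep -i hugepages /proc/meminfo
    # a non-empty iommu_groups directory is what lets EAL select IOVA mode 'VA'
    ls /sys/kernel/iommu_groups/ | head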
00:05:07.074 EAL: Setting maximum number of open files to 524288 00:05:07.074 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:07.074 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:07.074 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:07.074 EAL: Ask a virtual area of 0x61000 bytes 00:05:07.074 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:07.074 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:07.074 EAL: Ask a virtual area of 0x400000000 bytes 00:05:07.074 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:07.074 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:07.074 EAL: Ask a virtual area of 0x61000 bytes 00:05:07.074 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:07.074 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:07.074 EAL: Ask a virtual area of 0x400000000 bytes 00:05:07.074 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:07.074 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:07.074 EAL: Ask a virtual area of 0x61000 bytes 00:05:07.074 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:07.074 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:07.074 EAL: Ask a virtual area of 0x400000000 bytes 00:05:07.074 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:07.074 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:07.074 EAL: Ask a virtual area of 0x61000 bytes 00:05:07.074 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:07.074 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:07.074 EAL: Ask a virtual area of 0x400000000 bytes 00:05:07.074 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:07.074 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:07.074 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:07.074 EAL: Ask a virtual area of 0x61000 bytes 00:05:07.074 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:07.074 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:07.074 EAL: Ask a virtual area of 0x400000000 bytes 00:05:07.074 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:07.074 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:07.074 EAL: Ask a virtual area of 0x61000 bytes 00:05:07.074 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:07.074 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:07.074 EAL: Ask a virtual area of 0x400000000 bytes 00:05:07.074 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:07.074 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:07.074 EAL: Ask a virtual area of 0x61000 bytes 00:05:07.074 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:07.074 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:07.074 EAL: Ask a virtual area of 0x400000000 bytes 00:05:07.074 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:07.074 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:07.074 EAL: Ask a virtual area of 0x61000 bytes 00:05:07.074 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:07.074 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:07.074 EAL: Ask a virtual area of 0x400000000 bytes 00:05:07.074 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:07.074 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:07.074 EAL: Hugepages will be freed exactly as allocated. 00:05:07.074 EAL: No shared files mode enabled, IPC is disabled 00:05:07.074 EAL: No shared files mode enabled, IPC is disabled 00:05:07.074 EAL: TSC frequency is ~2500000 KHz 00:05:07.074 EAL: Main lcore 0 is ready (tid=7fd30b37ea00;cpuset=[0]) 00:05:07.074 EAL: Trying to obtain current memory policy. 00:05:07.074 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.074 EAL: Restoring previous memory policy: 0 00:05:07.074 EAL: request: mp_malloc_sync 00:05:07.074 EAL: No shared files mode enabled, IPC is disabled 00:05:07.074 EAL: Heap on socket 0 was expanded by 2MB 00:05:07.074 EAL: PCI device 0000:41:00.0 on NUMA socket 0 00:05:07.074 EAL: probe driver: 8086:37d2 net_i40e 00:05:07.074 EAL: Not managed by a supported kernel driver, skipped 00:05:07.074 EAL: PCI device 0000:41:00.1 on NUMA socket 0 00:05:07.074 EAL: probe driver: 8086:37d2 net_i40e 00:05:07.074 EAL: Not managed by a supported kernel driver, skipped 00:05:07.074 EAL: No shared files mode enabled, IPC is disabled 00:05:07.074 EAL: No shared files mode enabled, IPC is disabled 00:05:07.074 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:07.074 EAL: Mem event callback 'spdk:(nil)' registered 00:05:07.074 00:05:07.074 00:05:07.074 CUnit - A unit testing framework for C - Version 2.1-3 00:05:07.074 http://cunit.sourceforge.net/ 00:05:07.074 00:05:07.074 00:05:07.074 Suite: components_suite 00:05:07.074 Test: vtophys_malloc_test ...passed 00:05:07.074 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:07.074 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.074 EAL: Restoring previous memory policy: 4 00:05:07.074 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.074 EAL: request: mp_malloc_sync 00:05:07.074 EAL: No shared files mode enabled, IPC is disabled 00:05:07.074 EAL: Heap on socket 0 was expanded by 4MB 00:05:07.074 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.074 EAL: request: mp_malloc_sync 00:05:07.074 EAL: No shared files mode enabled, IPC is disabled 00:05:07.074 EAL: Heap on socket 0 was shrunk by 4MB 00:05:07.074 EAL: Trying to obtain current memory policy. 00:05:07.074 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.074 EAL: Restoring previous memory policy: 4 00:05:07.074 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.074 EAL: request: mp_malloc_sync 00:05:07.074 EAL: No shared files mode enabled, IPC is disabled 00:05:07.074 EAL: Heap on socket 0 was expanded by 6MB 00:05:07.074 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.074 EAL: request: mp_malloc_sync 00:05:07.074 EAL: No shared files mode enabled, IPC is disabled 00:05:07.074 EAL: Heap on socket 0 was shrunk by 6MB 00:05:07.074 EAL: Trying to obtain current memory policy. 00:05:07.074 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.074 EAL: Restoring previous memory policy: 4 00:05:07.074 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.074 EAL: request: mp_malloc_sync 00:05:07.074 EAL: No shared files mode enabled, IPC is disabled 00:05:07.074 EAL: Heap on socket 0 was expanded by 10MB 00:05:07.074 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.074 EAL: request: mp_malloc_sync 00:05:07.074 EAL: No shared files mode enabled, IPC is disabled 00:05:07.074 EAL: Heap on socket 0 was shrunk by 10MB 00:05:07.074 EAL: Trying to obtain current memory policy. 
00:05:07.074 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.074 EAL: Restoring previous memory policy: 4 00:05:07.074 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.074 EAL: request: mp_malloc_sync 00:05:07.074 EAL: No shared files mode enabled, IPC is disabled 00:05:07.074 EAL: Heap on socket 0 was expanded by 18MB 00:05:07.074 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.074 EAL: request: mp_malloc_sync 00:05:07.074 EAL: No shared files mode enabled, IPC is disabled 00:05:07.074 EAL: Heap on socket 0 was shrunk by 18MB 00:05:07.074 EAL: Trying to obtain current memory policy. 00:05:07.074 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.074 EAL: Restoring previous memory policy: 4 00:05:07.074 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.074 EAL: request: mp_malloc_sync 00:05:07.074 EAL: No shared files mode enabled, IPC is disabled 00:05:07.074 EAL: Heap on socket 0 was expanded by 34MB 00:05:07.074 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.074 EAL: request: mp_malloc_sync 00:05:07.074 EAL: No shared files mode enabled, IPC is disabled 00:05:07.074 EAL: Heap on socket 0 was shrunk by 34MB 00:05:07.074 EAL: Trying to obtain current memory policy. 00:05:07.074 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.074 EAL: Restoring previous memory policy: 4 00:05:07.074 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.074 EAL: request: mp_malloc_sync 00:05:07.074 EAL: No shared files mode enabled, IPC is disabled 00:05:07.074 EAL: Heap on socket 0 was expanded by 66MB 00:05:07.074 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.074 EAL: request: mp_malloc_sync 00:05:07.074 EAL: No shared files mode enabled, IPC is disabled 00:05:07.074 EAL: Heap on socket 0 was shrunk by 66MB 00:05:07.074 EAL: Trying to obtain current memory policy. 00:05:07.074 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.074 EAL: Restoring previous memory policy: 4 00:05:07.074 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.074 EAL: request: mp_malloc_sync 00:05:07.074 EAL: No shared files mode enabled, IPC is disabled 00:05:07.074 EAL: Heap on socket 0 was expanded by 130MB 00:05:07.074 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.074 EAL: request: mp_malloc_sync 00:05:07.074 EAL: No shared files mode enabled, IPC is disabled 00:05:07.074 EAL: Heap on socket 0 was shrunk by 130MB 00:05:07.074 EAL: Trying to obtain current memory policy. 00:05:07.074 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.333 EAL: Restoring previous memory policy: 4 00:05:07.333 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.333 EAL: request: mp_malloc_sync 00:05:07.333 EAL: No shared files mode enabled, IPC is disabled 00:05:07.333 EAL: Heap on socket 0 was expanded by 258MB 00:05:07.333 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.333 EAL: request: mp_malloc_sync 00:05:07.333 EAL: No shared files mode enabled, IPC is disabled 00:05:07.333 EAL: Heap on socket 0 was shrunk by 258MB 00:05:07.333 EAL: Trying to obtain current memory policy. 
00:05:07.333 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.333 EAL: Restoring previous memory policy: 4 00:05:07.333 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.333 EAL: request: mp_malloc_sync 00:05:07.333 EAL: No shared files mode enabled, IPC is disabled 00:05:07.333 EAL: Heap on socket 0 was expanded by 514MB 00:05:07.591 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.591 EAL: request: mp_malloc_sync 00:05:07.591 EAL: No shared files mode enabled, IPC is disabled 00:05:07.591 EAL: Heap on socket 0 was shrunk by 514MB 00:05:07.591 EAL: Trying to obtain current memory policy. 00:05:07.591 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.850 EAL: Restoring previous memory policy: 4 00:05:07.850 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.850 EAL: request: mp_malloc_sync 00:05:07.850 EAL: No shared files mode enabled, IPC is disabled 00:05:07.850 EAL: Heap on socket 0 was expanded by 1026MB 00:05:07.850 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.109 EAL: request: mp_malloc_sync 00:05:08.109 EAL: No shared files mode enabled, IPC is disabled 00:05:08.109 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:08.109 passed 00:05:08.109 00:05:08.109 Run Summary: Type Total Ran Passed Failed Inactive 00:05:08.109 suites 1 1 n/a 0 0 00:05:08.109 tests 2 2 2 0 0 00:05:08.109 asserts 497 497 497 0 n/a 00:05:08.109 00:05:08.109 Elapsed time = 0.962 seconds 00:05:08.109 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.109 EAL: request: mp_malloc_sync 00:05:08.109 EAL: No shared files mode enabled, IPC is disabled 00:05:08.109 EAL: Heap on socket 0 was shrunk by 2MB 00:05:08.109 EAL: No shared files mode enabled, IPC is disabled 00:05:08.109 EAL: No shared files mode enabled, IPC is disabled 00:05:08.109 EAL: No shared files mode enabled, IPC is disabled 00:05:08.109 00:05:08.109 real 0m1.109s 00:05:08.109 user 0m0.624s 00:05:08.109 sys 0m0.454s 00:05:08.109 11:28:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.109 11:28:37 -- common/autotest_common.sh@10 -- # set +x 00:05:08.109 ************************************ 00:05:08.109 END TEST env_vtophys 00:05:08.109 ************************************ 00:05:08.109 11:28:37 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:08.109 11:28:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:08.109 11:28:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:08.109 11:28:37 -- common/autotest_common.sh@10 -- # set +x 00:05:08.109 ************************************ 00:05:08.109 START TEST env_pci 00:05:08.109 ************************************ 00:05:08.109 11:28:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:08.109 00:05:08.109 00:05:08.109 CUnit - A unit testing framework for C - Version 2.1-3 00:05:08.109 http://cunit.sourceforge.net/ 00:05:08.109 00:05:08.109 00:05:08.109 Suite: pci 00:05:08.109 Test: pci_hook ...[2024-07-21 11:28:37.432045] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2157712 has claimed it 00:05:08.109 EAL: Cannot find device (10000:00:01.0) 00:05:08.109 EAL: Failed to attach device on primary process 00:05:08.109 passed 00:05:08.109 00:05:08.109 Run Summary: Type Total Ran Passed Failed Inactive 00:05:08.109 suites 1 1 n/a 0 0 00:05:08.109 tests 1 1 1 0 0 00:05:08.109 asserts 
25 25 25 0 n/a 00:05:08.109 00:05:08.109 Elapsed time = 0.043 seconds 00:05:08.109 00:05:08.109 real 0m0.066s 00:05:08.109 user 0m0.014s 00:05:08.109 sys 0m0.051s 00:05:08.109 11:28:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.109 11:28:37 -- common/autotest_common.sh@10 -- # set +x 00:05:08.109 ************************************ 00:05:08.109 END TEST env_pci 00:05:08.109 ************************************ 00:05:08.109 11:28:37 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:08.110 11:28:37 -- env/env.sh@15 -- # uname 00:05:08.110 11:28:37 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:08.110 11:28:37 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:08.110 11:28:37 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:08.110 11:28:37 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:08.110 11:28:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:08.110 11:28:37 -- common/autotest_common.sh@10 -- # set +x 00:05:08.110 ************************************ 00:05:08.110 START TEST env_dpdk_post_init 00:05:08.110 ************************************ 00:05:08.110 11:28:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:08.388 EAL: Detected CPU lcores: 112 00:05:08.388 EAL: Detected NUMA nodes: 2 00:05:08.388 EAL: Detected shared linkage of DPDK 00:05:08.388 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:08.388 EAL: Selected IOVA mode 'VA' 00:05:08.388 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.388 EAL: VFIO support initialized 00:05:08.388 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:08.388 EAL: Using IOMMU type 1 (Type 1) 00:05:08.388 EAL: Ignore mapping IO port bar(1) 00:05:08.388 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:08.388 EAL: Ignore mapping IO port bar(1) 00:05:08.388 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:08.388 EAL: Ignore mapping IO port bar(1) 00:05:08.388 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:08.388 EAL: Ignore mapping IO port bar(1) 00:05:08.388 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:08.388 EAL: Ignore mapping IO port bar(1) 00:05:08.388 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:08.388 EAL: Ignore mapping IO port bar(1) 00:05:08.388 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:08.388 EAL: Ignore mapping IO port bar(1) 00:05:08.388 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:08.388 EAL: Ignore mapping IO port bar(1) 00:05:08.388 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:08.388 EAL: Ignore mapping IO port bar(1) 00:05:08.388 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:08.388 EAL: Ignore mapping IO port bar(1) 00:05:08.388 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:08.388 EAL: Ignore mapping IO port bar(1) 00:05:08.388 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:08.646 EAL: Ignore mapping IO port bar(1) 00:05:08.646 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 
0000:80:04.3 (socket 1) 00:05:08.646 EAL: Ignore mapping IO port bar(1) 00:05:08.646 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:08.646 EAL: Ignore mapping IO port bar(1) 00:05:08.646 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:08.646 EAL: Ignore mapping IO port bar(1) 00:05:08.646 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:08.646 EAL: Ignore mapping IO port bar(1) 00:05:08.646 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:09.212 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:05:13.398 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:05:13.398 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:05:13.656 Starting DPDK initialization... 00:05:13.656 Starting SPDK post initialization... 00:05:13.656 SPDK NVMe probe 00:05:13.656 Attaching to 0000:d8:00.0 00:05:13.656 Attached to 0000:d8:00.0 00:05:13.656 Cleaning up... 00:05:13.656 00:05:13.656 real 0m5.356s 00:05:13.656 user 0m3.969s 00:05:13.656 sys 0m0.441s 00:05:13.656 11:28:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.656 11:28:42 -- common/autotest_common.sh@10 -- # set +x 00:05:13.656 ************************************ 00:05:13.656 END TEST env_dpdk_post_init 00:05:13.656 ************************************ 00:05:13.656 11:28:42 -- env/env.sh@26 -- # uname 00:05:13.656 11:28:42 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:13.656 11:28:42 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:13.656 11:28:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:13.656 11:28:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:13.656 11:28:42 -- common/autotest_common.sh@10 -- # set +x 00:05:13.656 ************************************ 00:05:13.656 START TEST env_mem_callbacks 00:05:13.656 ************************************ 00:05:13.656 11:28:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:13.656 EAL: Detected CPU lcores: 112 00:05:13.656 EAL: Detected NUMA nodes: 2 00:05:13.656 EAL: Detected shared linkage of DPDK 00:05:13.656 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:13.656 EAL: Selected IOVA mode 'VA' 00:05:13.656 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.656 EAL: VFIO support initialized 00:05:13.656 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:13.656 00:05:13.656 00:05:13.656 CUnit - A unit testing framework for C - Version 2.1-3 00:05:13.656 http://cunit.sourceforge.net/ 00:05:13.656 00:05:13.656 00:05:13.656 Suite: memory 00:05:13.656 Test: test ... 
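Note: the register/unregister trace below is SPDK's DPDK mem-event callback firing as the heap grows and shrinks under malloc; each "register <vaddr> <len>" line adds a region to SPDK's memory maps, and each "buf ... PASSED" line is the test validating a lookup inside that region. A sketch of reproducing the trace by hand, with the binary path taken from the log (running it standalone under sudo for hugepage access is an assumption about the node's setup):

    cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
    sudo ./test/env/mem_callbacks/mem_callbacks   # prints the same event trace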
00:05:13.656 register 0x200000200000 2097152
00:05:13.656 malloc 3145728
00:05:13.656 register 0x200000400000 4194304
00:05:13.656 buf 0x200000500000 len 3145728 PASSED
00:05:13.656 malloc 64
00:05:13.656 buf 0x2000004fff40 len 64 PASSED
00:05:13.656 malloc 4194304
00:05:13.656 register 0x200000800000 6291456
00:05:13.656 buf 0x200000a00000 len 4194304 PASSED
00:05:13.656 free 0x200000500000 3145728
00:05:13.656 free 0x2000004fff40 64
00:05:13.656 unregister 0x200000400000 4194304 PASSED
00:05:13.656 free 0x200000a00000 4194304
00:05:13.656 unregister 0x200000800000 6291456 PASSED
00:05:13.656 malloc 8388608
00:05:13.656 register 0x200000400000 10485760
00:05:13.656 buf 0x200000600000 len 8388608 PASSED
00:05:13.656 free 0x200000600000 8388608
00:05:13.656 unregister 0x200000400000 10485760 PASSED
00:05:13.656 passed
00:05:13.656
00:05:13.656 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:13.656               suites      1      1    n/a      0        0
00:05:13.656                tests      1      1      1      0        0
00:05:13.656              asserts     15     15     15      0      n/a
00:05:13.656
00:05:13.656 Elapsed time = 0.004 seconds
00:05:13.656
00:05:13.656 real 0m0.073s
00:05:13.656 user 0m0.024s
00:05:13.656 sys 0m0.049s
00:05:13.656 11:28:43 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:13.656 11:28:43 -- common/autotest_common.sh@10 -- # set +x
00:05:13.656 ************************************
00:05:13.656 END TEST env_mem_callbacks
00:05:13.656 ************************************
00:05:13.656
00:05:13.656 real 0m7.112s
00:05:13.656 user 0m4.907s
00:05:13.656 sys 0m1.278s
00:05:13.656 11:28:43 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:13.656 11:28:43 -- common/autotest_common.sh@10 -- # set +x
00:05:13.656 ************************************
00:05:13.656 END TEST env
00:05:13.656 ************************************
00:05:13.915 11:28:43 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh
00:05:13.915 11:28:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:13.915 11:28:43 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:13.915 11:28:43 -- common/autotest_common.sh@10 -- # set +x
00:05:13.915 ************************************
00:05:13.915 START TEST rpc
00:05:13.915 ************************************
00:05:13.915 11:28:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh
00:05:13.915 * Looking for test storage...
00:05:13.915 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc
00:05:13.915 11:28:43 -- rpc/rpc.sh@65 -- # spdk_pid=2161058
00:05:13.915 11:28:43 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:13.915 11:28:43 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:05:13.915 11:28:43 -- rpc/rpc.sh@67 -- # waitforlisten 2161058
00:05:13.915 11:28:43 -- common/autotest_common.sh@819 -- # '[' -z 2161058 ']'
00:05:13.915 11:28:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:13.915 11:28:43 -- common/autotest_common.sh@824 -- # local max_retries=100
00:05:13.915 11:28:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
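Note: rpc.sh above backgrounds spdk_tgt with the bdev tracepoint group enabled, then blocks in waitforlisten until the JSON-RPC socket answers. The same start-and-wait pattern can be sketched in plain shell (paths from the log; polling rpc_get_methods as the liveness probe and the default /var/tmp/spdk.sock socket are assumptions consistent with stock SPDK):

    ./build/bin/spdk_tgt -e bdev &
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
    # once listening, the integrity tests issue calls like:
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 8 512
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs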
00:05:13.915 11:28:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:13.915 11:28:43 -- common/autotest_common.sh@10 -- # set +x 00:05:13.915 [2024-07-21 11:28:43.255814] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:13.915 [2024-07-21 11:28:43.255873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2161058 ] 00:05:13.915 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.174 [2024-07-21 11:28:43.341833] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.174 [2024-07-21 11:28:43.378158] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:14.174 [2024-07-21 11:28:43.378284] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:14.174 [2024-07-21 11:28:43.378295] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2161058' to capture a snapshot of events at runtime. 00:05:14.174 [2024-07-21 11:28:43.378304] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2161058 for offline analysis/debug. 00:05:14.174 [2024-07-21 11:28:43.378326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.740 11:28:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:14.740 11:28:44 -- common/autotest_common.sh@852 -- # return 0 00:05:14.740 11:28:44 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:14.740 11:28:44 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:14.740 11:28:44 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:14.740 11:28:44 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:14.740 11:28:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:14.740 11:28:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:14.740 11:28:44 -- common/autotest_common.sh@10 -- # set +x 00:05:14.740 ************************************ 00:05:14.740 START TEST rpc_integrity 00:05:14.740 ************************************ 00:05:14.740 11:28:44 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:14.740 11:28:44 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:14.740 11:28:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:14.740 11:28:44 -- common/autotest_common.sh@10 -- # set +x 00:05:14.740 11:28:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:14.740 11:28:44 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:14.740 11:28:44 -- rpc/rpc.sh@13 -- # jq length 00:05:14.740 11:28:44 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:14.740 11:28:44 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:14.740 11:28:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:14.740 11:28:44 -- common/autotest_common.sh@10 -- # set +x 00:05:14.740 11:28:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:14.740 11:28:44 -- 
rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:14.740 11:28:44 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:14.740 11:28:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:14.740 11:28:44 -- common/autotest_common.sh@10 -- # set +x 00:05:14.740 11:28:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:14.740 11:28:44 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:14.740 { 00:05:14.740 "name": "Malloc0", 00:05:14.740 "aliases": [ 00:05:14.740 "5efe10e0-293c-4d15-a933-488ce408f35f" 00:05:14.740 ], 00:05:14.740 "product_name": "Malloc disk", 00:05:14.740 "block_size": 512, 00:05:14.740 "num_blocks": 16384, 00:05:14.740 "uuid": "5efe10e0-293c-4d15-a933-488ce408f35f", 00:05:14.740 "assigned_rate_limits": { 00:05:14.740 "rw_ios_per_sec": 0, 00:05:14.740 "rw_mbytes_per_sec": 0, 00:05:14.740 "r_mbytes_per_sec": 0, 00:05:14.740 "w_mbytes_per_sec": 0 00:05:14.740 }, 00:05:14.740 "claimed": false, 00:05:14.740 "zoned": false, 00:05:14.740 "supported_io_types": { 00:05:14.740 "read": true, 00:05:14.740 "write": true, 00:05:14.740 "unmap": true, 00:05:14.740 "write_zeroes": true, 00:05:14.740 "flush": true, 00:05:14.740 "reset": true, 00:05:14.740 "compare": false, 00:05:14.740 "compare_and_write": false, 00:05:14.740 "abort": true, 00:05:14.740 "nvme_admin": false, 00:05:14.740 "nvme_io": false 00:05:14.740 }, 00:05:14.740 "memory_domains": [ 00:05:14.740 { 00:05:14.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.740 "dma_device_type": 2 00:05:14.740 } 00:05:14.740 ], 00:05:14.740 "driver_specific": {} 00:05:14.740 } 00:05:14.740 ]' 00:05:14.740 11:28:44 -- rpc/rpc.sh@17 -- # jq length 00:05:14.997 11:28:44 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:14.997 11:28:44 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:14.997 11:28:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:14.997 11:28:44 -- common/autotest_common.sh@10 -- # set +x 00:05:14.997 [2024-07-21 11:28:44.190448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:14.997 [2024-07-21 11:28:44.190482] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:14.997 [2024-07-21 11:28:44.190494] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19d0e30 00:05:14.997 [2024-07-21 11:28:44.190503] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:14.997 [2024-07-21 11:28:44.191508] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:14.997 [2024-07-21 11:28:44.191530] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:14.997 Passthru0 00:05:14.997 11:28:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:14.997 11:28:44 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:14.997 11:28:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:14.997 11:28:44 -- common/autotest_common.sh@10 -- # set +x 00:05:14.997 11:28:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:14.997 11:28:44 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:14.997 { 00:05:14.997 "name": "Malloc0", 00:05:14.997 "aliases": [ 00:05:14.997 "5efe10e0-293c-4d15-a933-488ce408f35f" 00:05:14.997 ], 00:05:14.997 "product_name": "Malloc disk", 00:05:14.997 "block_size": 512, 00:05:14.997 "num_blocks": 16384, 00:05:14.997 "uuid": "5efe10e0-293c-4d15-a933-488ce408f35f", 00:05:14.997 "assigned_rate_limits": { 00:05:14.997 "rw_ios_per_sec": 0, 00:05:14.997 "rw_mbytes_per_sec": 0, 00:05:14.997 "r_mbytes_per_sec": 0, 00:05:14.997 
"w_mbytes_per_sec": 0 00:05:14.997 }, 00:05:14.997 "claimed": true, 00:05:14.997 "claim_type": "exclusive_write", 00:05:14.997 "zoned": false, 00:05:14.997 "supported_io_types": { 00:05:14.997 "read": true, 00:05:14.997 "write": true, 00:05:14.997 "unmap": true, 00:05:14.997 "write_zeroes": true, 00:05:14.997 "flush": true, 00:05:14.997 "reset": true, 00:05:14.997 "compare": false, 00:05:14.997 "compare_and_write": false, 00:05:14.997 "abort": true, 00:05:14.997 "nvme_admin": false, 00:05:14.997 "nvme_io": false 00:05:14.997 }, 00:05:14.997 "memory_domains": [ 00:05:14.997 { 00:05:14.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.997 "dma_device_type": 2 00:05:14.997 } 00:05:14.997 ], 00:05:14.997 "driver_specific": {} 00:05:14.997 }, 00:05:14.997 { 00:05:14.997 "name": "Passthru0", 00:05:14.997 "aliases": [ 00:05:14.997 "1136b8b0-f5f0-540f-af42-d79b04db5125" 00:05:14.997 ], 00:05:14.997 "product_name": "passthru", 00:05:14.997 "block_size": 512, 00:05:14.997 "num_blocks": 16384, 00:05:14.997 "uuid": "1136b8b0-f5f0-540f-af42-d79b04db5125", 00:05:14.997 "assigned_rate_limits": { 00:05:14.997 "rw_ios_per_sec": 0, 00:05:14.997 "rw_mbytes_per_sec": 0, 00:05:14.997 "r_mbytes_per_sec": 0, 00:05:14.997 "w_mbytes_per_sec": 0 00:05:14.997 }, 00:05:14.997 "claimed": false, 00:05:14.997 "zoned": false, 00:05:14.997 "supported_io_types": { 00:05:14.997 "read": true, 00:05:14.997 "write": true, 00:05:14.997 "unmap": true, 00:05:14.997 "write_zeroes": true, 00:05:14.997 "flush": true, 00:05:14.997 "reset": true, 00:05:14.997 "compare": false, 00:05:14.997 "compare_and_write": false, 00:05:14.997 "abort": true, 00:05:14.997 "nvme_admin": false, 00:05:14.997 "nvme_io": false 00:05:14.997 }, 00:05:14.997 "memory_domains": [ 00:05:14.997 { 00:05:14.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.997 "dma_device_type": 2 00:05:14.997 } 00:05:14.997 ], 00:05:14.997 "driver_specific": { 00:05:14.997 "passthru": { 00:05:14.997 "name": "Passthru0", 00:05:14.997 "base_bdev_name": "Malloc0" 00:05:14.997 } 00:05:14.997 } 00:05:14.997 } 00:05:14.997 ]' 00:05:14.997 11:28:44 -- rpc/rpc.sh@21 -- # jq length 00:05:14.997 11:28:44 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:14.997 11:28:44 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:14.997 11:28:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:14.997 11:28:44 -- common/autotest_common.sh@10 -- # set +x 00:05:14.997 11:28:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:14.997 11:28:44 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:14.997 11:28:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:14.997 11:28:44 -- common/autotest_common.sh@10 -- # set +x 00:05:14.997 11:28:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:14.997 11:28:44 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:14.997 11:28:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:14.997 11:28:44 -- common/autotest_common.sh@10 -- # set +x 00:05:14.997 11:28:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:14.997 11:28:44 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:14.997 11:28:44 -- rpc/rpc.sh@26 -- # jq length 00:05:14.997 11:28:44 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:14.997 00:05:14.997 real 0m0.293s 00:05:14.997 user 0m0.174s 00:05:14.997 sys 0m0.054s 00:05:14.997 11:28:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.997 11:28:44 -- common/autotest_common.sh@10 -- # set +x 00:05:14.997 ************************************ 00:05:14.997 END TEST rpc_integrity 
00:05:14.997 ************************************ 00:05:14.997 11:28:44 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:14.997 11:28:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:14.997 11:28:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:14.997 11:28:44 -- common/autotest_common.sh@10 -- # set +x 00:05:14.997 ************************************ 00:05:14.997 START TEST rpc_plugins 00:05:14.997 ************************************ 00:05:14.997 11:28:44 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:05:14.997 11:28:44 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:14.997 11:28:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:14.997 11:28:44 -- common/autotest_common.sh@10 -- # set +x 00:05:14.997 11:28:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:14.997 11:28:44 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:14.997 11:28:44 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:14.997 11:28:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:14.997 11:28:44 -- common/autotest_common.sh@10 -- # set +x 00:05:15.255 11:28:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:15.255 11:28:44 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:15.255 { 00:05:15.255 "name": "Malloc1", 00:05:15.255 "aliases": [ 00:05:15.255 "969cb2d2-929d-4794-aa5d-4b8c045cb452" 00:05:15.255 ], 00:05:15.255 "product_name": "Malloc disk", 00:05:15.255 "block_size": 4096, 00:05:15.255 "num_blocks": 256, 00:05:15.255 "uuid": "969cb2d2-929d-4794-aa5d-4b8c045cb452", 00:05:15.255 "assigned_rate_limits": { 00:05:15.255 "rw_ios_per_sec": 0, 00:05:15.255 "rw_mbytes_per_sec": 0, 00:05:15.255 "r_mbytes_per_sec": 0, 00:05:15.255 "w_mbytes_per_sec": 0 00:05:15.255 }, 00:05:15.255 "claimed": false, 00:05:15.255 "zoned": false, 00:05:15.255 "supported_io_types": { 00:05:15.255 "read": true, 00:05:15.255 "write": true, 00:05:15.255 "unmap": true, 00:05:15.255 "write_zeroes": true, 00:05:15.255 "flush": true, 00:05:15.255 "reset": true, 00:05:15.255 "compare": false, 00:05:15.255 "compare_and_write": false, 00:05:15.255 "abort": true, 00:05:15.255 "nvme_admin": false, 00:05:15.255 "nvme_io": false 00:05:15.255 }, 00:05:15.255 "memory_domains": [ 00:05:15.255 { 00:05:15.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.255 "dma_device_type": 2 00:05:15.255 } 00:05:15.255 ], 00:05:15.255 "driver_specific": {} 00:05:15.255 } 00:05:15.255 ]' 00:05:15.255 11:28:44 -- rpc/rpc.sh@32 -- # jq length 00:05:15.255 11:28:44 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:15.255 11:28:44 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:15.255 11:28:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:15.255 11:28:44 -- common/autotest_common.sh@10 -- # set +x 00:05:15.255 11:28:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:15.255 11:28:44 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:15.255 11:28:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:15.255 11:28:44 -- common/autotest_common.sh@10 -- # set +x 00:05:15.255 11:28:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:15.255 11:28:44 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:15.255 11:28:44 -- rpc/rpc.sh@36 -- # jq length 00:05:15.255 11:28:44 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:15.255 00:05:15.255 real 0m0.127s 00:05:15.255 user 0m0.071s 00:05:15.255 sys 0m0.022s 00:05:15.255 11:28:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.255 11:28:44 -- common/autotest_common.sh@10 -- # set +x 
00:05:15.255 ************************************ 00:05:15.255 END TEST rpc_plugins 00:05:15.255 ************************************ 00:05:15.255 11:28:44 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:15.255 11:28:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:15.255 11:28:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:15.255 11:28:44 -- common/autotest_common.sh@10 -- # set +x 00:05:15.255 ************************************ 00:05:15.255 START TEST rpc_trace_cmd_test 00:05:15.255 ************************************ 00:05:15.255 11:28:44 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:05:15.255 11:28:44 -- rpc/rpc.sh@40 -- # local info 00:05:15.255 11:28:44 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:15.255 11:28:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:15.255 11:28:44 -- common/autotest_common.sh@10 -- # set +x 00:05:15.255 11:28:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:15.255 11:28:44 -- rpc/rpc.sh@42 -- # info='{ 00:05:15.255 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2161058", 00:05:15.255 "tpoint_group_mask": "0x8", 00:05:15.255 "iscsi_conn": { 00:05:15.255 "mask": "0x2", 00:05:15.255 "tpoint_mask": "0x0" 00:05:15.255 }, 00:05:15.255 "scsi": { 00:05:15.255 "mask": "0x4", 00:05:15.255 "tpoint_mask": "0x0" 00:05:15.255 }, 00:05:15.255 "bdev": { 00:05:15.255 "mask": "0x8", 00:05:15.255 "tpoint_mask": "0xffffffffffffffff" 00:05:15.255 }, 00:05:15.255 "nvmf_rdma": { 00:05:15.255 "mask": "0x10", 00:05:15.255 "tpoint_mask": "0x0" 00:05:15.255 }, 00:05:15.255 "nvmf_tcp": { 00:05:15.255 "mask": "0x20", 00:05:15.255 "tpoint_mask": "0x0" 00:05:15.255 }, 00:05:15.255 "ftl": { 00:05:15.255 "mask": "0x40", 00:05:15.255 "tpoint_mask": "0x0" 00:05:15.255 }, 00:05:15.255 "blobfs": { 00:05:15.255 "mask": "0x80", 00:05:15.255 "tpoint_mask": "0x0" 00:05:15.255 }, 00:05:15.255 "dsa": { 00:05:15.255 "mask": "0x200", 00:05:15.255 "tpoint_mask": "0x0" 00:05:15.255 }, 00:05:15.255 "thread": { 00:05:15.255 "mask": "0x400", 00:05:15.255 "tpoint_mask": "0x0" 00:05:15.255 }, 00:05:15.255 "nvme_pcie": { 00:05:15.255 "mask": "0x800", 00:05:15.255 "tpoint_mask": "0x0" 00:05:15.255 }, 00:05:15.255 "iaa": { 00:05:15.255 "mask": "0x1000", 00:05:15.255 "tpoint_mask": "0x0" 00:05:15.255 }, 00:05:15.255 "nvme_tcp": { 00:05:15.255 "mask": "0x2000", 00:05:15.255 "tpoint_mask": "0x0" 00:05:15.255 }, 00:05:15.255 "bdev_nvme": { 00:05:15.255 "mask": "0x4000", 00:05:15.255 "tpoint_mask": "0x0" 00:05:15.255 } 00:05:15.255 }' 00:05:15.255 11:28:44 -- rpc/rpc.sh@43 -- # jq length 00:05:15.255 11:28:44 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:15.255 11:28:44 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:15.255 11:28:44 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:15.513 11:28:44 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:15.513 11:28:44 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:15.513 11:28:44 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:15.513 11:28:44 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:15.513 11:28:44 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:15.513 11:28:44 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:15.513 00:05:15.513 real 0m0.230s 00:05:15.513 user 0m0.184s 00:05:15.513 sys 0m0.039s 00:05:15.513 11:28:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.513 11:28:44 -- common/autotest_common.sh@10 -- # set +x 00:05:15.513 ************************************ 00:05:15.513 END TEST rpc_trace_cmd_test 
00:05:15.513 ************************************ 00:05:15.513 11:28:44 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:15.513 11:28:44 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:15.513 11:28:44 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:15.513 11:28:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:15.513 11:28:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:15.513 11:28:44 -- common/autotest_common.sh@10 -- # set +x 00:05:15.513 ************************************ 00:05:15.513 START TEST rpc_daemon_integrity 00:05:15.513 ************************************ 00:05:15.513 11:28:44 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:15.513 11:28:44 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:15.513 11:28:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:15.513 11:28:44 -- common/autotest_common.sh@10 -- # set +x 00:05:15.513 11:28:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:15.513 11:28:44 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:15.513 11:28:44 -- rpc/rpc.sh@13 -- # jq length 00:05:15.513 11:28:44 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:15.513 11:28:44 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:15.513 11:28:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:15.513 11:28:44 -- common/autotest_common.sh@10 -- # set +x 00:05:15.513 11:28:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:15.513 11:28:44 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:15.513 11:28:44 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:15.513 11:28:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:15.513 11:28:44 -- common/autotest_common.sh@10 -- # set +x 00:05:15.513 11:28:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:15.771 11:28:44 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:15.771 { 00:05:15.771 "name": "Malloc2", 00:05:15.771 "aliases": [ 00:05:15.771 "d55b3821-465f-4aa5-bf62-ab9685ec781f" 00:05:15.771 ], 00:05:15.771 "product_name": "Malloc disk", 00:05:15.771 "block_size": 512, 00:05:15.771 "num_blocks": 16384, 00:05:15.771 "uuid": "d55b3821-465f-4aa5-bf62-ab9685ec781f", 00:05:15.771 "assigned_rate_limits": { 00:05:15.771 "rw_ios_per_sec": 0, 00:05:15.771 "rw_mbytes_per_sec": 0, 00:05:15.771 "r_mbytes_per_sec": 0, 00:05:15.771 "w_mbytes_per_sec": 0 00:05:15.771 }, 00:05:15.771 "claimed": false, 00:05:15.771 "zoned": false, 00:05:15.771 "supported_io_types": { 00:05:15.771 "read": true, 00:05:15.771 "write": true, 00:05:15.771 "unmap": true, 00:05:15.771 "write_zeroes": true, 00:05:15.771 "flush": true, 00:05:15.771 "reset": true, 00:05:15.771 "compare": false, 00:05:15.771 "compare_and_write": false, 00:05:15.771 "abort": true, 00:05:15.771 "nvme_admin": false, 00:05:15.771 "nvme_io": false 00:05:15.771 }, 00:05:15.771 "memory_domains": [ 00:05:15.771 { 00:05:15.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.771 "dma_device_type": 2 00:05:15.771 } 00:05:15.771 ], 00:05:15.771 "driver_specific": {} 00:05:15.771 } 00:05:15.771 ]' 00:05:15.771 11:28:44 -- rpc/rpc.sh@17 -- # jq length 00:05:15.771 11:28:44 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:15.771 11:28:44 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:15.771 11:28:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:15.771 11:28:44 -- common/autotest_common.sh@10 -- # set +x 00:05:15.771 [2024-07-21 11:28:44.964555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:15.771 [2024-07-21 11:28:44.964587] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:15.771 [2024-07-21 11:28:44.964602] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19d26b0 00:05:15.771 [2024-07-21 11:28:44.964611] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:15.771 [2024-07-21 11:28:44.965519] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:15.771 [2024-07-21 11:28:44.965542] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:15.771 Passthru0 00:05:15.771 11:28:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:15.771 11:28:44 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:15.771 11:28:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:15.771 11:28:44 -- common/autotest_common.sh@10 -- # set +x 00:05:15.771 11:28:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:15.771 11:28:44 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:15.771 { 00:05:15.771 "name": "Malloc2", 00:05:15.771 "aliases": [ 00:05:15.771 "d55b3821-465f-4aa5-bf62-ab9685ec781f" 00:05:15.771 ], 00:05:15.771 "product_name": "Malloc disk", 00:05:15.771 "block_size": 512, 00:05:15.771 "num_blocks": 16384, 00:05:15.771 "uuid": "d55b3821-465f-4aa5-bf62-ab9685ec781f", 00:05:15.771 "assigned_rate_limits": { 00:05:15.771 "rw_ios_per_sec": 0, 00:05:15.771 "rw_mbytes_per_sec": 0, 00:05:15.771 "r_mbytes_per_sec": 0, 00:05:15.771 "w_mbytes_per_sec": 0 00:05:15.771 }, 00:05:15.771 "claimed": true, 00:05:15.771 "claim_type": "exclusive_write", 00:05:15.771 "zoned": false, 00:05:15.771 "supported_io_types": { 00:05:15.771 "read": true, 00:05:15.771 "write": true, 00:05:15.771 "unmap": true, 00:05:15.771 "write_zeroes": true, 00:05:15.771 "flush": true, 00:05:15.771 "reset": true, 00:05:15.771 "compare": false, 00:05:15.771 "compare_and_write": false, 00:05:15.771 "abort": true, 00:05:15.771 "nvme_admin": false, 00:05:15.771 "nvme_io": false 00:05:15.771 }, 00:05:15.771 "memory_domains": [ 00:05:15.771 { 00:05:15.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.771 "dma_device_type": 2 00:05:15.771 } 00:05:15.771 ], 00:05:15.771 "driver_specific": {} 00:05:15.771 }, 00:05:15.771 { 00:05:15.771 "name": "Passthru0", 00:05:15.771 "aliases": [ 00:05:15.771 "1029fa37-f20e-52d0-9880-f08499da5fb1" 00:05:15.771 ], 00:05:15.771 "product_name": "passthru", 00:05:15.771 "block_size": 512, 00:05:15.771 "num_blocks": 16384, 00:05:15.771 "uuid": "1029fa37-f20e-52d0-9880-f08499da5fb1", 00:05:15.771 "assigned_rate_limits": { 00:05:15.771 "rw_ios_per_sec": 0, 00:05:15.771 "rw_mbytes_per_sec": 0, 00:05:15.771 "r_mbytes_per_sec": 0, 00:05:15.771 "w_mbytes_per_sec": 0 00:05:15.771 }, 00:05:15.771 "claimed": false, 00:05:15.771 "zoned": false, 00:05:15.771 "supported_io_types": { 00:05:15.771 "read": true, 00:05:15.771 "write": true, 00:05:15.771 "unmap": true, 00:05:15.771 "write_zeroes": true, 00:05:15.771 "flush": true, 00:05:15.771 "reset": true, 00:05:15.771 "compare": false, 00:05:15.771 "compare_and_write": false, 00:05:15.771 "abort": true, 00:05:15.771 "nvme_admin": false, 00:05:15.771 "nvme_io": false 00:05:15.771 }, 00:05:15.771 "memory_domains": [ 00:05:15.771 { 00:05:15.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.771 "dma_device_type": 2 00:05:15.771 } 00:05:15.771 ], 00:05:15.771 "driver_specific": { 00:05:15.771 "passthru": { 00:05:15.771 "name": "Passthru0", 00:05:15.771 "base_bdev_name": "Malloc2" 00:05:15.771 } 00:05:15.771 } 00:05:15.771 } 00:05:15.771 ]' 00:05:15.771 11:28:45 -- 
rpc/rpc.sh@21 -- # jq length 00:05:15.771 11:28:45 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:15.771 11:28:45 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:15.771 11:28:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:15.771 11:28:45 -- common/autotest_common.sh@10 -- # set +x 00:05:15.771 11:28:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:15.771 11:28:45 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:15.771 11:28:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:15.771 11:28:45 -- common/autotest_common.sh@10 -- # set +x 00:05:15.771 11:28:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:15.771 11:28:45 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:15.771 11:28:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:15.771 11:28:45 -- common/autotest_common.sh@10 -- # set +x 00:05:15.771 11:28:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:15.771 11:28:45 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:15.771 11:28:45 -- rpc/rpc.sh@26 -- # jq length 00:05:15.771 11:28:45 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:15.771 00:05:15.771 real 0m0.267s 00:05:15.771 user 0m0.152s 00:05:15.771 sys 0m0.056s 00:05:15.771 11:28:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.771 11:28:45 -- common/autotest_common.sh@10 -- # set +x 00:05:15.771 ************************************ 00:05:15.771 END TEST rpc_daemon_integrity 00:05:15.771 ************************************ 00:05:15.771 11:28:45 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:15.771 11:28:45 -- rpc/rpc.sh@84 -- # killprocess 2161058 00:05:15.771 11:28:45 -- common/autotest_common.sh@926 -- # '[' -z 2161058 ']' 00:05:15.771 11:28:45 -- common/autotest_common.sh@930 -- # kill -0 2161058 00:05:15.771 11:28:45 -- common/autotest_common.sh@931 -- # uname 00:05:15.771 11:28:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:15.771 11:28:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2161058 00:05:16.028 11:28:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:16.028 11:28:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:16.028 11:28:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2161058' 00:05:16.028 killing process with pid 2161058 00:05:16.028 11:28:45 -- common/autotest_common.sh@945 -- # kill 2161058 00:05:16.028 11:28:45 -- common/autotest_common.sh@950 -- # wait 2161058 00:05:16.286 00:05:16.286 real 0m2.410s 00:05:16.286 user 0m3.016s 00:05:16.286 sys 0m0.758s 00:05:16.286 11:28:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.286 11:28:45 -- common/autotest_common.sh@10 -- # set +x 00:05:16.286 ************************************ 00:05:16.286 END TEST rpc 00:05:16.286 ************************************ 00:05:16.286 11:28:45 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:16.286 11:28:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:16.286 11:28:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:16.286 11:28:45 -- common/autotest_common.sh@10 -- # set +x 00:05:16.286 ************************************ 00:05:16.286 START TEST rpc_client 00:05:16.286 ************************************ 00:05:16.286 11:28:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:16.286 * Looking for test storage... 
00:05:16.286 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:05:16.286 11:28:45 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:16.286 OK 00:05:16.286 11:28:45 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:16.286 00:05:16.286 real 0m0.118s 00:05:16.286 user 0m0.057s 00:05:16.286 sys 0m0.071s 00:05:16.286 11:28:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.286 11:28:45 -- common/autotest_common.sh@10 -- # set +x 00:05:16.286 ************************************ 00:05:16.286 END TEST rpc_client 00:05:16.286 ************************************ 00:05:16.597 11:28:45 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:16.597 11:28:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:16.597 11:28:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:16.597 11:28:45 -- common/autotest_common.sh@10 -- # set +x 00:05:16.597 ************************************ 00:05:16.597 START TEST json_config 00:05:16.597 ************************************ 00:05:16.597 11:28:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:16.597 11:28:45 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:16.597 11:28:45 -- nvmf/common.sh@7 -- # uname -s 00:05:16.597 11:28:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:16.597 11:28:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:16.597 11:28:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:16.597 11:28:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:16.597 11:28:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:16.597 11:28:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:16.597 11:28:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:16.597 11:28:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:16.597 11:28:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:16.597 11:28:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:16.597 11:28:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:05:16.597 11:28:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:05:16.597 11:28:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:16.597 11:28:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:16.597 11:28:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:16.597 11:28:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:16.597 11:28:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:16.597 11:28:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:16.597 11:28:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:16.597 11:28:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.597 
11:28:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.597 11:28:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.597 11:28:45 -- paths/export.sh@5 -- # export PATH 00:05:16.598 11:28:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.598 11:28:45 -- nvmf/common.sh@46 -- # : 0 00:05:16.598 11:28:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:16.598 11:28:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:16.598 11:28:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:16.598 11:28:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:16.598 11:28:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:16.598 11:28:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:16.598 11:28:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:16.598 11:28:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:16.598 11:28:45 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:16.598 11:28:45 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:16.598 11:28:45 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:16.598 11:28:45 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:16.598 11:28:45 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:16.598 11:28:45 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:16.598 11:28:45 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:16.598 11:28:45 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:16.598 11:28:45 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:16.598 11:28:45 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:16.598 11:28:45 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:05:16.598 11:28:45 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:16.598 11:28:45 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:16.598 11:28:45 -- json_config/json_config.sh@418 -- # trap 'on_error_exit 
"${FUNCNAME}" "${LINENO}"' ERR 00:05:16.598 11:28:45 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:16.598 INFO: JSON configuration test init 00:05:16.598 11:28:45 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:16.598 11:28:45 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:16.598 11:28:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:16.598 11:28:45 -- common/autotest_common.sh@10 -- # set +x 00:05:16.598 11:28:45 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:16.598 11:28:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:16.598 11:28:45 -- common/autotest_common.sh@10 -- # set +x 00:05:16.598 11:28:45 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:16.598 11:28:45 -- json_config/json_config.sh@98 -- # local app=target 00:05:16.598 11:28:45 -- json_config/json_config.sh@99 -- # shift 00:05:16.598 11:28:45 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:16.598 11:28:45 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:16.598 11:28:45 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:16.598 11:28:45 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:16.598 11:28:45 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:16.598 11:28:45 -- json_config/json_config.sh@111 -- # app_pid[$app]=2162677 00:05:16.598 11:28:45 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:16.598 Waiting for target to run... 00:05:16.598 11:28:45 -- json_config/json_config.sh@114 -- # waitforlisten 2162677 /var/tmp/spdk_tgt.sock 00:05:16.598 11:28:45 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:16.598 11:28:45 -- common/autotest_common.sh@819 -- # '[' -z 2162677 ']' 00:05:16.598 11:28:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:16.598 11:28:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:16.598 11:28:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:16.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:16.598 11:28:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:16.598 11:28:45 -- common/autotest_common.sh@10 -- # set +x 00:05:16.598 [2024-07-21 11:28:45.896088] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:16.598 [2024-07-21 11:28:45.896147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2162677 ] 00:05:16.598 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.855 [2024-07-21 11:28:46.188188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.855 [2024-07-21 11:28:46.209129] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:16.855 [2024-07-21 11:28:46.209227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.422 11:28:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:17.422 11:28:46 -- common/autotest_common.sh@852 -- # return 0 00:05:17.422 11:28:46 -- json_config/json_config.sh@115 -- # echo '' 00:05:17.422 00:05:17.422 11:28:46 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:17.422 11:28:46 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:17.422 11:28:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:17.422 11:28:46 -- common/autotest_common.sh@10 -- # set +x 00:05:17.422 11:28:46 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:17.422 11:28:46 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:17.422 11:28:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:17.422 11:28:46 -- common/autotest_common.sh@10 -- # set +x 00:05:17.422 11:28:46 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:17.422 11:28:46 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:17.422 11:28:46 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:20.711 11:28:49 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:20.712 11:28:49 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:20.712 11:28:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:20.712 11:28:49 -- common/autotest_common.sh@10 -- # set +x 00:05:20.712 11:28:49 -- json_config/json_config.sh@48 -- # local ret=0 00:05:20.712 11:28:49 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:20.712 11:28:49 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:20.712 11:28:49 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:20.712 11:28:49 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:20.712 11:28:49 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:20.712 11:28:49 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:20.712 11:28:49 -- json_config/json_config.sh@51 -- # local get_types 00:05:20.712 11:28:49 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:20.712 11:28:49 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:20.712 11:28:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:20.712 11:28:49 -- common/autotest_common.sh@10 -- # set +x 00:05:20.712 11:28:50 -- json_config/json_config.sh@58 -- # return 0 00:05:20.712 11:28:50 -- 
json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:20.712 11:28:50 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:20.712 11:28:50 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:20.712 11:28:50 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:20.712 11:28:50 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:20.712 11:28:50 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:20.712 11:28:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:20.712 11:28:50 -- common/autotest_common.sh@10 -- # set +x 00:05:20.712 11:28:50 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:20.712 11:28:50 -- json_config/json_config.sh@286 -- # [[ rdma == \r\d\m\a ]] 00:05:20.712 11:28:50 -- json_config/json_config.sh@287 -- # TEST_TRANSPORT=rdma 00:05:20.712 11:28:50 -- json_config/json_config.sh@287 -- # nvmftestinit 00:05:20.712 11:28:50 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:05:20.712 11:28:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:20.712 11:28:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:05:20.712 11:28:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:05:20.712 11:28:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:05:20.712 11:28:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:20.712 11:28:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:05:20.712 11:28:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:20.712 11:28:50 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:05:20.712 11:28:50 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:05:20.712 11:28:50 -- nvmf/common.sh@284 -- # xtrace_disable 00:05:20.712 11:28:50 -- common/autotest_common.sh@10 -- # set +x 00:05:28.819 11:28:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:05:28.819 11:28:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:05:28.819 11:28:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:05:28.819 11:28:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:05:28.819 11:28:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:05:28.819 11:28:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:05:28.819 11:28:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:05:28.819 11:28:58 -- nvmf/common.sh@294 -- # net_devs=() 00:05:28.819 11:28:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:05:28.819 11:28:58 -- nvmf/common.sh@295 -- # e810=() 00:05:28.819 11:28:58 -- nvmf/common.sh@295 -- # local -ga e810 00:05:28.819 11:28:58 -- nvmf/common.sh@296 -- # x722=() 00:05:28.819 11:28:58 -- nvmf/common.sh@296 -- # local -ga x722 00:05:28.819 11:28:58 -- nvmf/common.sh@297 -- # mlx=() 00:05:28.819 11:28:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:05:28.819 11:28:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:28.819 11:28:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:28.819 11:28:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:28.819 11:28:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:28.819 11:28:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:28.819 11:28:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:28.819 11:28:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:28.819 11:28:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:05:28.819 11:28:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:28.819 11:28:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:28.819 11:28:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:28.819 11:28:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:05:28.819 11:28:58 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:05:28.819 11:28:58 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:05:28.819 11:28:58 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:05:28.819 11:28:58 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:05:28.819 11:28:58 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:05:28.819 11:28:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:05:28.819 11:28:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:05:28.819 11:28:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:05:28.819 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:05:28.819 11:28:58 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:05:28.819 11:28:58 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:05:28.819 11:28:58 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:28.819 11:28:58 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:28.819 11:28:58 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:05:28.819 11:28:58 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:05:28.819 11:28:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:05:28.819 11:28:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:05:28.819 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:05:28.819 11:28:58 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:05:28.819 11:28:58 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:05:28.819 11:28:58 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:28.819 11:28:58 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:28.819 11:28:58 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:05:28.819 11:28:58 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:05:28.819 11:28:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:05:28.819 11:28:58 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:05:28.819 11:28:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:05:28.819 11:28:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:28.819 11:28:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:05:28.819 11:28:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:28.819 11:28:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:05:28.819 Found net devices under 0000:d9:00.0: mlx_0_0 00:05:28.819 11:28:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:05:28.819 11:28:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:05:28.820 11:28:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:28.820 11:28:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:05:28.820 11:28:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:28.820 11:28:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:05:28.820 Found net devices under 0000:d9:00.1: mlx_0_1 00:05:28.820 11:28:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:05:28.820 11:28:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:05:28.820 11:28:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:05:28.820 11:28:58 -- 
nvmf/common.sh@404 -- # [[ yes == yes ]] 00:05:28.820 11:28:58 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:05:28.820 11:28:58 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:05:28.820 11:28:58 -- nvmf/common.sh@408 -- # rdma_device_init 00:05:28.820 11:28:58 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:05:28.820 11:28:58 -- nvmf/common.sh@57 -- # uname 00:05:28.820 11:28:58 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:05:28.820 11:28:58 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:05:28.820 11:28:58 -- nvmf/common.sh@62 -- # modprobe ib_core 00:05:28.820 11:28:58 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:05:28.820 11:28:58 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:05:28.820 11:28:58 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:05:28.820 11:28:58 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:05:28.820 11:28:58 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:05:28.820 11:28:58 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:05:28.820 11:28:58 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:28.820 11:28:58 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:05:28.820 11:28:58 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:28.820 11:28:58 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:05:28.820 11:28:58 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:05:28.820 11:28:58 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:28.820 11:28:58 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:05:28.820 11:28:58 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:05:28.820 11:28:58 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:28.820 11:28:58 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:28.820 11:28:58 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:05:28.820 11:28:58 -- nvmf/common.sh@104 -- # continue 2 00:05:28.820 11:28:58 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:05:28.820 11:28:58 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:28.820 11:28:58 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:28.820 11:28:58 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:28.820 11:28:58 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:28.820 11:28:58 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:05:28.820 11:28:58 -- nvmf/common.sh@104 -- # continue 2 00:05:28.820 11:28:58 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:05:28.820 11:28:58 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:05:28.820 11:28:58 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:05:28.820 11:28:58 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:05:28.820 11:28:58 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:05:28.820 11:28:58 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:05:28.820 11:28:58 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:05:28.820 11:28:58 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:05:28.820 11:28:58 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:05:28.820 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:28.820 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:05:28.820 altname enp217s0f0np0 00:05:28.820 altname ens818f0np0 00:05:28.820 inet 192.168.100.8/24 scope global mlx_0_0 00:05:28.820 valid_lft forever preferred_lft forever 00:05:28.820 11:28:58 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:05:28.820 11:28:58 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:05:28.820 
11:28:58 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:05:28.820 11:28:58 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:05:28.820 11:28:58 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:05:28.820 11:28:58 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:05:29.098 11:28:58 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:05:29.098 11:28:58 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:05:29.098 11:28:58 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:05:29.098 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:29.098 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:05:29.098 altname enp217s0f1np1 00:05:29.098 altname ens818f1np1 00:05:29.098 inet 192.168.100.9/24 scope global mlx_0_1 00:05:29.098 valid_lft forever preferred_lft forever 00:05:29.098 11:28:58 -- nvmf/common.sh@410 -- # return 0 00:05:29.098 11:28:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:05:29.098 11:28:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:29.098 11:28:58 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:05:29.098 11:28:58 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:05:29.098 11:28:58 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:05:29.098 11:28:58 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:29.098 11:28:58 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:05:29.098 11:28:58 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:05:29.098 11:28:58 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:29.098 11:28:58 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:05:29.098 11:28:58 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:05:29.098 11:28:58 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:29.098 11:28:58 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:29.098 11:28:58 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:05:29.098 11:28:58 -- nvmf/common.sh@104 -- # continue 2 00:05:29.098 11:28:58 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:05:29.098 11:28:58 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:29.098 11:28:58 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:29.098 11:28:58 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:29.098 11:28:58 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:29.098 11:28:58 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:05:29.098 11:28:58 -- nvmf/common.sh@104 -- # continue 2 00:05:29.098 11:28:58 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:05:29.098 11:28:58 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:05:29.098 11:28:58 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:05:29.098 11:28:58 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:05:29.098 11:28:58 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:05:29.098 11:28:58 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:05:29.098 11:28:58 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:05:29.098 11:28:58 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:05:29.098 11:28:58 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:05:29.098 11:28:58 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:05:29.098 11:28:58 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:05:29.098 11:28:58 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:05:29.098 11:28:58 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:05:29.098 192.168.100.9' 00:05:29.098 11:28:58 -- nvmf/common.sh@445 -- # echo '192.168.100.8 
00:05:29.098 192.168.100.9' 00:05:29.098 11:28:58 -- nvmf/common.sh@445 -- # head -n 1 00:05:29.098 11:28:58 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:29.098 11:28:58 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:05:29.098 192.168.100.9' 00:05:29.098 11:28:58 -- nvmf/common.sh@446 -- # tail -n +2 00:05:29.098 11:28:58 -- nvmf/common.sh@446 -- # head -n 1 00:05:29.098 11:28:58 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:29.098 11:28:58 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:05:29.098 11:28:58 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:29.098 11:28:58 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:05:29.098 11:28:58 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:05:29.098 11:28:58 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:05:29.098 11:28:58 -- json_config/json_config.sh@290 -- # [[ -z 192.168.100.8 ]] 00:05:29.098 11:28:58 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:29.098 11:28:58 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:29.373 MallocForNvmf0 00:05:29.373 11:28:58 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:29.373 11:28:58 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:29.373 MallocForNvmf1 00:05:29.373 11:28:58 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:05:29.373 11:28:58 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:05:29.630 [2024-07-21 11:28:58.842520] rdma.c:2778:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:05:29.630 [2024-07-21 11:28:58.874599] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x167c480/0x1689a00) succeed. 00:05:29.630 [2024-07-21 11:28:58.886270] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x167e670/0x1709a40) succeed. 
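At this point the target holds the two malloc bdevs and an RDMA transport, but no subsystem yet. A quick sketch for inspecting that intermediate state over the same socket (bdev_get_bdevs and jq appear earlier in this run; nvmf_get_transports is a standard SPDK RPC assumed here for illustration):

  rpc="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $rpc bdev_get_bdevs | jq -r '.[].name'   # expect MallocForNvmf0 and MallocForNvmf1
  $rpc nvmf_get_transports                 # expect one rdma transport with io_unit_size 8192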
00:05:29.630 11:28:58 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:29.630 11:28:58 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:29.887 11:28:59 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:29.887 11:28:59 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:29.887 11:28:59 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:29.887 11:28:59 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:30.145 11:28:59 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:30.145 11:28:59 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:30.145 [2024-07-21 11:28:59.550134] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:30.402 11:28:59 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:30.402 11:28:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:30.402 11:28:59 -- common/autotest_common.sh@10 -- # set +x 00:05:30.402 11:28:59 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:30.402 11:28:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:30.402 11:28:59 -- common/autotest_common.sh@10 -- # set +x 00:05:30.402 11:28:59 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:30.402 11:28:59 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:30.402 11:28:59 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:30.402 MallocBdevForConfigChangeCheck 00:05:30.659 11:28:59 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:30.659 11:28:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:30.659 11:28:59 -- common/autotest_common.sh@10 -- # set +x 00:05:30.659 11:28:59 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:30.659 11:28:59 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:30.915 11:29:00 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:05:30.915 INFO: shutting down applications... 
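With the listener up on 192.168.100.8 port 4420, the exported subsystem would be reachable from an RDMA initiator before the harness tears the target down. A sketch using nvme-cli over the kernel path (not executed in this run; nvme-rdma was modprobe'd earlier, and -i 15 mirrors the NVME_CONNECT override selected above for these mlx5 ports):

  nvme discover -t rdma -a 192.168.100.8 -s 4420
  nvme connect -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -i 15
  nvme list    # both malloc namespaces should show up under the new controller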
00:05:30.915 11:29:00 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:30.915 11:29:00 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:30.915 11:29:00 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:30.915 11:29:00 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:33.438 Calling clear_iscsi_subsystem 00:05:33.438 Calling clear_nvmf_subsystem 00:05:33.438 Calling clear_nbd_subsystem 00:05:33.438 Calling clear_ublk_subsystem 00:05:33.438 Calling clear_vhost_blk_subsystem 00:05:33.438 Calling clear_vhost_scsi_subsystem 00:05:33.438 Calling clear_scheduler_subsystem 00:05:33.438 Calling clear_bdev_subsystem 00:05:33.438 Calling clear_accel_subsystem 00:05:33.438 Calling clear_vmd_subsystem 00:05:33.438 Calling clear_sock_subsystem 00:05:33.438 Calling clear_iobuf_subsystem 00:05:33.438 11:29:02 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:05:33.438 11:29:02 -- json_config/json_config.sh@396 -- # count=100 00:05:33.438 11:29:02 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:33.438 11:29:02 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:33.438 11:29:02 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:33.438 11:29:02 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:33.695 11:29:03 -- json_config/json_config.sh@398 -- # break 00:05:33.695 11:29:03 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:33.695 11:29:03 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:05:33.695 11:29:03 -- json_config/json_config.sh@120 -- # local app=target 00:05:33.695 11:29:03 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:33.695 11:29:03 -- json_config/json_config.sh@124 -- # [[ -n 2162677 ]] 00:05:33.695 11:29:03 -- json_config/json_config.sh@127 -- # kill -SIGINT 2162677 00:05:33.695 11:29:03 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:33.695 11:29:03 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:33.695 11:29:03 -- json_config/json_config.sh@130 -- # kill -0 2162677 00:05:33.695 11:29:03 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:34.259 11:29:03 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:34.260 11:29:03 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:34.260 11:29:03 -- json_config/json_config.sh@130 -- # kill -0 2162677 00:05:34.260 11:29:03 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:34.260 11:29:03 -- json_config/json_config.sh@132 -- # break 00:05:34.260 11:29:03 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:34.260 11:29:03 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:34.260 SPDK target shutdown done 00:05:34.260 11:29:03 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:34.260 INFO: relaunching applications... 
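The shutdown/relaunch pair here is the heart of the round-trip check: snapshot the live configuration, SIGINT the target, wait for the PID to exit, then boot a fresh target straight from the JSON. Reduced to a sketch with this run's PID and paths:

  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
  kill -SIGINT 2162677
  for i in $(seq 1 30); do
      kill -0 2162677 2>/dev/null || break   # kill -0 only probes whether the PID is still alive
      sleep 0.5
  done
  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json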
00:05:34.260 11:29:03 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:34.260 11:29:03 -- json_config/json_config.sh@98 -- # local app=target 00:05:34.260 11:29:03 -- json_config/json_config.sh@99 -- # shift 00:05:34.260 11:29:03 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:34.260 11:29:03 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:34.260 11:29:03 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:34.260 11:29:03 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:34.260 11:29:03 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:34.260 11:29:03 -- json_config/json_config.sh@111 -- # app_pid[$app]=2171585 00:05:34.260 11:29:03 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:34.260 Waiting for target to run... 00:05:34.260 11:29:03 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:34.260 11:29:03 -- json_config/json_config.sh@114 -- # waitforlisten 2171585 /var/tmp/spdk_tgt.sock 00:05:34.260 11:29:03 -- common/autotest_common.sh@819 -- # '[' -z 2171585 ']' 00:05:34.260 11:29:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:34.260 11:29:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:34.260 11:29:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:34.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:34.260 11:29:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:34.260 11:29:03 -- common/autotest_common.sh@10 -- # set +x 00:05:34.260 [2024-07-21 11:29:03.563210] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:34.260 [2024-07-21 11:29:03.563265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2171585 ] 00:05:34.260 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.517 [2024-07-21 11:29:03.863829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.517 [2024-07-21 11:29:03.884460] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:34.517 [2024-07-21 11:29:03.884554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.793 [2024-07-21 11:29:06.911864] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ae47d0/0x194ee60) succeed. 00:05:37.793 [2024-07-21 11:29:06.922758] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ae49a0/0x19cef00) succeed. 
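Once the relaunched target is listening, the json_diff helper that follows normalizes both configurations with config_filter.py and compares them; a no-op relaunch must produce an empty diff. The comparison reduces to this sketch (/tmp/live.json and /tmp/file.json are illustrative names; the harness uses mktemp):

  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | test/json_config/config_filter.py -method sort > /tmp/live.json
  test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/file.json
  diff -u /tmp/file.json /tmp/live.json   # exit 0 here is the 'JSON config files are the same' case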
00:05:37.793 [2024-07-21 11:29:06.971125] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:38.357 11:29:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:38.357 11:29:07 -- common/autotest_common.sh@852 -- # return 0 00:05:38.357 11:29:07 -- json_config/json_config.sh@115 -- # echo '' 00:05:38.357 00:05:38.357 11:29:07 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:38.357 11:29:07 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:38.357 INFO: Checking if target configuration is the same... 00:05:38.357 11:29:07 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:38.357 11:29:07 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:38.357 11:29:07 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:38.357 + '[' 2 -ne 2 ']' 00:05:38.357 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:38.357 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:05:38.357 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:38.357 +++ basename /dev/fd/62 00:05:38.357 ++ mktemp /tmp/62.XXX 00:05:38.357 + tmp_file_1=/tmp/62.YJI 00:05:38.357 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:38.357 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:38.357 + tmp_file_2=/tmp/spdk_tgt_config.json.CmD 00:05:38.357 + ret=0 00:05:38.357 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:38.613 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:38.613 + diff -u /tmp/62.YJI /tmp/spdk_tgt_config.json.CmD 00:05:38.613 + echo 'INFO: JSON config files are the same' 00:05:38.613 INFO: JSON config files are the same 00:05:38.613 + rm /tmp/62.YJI /tmp/spdk_tgt_config.json.CmD 00:05:38.613 + exit 0 00:05:38.613 11:29:07 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:38.613 11:29:07 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:38.613 INFO: changing configuration and checking if this can be detected... 00:05:38.613 11:29:07 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:38.613 11:29:07 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:38.870 11:29:08 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:38.870 11:29:08 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:38.870 11:29:08 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:38.870 + '[' 2 -ne 2 ']' 00:05:38.870 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:38.870 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 
00:05:38.870 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:38.870 +++ basename /dev/fd/62 00:05:38.870 ++ mktemp /tmp/62.XXX 00:05:38.870 + tmp_file_1=/tmp/62.CcS 00:05:38.870 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:38.870 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:38.870 + tmp_file_2=/tmp/spdk_tgt_config.json.BMU 00:05:38.870 + ret=0 00:05:38.870 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:39.126 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:39.126 + diff -u /tmp/62.CcS /tmp/spdk_tgt_config.json.BMU 00:05:39.126 + ret=1 00:05:39.126 + echo '=== Start of file: /tmp/62.CcS ===' 00:05:39.126 + cat /tmp/62.CcS 00:05:39.126 + echo '=== End of file: /tmp/62.CcS ===' 00:05:39.126 + echo '' 00:05:39.126 + echo '=== Start of file: /tmp/spdk_tgt_config.json.BMU ===' 00:05:39.126 + cat /tmp/spdk_tgt_config.json.BMU 00:05:39.126 + echo '=== End of file: /tmp/spdk_tgt_config.json.BMU ===' 00:05:39.126 + echo '' 00:05:39.126 + rm /tmp/62.CcS /tmp/spdk_tgt_config.json.BMU 00:05:39.126 + exit 1 00:05:39.126 11:29:08 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:39.126 INFO: configuration change detected. 00:05:39.126 11:29:08 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:39.126 11:29:08 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:39.126 11:29:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:39.126 11:29:08 -- common/autotest_common.sh@10 -- # set +x 00:05:39.126 11:29:08 -- json_config/json_config.sh@360 -- # local ret=0 00:05:39.126 11:29:08 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:39.126 11:29:08 -- json_config/json_config.sh@370 -- # [[ -n 2171585 ]] 00:05:39.126 11:29:08 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:39.126 11:29:08 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:39.126 11:29:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:39.126 11:29:08 -- common/autotest_common.sh@10 -- # set +x 00:05:39.126 11:29:08 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:39.126 11:29:08 -- json_config/json_config.sh@246 -- # uname -s 00:05:39.126 11:29:08 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:39.126 11:29:08 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:39.126 11:29:08 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:39.126 11:29:08 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:39.126 11:29:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:39.126 11:29:08 -- common/autotest_common.sh@10 -- # set +x 00:05:39.383 11:29:08 -- json_config/json_config.sh@376 -- # killprocess 2171585 00:05:39.383 11:29:08 -- common/autotest_common.sh@926 -- # '[' -z 2171585 ']' 00:05:39.383 11:29:08 -- common/autotest_common.sh@930 -- # kill -0 2171585 00:05:39.383 11:29:08 -- common/autotest_common.sh@931 -- # uname 00:05:39.383 11:29:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:39.383 11:29:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2171585 00:05:39.383 11:29:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:39.383 11:29:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:39.383 11:29:08 -- common/autotest_common.sh@944 -- # echo 'killing 
process with pid 2171585' 00:05:39.383 killing process with pid 2171585 00:05:39.383 11:29:08 -- common/autotest_common.sh@945 -- # kill 2171585 00:05:39.383 11:29:08 -- common/autotest_common.sh@950 -- # wait 2171585 00:05:41.911 11:29:11 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.911 11:29:11 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:41.911 11:29:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:41.911 11:29:11 -- common/autotest_common.sh@10 -- # set +x 00:05:41.911 11:29:11 -- json_config/json_config.sh@381 -- # return 0 00:05:41.911 11:29:11 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:41.911 INFO: Success 00:05:41.911 11:29:11 -- json_config/json_config.sh@1 -- # nvmftestfini 00:05:41.911 11:29:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:05:41.911 11:29:11 -- nvmf/common.sh@116 -- # sync 00:05:41.911 11:29:11 -- nvmf/common.sh@118 -- # '[' '' == tcp ']' 00:05:41.911 11:29:11 -- nvmf/common.sh@118 -- # '[' '' == rdma ']' 00:05:41.911 11:29:11 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:05:41.911 11:29:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:05:41.911 11:29:11 -- nvmf/common.sh@483 -- # [[ '' == \t\c\p ]] 00:05:41.911 00:05:41.911 real 0m25.435s 00:05:41.911 user 0m28.505s 00:05:41.911 sys 0m8.503s 00:05:41.911 11:29:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.911 11:29:11 -- common/autotest_common.sh@10 -- # set +x 00:05:41.911 ************************************ 00:05:41.911 END TEST json_config 00:05:41.911 ************************************ 00:05:41.911 11:29:11 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:41.911 11:29:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:41.911 11:29:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:41.911 11:29:11 -- common/autotest_common.sh@10 -- # set +x 00:05:41.911 ************************************ 00:05:41.911 START TEST json_config_extra_key 00:05:41.911 ************************************ 00:05:41.911 11:29:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:41.911 11:29:11 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:41.911 11:29:11 -- nvmf/common.sh@7 -- # uname -s 00:05:41.911 11:29:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:41.911 11:29:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:41.911 11:29:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:41.911 11:29:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:41.911 11:29:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:41.911 11:29:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:41.911 11:29:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:41.911 11:29:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:41.911 11:29:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:41.911 11:29:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:41.911 11:29:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:05:41.911 11:29:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 
00:05:41.911 11:29:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:41.911 11:29:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:41.911 11:29:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:41.911 11:29:11 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:41.911 11:29:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.911 11:29:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.911 11:29:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.911 11:29:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.911 11:29:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.911 11:29:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.911 11:29:11 -- paths/export.sh@5 -- # export PATH 00:05:41.911 11:29:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.911 11:29:11 -- nvmf/common.sh@46 -- # : 0 00:05:41.911 11:29:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:41.911 11:29:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:41.911 11:29:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:41.911 11:29:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:41.911 11:29:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:41.911 11:29:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:41.911 11:29:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:41.911 11:29:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:41.911 11:29:11 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:41.911 11:29:11 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:41.911 11:29:11 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:41.911 11:29:11 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:41.912 11:29:11 -- 
json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:41.912 11:29:11 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:41.912 11:29:11 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:41.912 11:29:11 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:41.912 11:29:11 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:41.912 11:29:11 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:41.912 INFO: launching applications... 00:05:41.912 11:29:11 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:41.912 11:29:11 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:41.912 11:29:11 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:41.912 11:29:11 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:41.912 11:29:11 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:41.912 11:29:11 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=2173060 00:05:41.912 11:29:11 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:41.912 Waiting for target to run... 00:05:41.912 11:29:11 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 2173060 /var/tmp/spdk_tgt.sock 00:05:41.912 11:29:11 -- common/autotest_common.sh@819 -- # '[' -z 2173060 ']' 00:05:41.912 11:29:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:41.912 11:29:11 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:41.912 11:29:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:41.912 11:29:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:41.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:41.912 11:29:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:41.912 11:29:11 -- common/autotest_common.sh@10 -- # set +x 00:05:42.170 [2024-07-21 11:29:11.363727] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
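What the launch above amounts to: start spdk_tgt on an alternate RPC socket with a pre-built JSON config, record the pid, and block until the socket answers. A rough equivalent, using `spdk_get_version` as the liveness probe (the probe choice is an assumption; the `waitforlisten` helper may check differently):

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    SOCK=/var/tmp/spdk_tgt.sock

    "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$SOCK" \
        --json "$SPDK/test/json_config/extra_key.json" &
    app_pid=$!

    # Poll the RPC socket until the target is up (or give up after ~10 s).
    for _ in $(seq 1 100); do
        "$SPDK/scripts/rpc.py" -s "$SOCK" spdk_get_version &>/dev/null && break
        sleep 0.1
    done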
00:05:42.170 [2024-07-21 11:29:11.363787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2173060 ] 00:05:42.170 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.439 [2024-07-21 11:29:11.661037] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.439 [2024-07-21 11:29:11.681397] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:42.439 [2024-07-21 11:29:11.681496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.007 11:29:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:43.007 11:29:12 -- common/autotest_common.sh@852 -- # return 0 00:05:43.007 11:29:12 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:43.007 00:05:43.007 11:29:12 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:43.007 INFO: shutting down applications... 00:05:43.007 11:29:12 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:43.007 11:29:12 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:43.007 11:29:12 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:43.007 11:29:12 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 2173060 ]] 00:05:43.007 11:29:12 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 2173060 00:05:43.007 11:29:12 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:43.007 11:29:12 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:43.007 11:29:12 -- json_config/json_config_extra_key.sh@50 -- # kill -0 2173060 00:05:43.007 11:29:12 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:43.265 11:29:12 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:43.265 11:29:12 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:43.265 11:29:12 -- json_config/json_config_extra_key.sh@50 -- # kill -0 2173060 00:05:43.265 11:29:12 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:43.265 11:29:12 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:43.265 11:29:12 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:43.265 11:29:12 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:43.265 SPDK target shutdown done 00:05:43.265 11:29:12 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:43.265 Success 00:05:43.265 00:05:43.265 real 0m1.438s 00:05:43.265 user 0m1.136s 00:05:43.265 sys 0m0.424s 00:05:43.265 11:29:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.265 11:29:12 -- common/autotest_common.sh@10 -- # set +x 00:05:43.265 ************************************ 00:05:43.265 END TEST json_config_extra_key 00:05:43.265 ************************************ 00:05:43.522 11:29:12 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:43.522 11:29:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:43.522 11:29:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.522 11:29:12 -- common/autotest_common.sh@10 -- # set +x 00:05:43.522 ************************************ 00:05:43.522 START TEST alias_rpc 00:05:43.522 ************************************ 00:05:43.522 11:29:12 -- 
common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:43.522 * Looking for test storage... 00:05:43.522 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:05:43.522 11:29:12 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:43.522 11:29:12 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2173376 00:05:43.522 11:29:12 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2173376 00:05:43.522 11:29:12 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.522 11:29:12 -- common/autotest_common.sh@819 -- # '[' -z 2173376 ']' 00:05:43.522 11:29:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.522 11:29:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:43.522 11:29:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.522 11:29:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:43.522 11:29:12 -- common/autotest_common.sh@10 -- # set +x 00:05:43.522 [2024-07-21 11:29:12.841042] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:43.522 [2024-07-21 11:29:12.841101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2173376 ] 00:05:43.522 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.522 [2024-07-21 11:29:12.926333] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.779 [2024-07-21 11:29:12.964886] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:43.779 [2024-07-21 11:29:12.964995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.342 11:29:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:44.342 11:29:13 -- common/autotest_common.sh@852 -- # return 0 00:05:44.342 11:29:13 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:44.600 11:29:13 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2173376 00:05:44.600 11:29:13 -- common/autotest_common.sh@926 -- # '[' -z 2173376 ']' 00:05:44.600 11:29:13 -- common/autotest_common.sh@930 -- # kill -0 2173376 00:05:44.600 11:29:13 -- common/autotest_common.sh@931 -- # uname 00:05:44.600 11:29:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:44.600 11:29:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2173376 00:05:44.600 11:29:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:44.600 11:29:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:44.600 11:29:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2173376' 00:05:44.600 killing process with pid 2173376 00:05:44.600 11:29:13 -- common/autotest_common.sh@945 -- # kill 2173376 00:05:44.600 11:29:13 -- common/autotest_common.sh@950 -- # wait 2173376 00:05:44.857 00:05:44.857 real 0m1.484s 00:05:44.857 user 0m1.567s 00:05:44.857 sys 0m0.462s 00:05:44.857 11:29:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.857 11:29:14 -- common/autotest_common.sh@10 -- # set +x 
00:05:44.857 ************************************ 00:05:44.857 END TEST alias_rpc 00:05:44.857 ************************************ 00:05:44.857 11:29:14 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:05:44.857 11:29:14 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:44.857 11:29:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:44.857 11:29:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.857 11:29:14 -- common/autotest_common.sh@10 -- # set +x 00:05:44.857 ************************************ 00:05:44.857 START TEST spdkcli_tcp 00:05:44.857 ************************************ 00:05:44.857 11:29:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:45.115 * Looking for test storage... 00:05:45.115 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:05:45.115 11:29:14 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:05:45.115 11:29:14 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:45.115 11:29:14 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:05:45.115 11:29:14 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:45.115 11:29:14 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:45.115 11:29:14 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:45.115 11:29:14 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:45.115 11:29:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:45.115 11:29:14 -- common/autotest_common.sh@10 -- # set +x 00:05:45.115 11:29:14 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2173705 00:05:45.115 11:29:14 -- spdkcli/tcp.sh@27 -- # waitforlisten 2173705 00:05:45.115 11:29:14 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:45.115 11:29:14 -- common/autotest_common.sh@819 -- # '[' -z 2173705 ']' 00:05:45.115 11:29:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.115 11:29:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:45.115 11:29:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.115 11:29:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:45.115 11:29:14 -- common/autotest_common.sh@10 -- # set +x 00:05:45.115 [2024-07-21 11:29:14.377736] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
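Before this spdkcli_tcp target finishes coming up, a note on the teardown pattern the previous two tests traced: json_config_extra_key sends SIGINT and polls `kill -0` for up to 30 half-second intervals, while alias_rpc's `killprocess` checks via `ps` that the pid still names an SPDK reactor before killing and reaping it. Condensed into a sketch (the reactor-name check is simplified; the real helper also special-cases sudo-wrapped targets):

    shutdown_app() {                      # graceful: SIGINT, then poll for exit
        local pid=$1 i
        kill -SIGINT "$pid"
        for ((i = 0; i < 30; i++)); do
            kill -0 "$pid" 2>/dev/null || return 0    # gone: shutdown done
            sleep 0.5
        done
        return 1                          # still alive after ~15 s
    }

    killprocess() {                       # forceful, with a sanity check first
        local pid=$1 name
        name=$(ps --no-headers -o comm= "$pid")
        [[ $name == reactor_* ]] || return 1          # refuse unrelated processes
        kill "$pid" && wait "$pid"        # wait only reaps if $pid is our child
    }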
00:05:45.115 [2024-07-21 11:29:14.377793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2173705 ] 00:05:45.115 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.115 [2024-07-21 11:29:14.462841] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:45.115 [2024-07-21 11:29:14.500842] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:45.115 [2024-07-21 11:29:14.500981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.115 [2024-07-21 11:29:14.500984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.047 11:29:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:46.047 11:29:15 -- common/autotest_common.sh@852 -- # return 0 00:05:46.047 11:29:15 -- spdkcli/tcp.sh@31 -- # socat_pid=2173866 00:05:46.047 11:29:15 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:46.047 11:29:15 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:46.047 [ 00:05:46.047 "bdev_malloc_delete", 00:05:46.048 "bdev_malloc_create", 00:05:46.048 "bdev_null_resize", 00:05:46.048 "bdev_null_delete", 00:05:46.048 "bdev_null_create", 00:05:46.048 "bdev_nvme_cuse_unregister", 00:05:46.048 "bdev_nvme_cuse_register", 00:05:46.048 "bdev_opal_new_user", 00:05:46.048 "bdev_opal_set_lock_state", 00:05:46.048 "bdev_opal_delete", 00:05:46.048 "bdev_opal_get_info", 00:05:46.048 "bdev_opal_create", 00:05:46.048 "bdev_nvme_opal_revert", 00:05:46.048 "bdev_nvme_opal_init", 00:05:46.048 "bdev_nvme_send_cmd", 00:05:46.048 "bdev_nvme_get_path_iostat", 00:05:46.048 "bdev_nvme_get_mdns_discovery_info", 00:05:46.048 "bdev_nvme_stop_mdns_discovery", 00:05:46.048 "bdev_nvme_start_mdns_discovery", 00:05:46.048 "bdev_nvme_set_multipath_policy", 00:05:46.048 "bdev_nvme_set_preferred_path", 00:05:46.048 "bdev_nvme_get_io_paths", 00:05:46.048 "bdev_nvme_remove_error_injection", 00:05:46.048 "bdev_nvme_add_error_injection", 00:05:46.048 "bdev_nvme_get_discovery_info", 00:05:46.048 "bdev_nvme_stop_discovery", 00:05:46.048 "bdev_nvme_start_discovery", 00:05:46.048 "bdev_nvme_get_controller_health_info", 00:05:46.048 "bdev_nvme_disable_controller", 00:05:46.048 "bdev_nvme_enable_controller", 00:05:46.048 "bdev_nvme_reset_controller", 00:05:46.048 "bdev_nvme_get_transport_statistics", 00:05:46.048 "bdev_nvme_apply_firmware", 00:05:46.048 "bdev_nvme_detach_controller", 00:05:46.048 "bdev_nvme_get_controllers", 00:05:46.048 "bdev_nvme_attach_controller", 00:05:46.048 "bdev_nvme_set_hotplug", 00:05:46.048 "bdev_nvme_set_options", 00:05:46.048 "bdev_passthru_delete", 00:05:46.048 "bdev_passthru_create", 00:05:46.048 "bdev_lvol_grow_lvstore", 00:05:46.048 "bdev_lvol_get_lvols", 00:05:46.048 "bdev_lvol_get_lvstores", 00:05:46.048 "bdev_lvol_delete", 00:05:46.048 "bdev_lvol_set_read_only", 00:05:46.048 "bdev_lvol_resize", 00:05:46.048 "bdev_lvol_decouple_parent", 00:05:46.048 "bdev_lvol_inflate", 00:05:46.048 "bdev_lvol_rename", 00:05:46.048 "bdev_lvol_clone_bdev", 00:05:46.048 "bdev_lvol_clone", 00:05:46.048 "bdev_lvol_snapshot", 00:05:46.048 "bdev_lvol_create", 00:05:46.048 "bdev_lvol_delete_lvstore", 00:05:46.048 "bdev_lvol_rename_lvstore", 00:05:46.048 "bdev_lvol_create_lvstore", 00:05:46.048 "bdev_raid_set_options", 00:05:46.048 
"bdev_raid_remove_base_bdev", 00:05:46.048 "bdev_raid_add_base_bdev", 00:05:46.048 "bdev_raid_delete", 00:05:46.048 "bdev_raid_create", 00:05:46.048 "bdev_raid_get_bdevs", 00:05:46.048 "bdev_error_inject_error", 00:05:46.048 "bdev_error_delete", 00:05:46.048 "bdev_error_create", 00:05:46.048 "bdev_split_delete", 00:05:46.048 "bdev_split_create", 00:05:46.048 "bdev_delay_delete", 00:05:46.048 "bdev_delay_create", 00:05:46.048 "bdev_delay_update_latency", 00:05:46.048 "bdev_zone_block_delete", 00:05:46.048 "bdev_zone_block_create", 00:05:46.048 "blobfs_create", 00:05:46.048 "blobfs_detect", 00:05:46.048 "blobfs_set_cache_size", 00:05:46.048 "bdev_aio_delete", 00:05:46.048 "bdev_aio_rescan", 00:05:46.048 "bdev_aio_create", 00:05:46.048 "bdev_ftl_set_property", 00:05:46.048 "bdev_ftl_get_properties", 00:05:46.048 "bdev_ftl_get_stats", 00:05:46.048 "bdev_ftl_unmap", 00:05:46.048 "bdev_ftl_unload", 00:05:46.048 "bdev_ftl_delete", 00:05:46.048 "bdev_ftl_load", 00:05:46.048 "bdev_ftl_create", 00:05:46.048 "bdev_virtio_attach_controller", 00:05:46.048 "bdev_virtio_scsi_get_devices", 00:05:46.048 "bdev_virtio_detach_controller", 00:05:46.048 "bdev_virtio_blk_set_hotplug", 00:05:46.048 "bdev_iscsi_delete", 00:05:46.048 "bdev_iscsi_create", 00:05:46.048 "bdev_iscsi_set_options", 00:05:46.048 "accel_error_inject_error", 00:05:46.048 "ioat_scan_accel_module", 00:05:46.048 "dsa_scan_accel_module", 00:05:46.048 "iaa_scan_accel_module", 00:05:46.048 "iscsi_set_options", 00:05:46.048 "iscsi_get_auth_groups", 00:05:46.048 "iscsi_auth_group_remove_secret", 00:05:46.048 "iscsi_auth_group_add_secret", 00:05:46.048 "iscsi_delete_auth_group", 00:05:46.048 "iscsi_create_auth_group", 00:05:46.048 "iscsi_set_discovery_auth", 00:05:46.048 "iscsi_get_options", 00:05:46.048 "iscsi_target_node_request_logout", 00:05:46.048 "iscsi_target_node_set_redirect", 00:05:46.048 "iscsi_target_node_set_auth", 00:05:46.048 "iscsi_target_node_add_lun", 00:05:46.048 "iscsi_get_connections", 00:05:46.048 "iscsi_portal_group_set_auth", 00:05:46.048 "iscsi_start_portal_group", 00:05:46.048 "iscsi_delete_portal_group", 00:05:46.048 "iscsi_create_portal_group", 00:05:46.048 "iscsi_get_portal_groups", 00:05:46.048 "iscsi_delete_target_node", 00:05:46.048 "iscsi_target_node_remove_pg_ig_maps", 00:05:46.048 "iscsi_target_node_add_pg_ig_maps", 00:05:46.048 "iscsi_create_target_node", 00:05:46.048 "iscsi_get_target_nodes", 00:05:46.048 "iscsi_delete_initiator_group", 00:05:46.048 "iscsi_initiator_group_remove_initiators", 00:05:46.048 "iscsi_initiator_group_add_initiators", 00:05:46.048 "iscsi_create_initiator_group", 00:05:46.048 "iscsi_get_initiator_groups", 00:05:46.048 "nvmf_set_crdt", 00:05:46.048 "nvmf_set_config", 00:05:46.048 "nvmf_set_max_subsystems", 00:05:46.048 "nvmf_subsystem_get_listeners", 00:05:46.048 "nvmf_subsystem_get_qpairs", 00:05:46.048 "nvmf_subsystem_get_controllers", 00:05:46.048 "nvmf_get_stats", 00:05:46.048 "nvmf_get_transports", 00:05:46.048 "nvmf_create_transport", 00:05:46.048 "nvmf_get_targets", 00:05:46.048 "nvmf_delete_target", 00:05:46.048 "nvmf_create_target", 00:05:46.048 "nvmf_subsystem_allow_any_host", 00:05:46.048 "nvmf_subsystem_remove_host", 00:05:46.048 "nvmf_subsystem_add_host", 00:05:46.048 "nvmf_subsystem_remove_ns", 00:05:46.048 "nvmf_subsystem_add_ns", 00:05:46.048 "nvmf_subsystem_listener_set_ana_state", 00:05:46.048 "nvmf_discovery_get_referrals", 00:05:46.048 "nvmf_discovery_remove_referral", 00:05:46.048 "nvmf_discovery_add_referral", 00:05:46.048 "nvmf_subsystem_remove_listener", 
00:05:46.048 "nvmf_subsystem_add_listener", 00:05:46.048 "nvmf_delete_subsystem", 00:05:46.048 "nvmf_create_subsystem", 00:05:46.048 "nvmf_get_subsystems", 00:05:46.048 "env_dpdk_get_mem_stats", 00:05:46.048 "nbd_get_disks", 00:05:46.048 "nbd_stop_disk", 00:05:46.048 "nbd_start_disk", 00:05:46.048 "ublk_recover_disk", 00:05:46.048 "ublk_get_disks", 00:05:46.048 "ublk_stop_disk", 00:05:46.048 "ublk_start_disk", 00:05:46.048 "ublk_destroy_target", 00:05:46.048 "ublk_create_target", 00:05:46.048 "virtio_blk_create_transport", 00:05:46.048 "virtio_blk_get_transports", 00:05:46.048 "vhost_controller_set_coalescing", 00:05:46.048 "vhost_get_controllers", 00:05:46.048 "vhost_delete_controller", 00:05:46.048 "vhost_create_blk_controller", 00:05:46.048 "vhost_scsi_controller_remove_target", 00:05:46.048 "vhost_scsi_controller_add_target", 00:05:46.048 "vhost_start_scsi_controller", 00:05:46.048 "vhost_create_scsi_controller", 00:05:46.048 "thread_set_cpumask", 00:05:46.048 "framework_get_scheduler", 00:05:46.048 "framework_set_scheduler", 00:05:46.048 "framework_get_reactors", 00:05:46.048 "thread_get_io_channels", 00:05:46.048 "thread_get_pollers", 00:05:46.048 "thread_get_stats", 00:05:46.048 "framework_monitor_context_switch", 00:05:46.048 "spdk_kill_instance", 00:05:46.048 "log_enable_timestamps", 00:05:46.048 "log_get_flags", 00:05:46.048 "log_clear_flag", 00:05:46.048 "log_set_flag", 00:05:46.048 "log_get_level", 00:05:46.048 "log_set_level", 00:05:46.048 "log_get_print_level", 00:05:46.048 "log_set_print_level", 00:05:46.048 "framework_enable_cpumask_locks", 00:05:46.048 "framework_disable_cpumask_locks", 00:05:46.048 "framework_wait_init", 00:05:46.048 "framework_start_init", 00:05:46.048 "scsi_get_devices", 00:05:46.048 "bdev_get_histogram", 00:05:46.048 "bdev_enable_histogram", 00:05:46.048 "bdev_set_qos_limit", 00:05:46.048 "bdev_set_qd_sampling_period", 00:05:46.048 "bdev_get_bdevs", 00:05:46.048 "bdev_reset_iostat", 00:05:46.048 "bdev_get_iostat", 00:05:46.048 "bdev_examine", 00:05:46.048 "bdev_wait_for_examine", 00:05:46.048 "bdev_set_options", 00:05:46.048 "notify_get_notifications", 00:05:46.048 "notify_get_types", 00:05:46.048 "accel_get_stats", 00:05:46.048 "accel_set_options", 00:05:46.048 "accel_set_driver", 00:05:46.048 "accel_crypto_key_destroy", 00:05:46.048 "accel_crypto_keys_get", 00:05:46.048 "accel_crypto_key_create", 00:05:46.048 "accel_assign_opc", 00:05:46.048 "accel_get_module_info", 00:05:46.048 "accel_get_opc_assignments", 00:05:46.048 "vmd_rescan", 00:05:46.048 "vmd_remove_device", 00:05:46.048 "vmd_enable", 00:05:46.048 "sock_set_default_impl", 00:05:46.048 "sock_impl_set_options", 00:05:46.048 "sock_impl_get_options", 00:05:46.048 "iobuf_get_stats", 00:05:46.048 "iobuf_set_options", 00:05:46.048 "framework_get_pci_devices", 00:05:46.048 "framework_get_config", 00:05:46.048 "framework_get_subsystems", 00:05:46.048 "trace_get_info", 00:05:46.048 "trace_get_tpoint_group_mask", 00:05:46.048 "trace_disable_tpoint_group", 00:05:46.048 "trace_enable_tpoint_group", 00:05:46.048 "trace_clear_tpoint_mask", 00:05:46.048 "trace_set_tpoint_mask", 00:05:46.048 "spdk_get_version", 00:05:46.048 "rpc_get_methods" 00:05:46.048 ] 00:05:46.048 11:29:15 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:46.048 11:29:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:46.048 11:29:15 -- common/autotest_common.sh@10 -- # set +x 00:05:46.048 11:29:15 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:46.048 11:29:15 -- spdkcli/tcp.sh@38 -- # killprocess 
2173705 00:05:46.048 11:29:15 -- common/autotest_common.sh@926 -- # '[' -z 2173705 ']' 00:05:46.048 11:29:15 -- common/autotest_common.sh@930 -- # kill -0 2173705 00:05:46.048 11:29:15 -- common/autotest_common.sh@931 -- # uname 00:05:46.049 11:29:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:46.049 11:29:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2173705 00:05:46.049 11:29:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:46.049 11:29:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:46.049 11:29:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2173705' 00:05:46.049 killing process with pid 2173705 00:05:46.049 11:29:15 -- common/autotest_common.sh@945 -- # kill 2173705 00:05:46.049 11:29:15 -- common/autotest_common.sh@950 -- # wait 2173705 00:05:46.615 00:05:46.615 real 0m1.530s 00:05:46.615 user 0m2.826s 00:05:46.615 sys 0m0.501s 00:05:46.615 11:29:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.615 11:29:15 -- common/autotest_common.sh@10 -- # set +x 00:05:46.615 ************************************ 00:05:46.615 END TEST spdkcli_tcp 00:05:46.615 ************************************ 00:05:46.615 11:29:15 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:46.615 11:29:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:46.615 11:29:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:46.615 11:29:15 -- common/autotest_common.sh@10 -- # set +x 00:05:46.615 ************************************ 00:05:46.615 START TEST dpdk_mem_utility 00:05:46.615 ************************************ 00:05:46.615 11:29:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:46.615 * Looking for test storage... 00:05:46.615 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:05:46.615 11:29:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:46.615 11:29:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2174039 00:05:46.615 11:29:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2174039 00:05:46.615 11:29:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:46.615 11:29:15 -- common/autotest_common.sh@819 -- # '[' -z 2174039 ']' 00:05:46.615 11:29:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.615 11:29:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:46.615 11:29:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.615 11:29:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:46.615 11:29:15 -- common/autotest_common.sh@10 -- # set +x 00:05:46.615 [2024-07-21 11:29:15.959134] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
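While this dpdk_mem_utility target initializes, it is worth spelling out how the spdkcli_tcp run above talked TCP to a server that only listens on a UNIX socket: a socat process (pid 2173866) bridged 127.0.0.1:9998 to /var/tmp/spdk.sock, and rpc.py retried against the TCP side. The bridge, as run in the trace:

    # Bridge TCP 127.0.0.1:9998 to the target's UNIX-domain RPC socket.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # Drive RPCs through the bridge: up to 100 connect retries, 2 s timeout each.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
        -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid" 2>/dev/null || true             # bridge may exit on its own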
00:05:46.615 [2024-07-21 11:29:15.959194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2174039 ] 00:05:46.615 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.908 [2024-07-21 11:29:16.044943] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.908 [2024-07-21 11:29:16.082955] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:46.908 [2024-07-21 11:29:16.083084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.473 11:29:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:47.473 11:29:16 -- common/autotest_common.sh@852 -- # return 0 00:05:47.473 11:29:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:47.473 11:29:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:47.473 11:29:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:47.473 11:29:16 -- common/autotest_common.sh@10 -- # set +x 00:05:47.473 { 00:05:47.473 "filename": "/tmp/spdk_mem_dump.txt" 00:05:47.473 } 00:05:47.473 11:29:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:47.473 11:29:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:47.473 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:47.473 1 heaps totaling size 814.000000 MiB 00:05:47.473 size: 814.000000 MiB heap id: 0 00:05:47.473 end heaps---------- 00:05:47.473 8 mempools totaling size 598.116089 MiB 00:05:47.473 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:47.473 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:47.473 size: 84.521057 MiB name: bdev_io_2174039 00:05:47.473 size: 51.011292 MiB name: evtpool_2174039 00:05:47.473 size: 50.003479 MiB name: msgpool_2174039 00:05:47.473 size: 21.763794 MiB name: PDU_Pool 00:05:47.473 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:47.473 size: 0.026123 MiB name: Session_Pool 00:05:47.473 end mempools------- 00:05:47.473 6 memzones totaling size 4.142822 MiB 00:05:47.473 size: 1.000366 MiB name: RG_ring_0_2174039 00:05:47.473 size: 1.000366 MiB name: RG_ring_1_2174039 00:05:47.473 size: 1.000366 MiB name: RG_ring_4_2174039 00:05:47.473 size: 1.000366 MiB name: RG_ring_5_2174039 00:05:47.473 size: 0.125366 MiB name: RG_ring_2_2174039 00:05:47.473 size: 0.015991 MiB name: RG_ring_3_2174039 00:05:47.473 end memzones------- 00:05:47.473 11:29:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:47.473 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:47.473 list of free elements. 
size: 12.519348 MiB 00:05:47.473 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:47.473 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:47.473 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:47.473 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:47.473 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:47.473 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:47.473 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:47.473 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:47.473 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:47.473 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:47.473 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:47.473 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:47.473 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:47.473 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:47.473 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:47.473 list of standard malloc elements. size: 199.218079 MiB 00:05:47.473 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:47.473 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:47.473 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:47.473 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:47.473 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:47.473 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:47.473 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:47.473 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:47.473 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:47.473 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:47.473 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:47.473 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:47.473 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:47.473 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:47.473 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:47.473 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:47.473 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:47.473 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:47.473 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:47.473 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:47.473 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:47.473 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:47.473 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:47.473 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:47.473 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:47.473 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:47.473 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:47.473 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:47.473 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:47.473 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:47.473 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:47.473 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:47.473 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:47.473 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:47.473 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:47.473 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:47.473 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:47.473 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:47.473 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:47.473 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:47.473 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:47.473 list of memzone associated elements. size: 602.262573 MiB 00:05:47.473 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:47.473 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:47.473 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:47.473 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:47.473 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:47.473 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2174039_0 00:05:47.473 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:47.473 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2174039_0 00:05:47.473 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:47.473 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2174039_0 00:05:47.473 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:47.473 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:47.473 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:47.473 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:47.473 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:47.473 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2174039 00:05:47.473 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:47.473 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2174039 00:05:47.473 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:47.473 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2174039 00:05:47.473 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:47.473 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:47.473 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:47.473 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:47.473 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:47.473 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:47.473 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:47.473 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:47.473 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:47.473 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2174039 00:05:47.473 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:47.473 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2174039 00:05:47.473 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:47.473 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2174039 00:05:47.473 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:47.473 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2174039 00:05:47.473 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:47.473 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2174039 00:05:47.473 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:47.473 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:47.473 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:47.473 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:47.473 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:47.473 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:47.473 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:47.473 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2174039 00:05:47.473 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:47.473 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:47.473 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:47.473 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:47.473 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:47.473 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2174039 00:05:47.473 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:47.473 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:47.473 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:47.474 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2174039 00:05:47.474 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:47.474 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2174039 00:05:47.474 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:47.474 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:47.474 11:29:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:47.474 11:29:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2174039 00:05:47.474 11:29:16 -- common/autotest_common.sh@926 -- # '[' -z 2174039 ']' 00:05:47.474 11:29:16 -- common/autotest_common.sh@930 -- # kill -0 2174039 00:05:47.474 11:29:16 -- common/autotest_common.sh@931 -- # uname 00:05:47.474 11:29:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:47.474 11:29:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2174039 00:05:47.731 11:29:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:47.731 11:29:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:47.731 11:29:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2174039' 00:05:47.731 killing process with pid 2174039 00:05:47.731 11:29:16 -- common/autotest_common.sh@945 -- # kill 2174039 00:05:47.731 11:29:16 -- common/autotest_common.sh@950 -- # wait 2174039 00:05:47.988 00:05:47.988 real 0m1.390s 00:05:47.988 user 0m1.401s 00:05:47.988 sys 0m0.465s 00:05:47.988 11:29:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.988 11:29:17 -- common/autotest_common.sh@10 -- # set +x 00:05:47.988 ************************************ 00:05:47.988 END TEST dpdk_mem_utility 00:05:47.988 ************************************ 00:05:47.988 11:29:17 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:47.988 11:29:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:47.988 11:29:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:47.988 11:29:17 -- common/autotest_common.sh@10 -- # set +x 00:05:47.988 
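The memory report above took two steps: the env_dpdk_get_mem_stats RPC makes the target write a raw dump (the reply names /tmp/spdk_mem_dump.txt), and dpdk_mem_info.py renders it, plain for the heap/mempool/memzone summary and with `-m 0` for the per-element view of heap id 0. As a sketch against a running target:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk

    # Ask the target to dump its DPDK memory state; the reply names the dump file.
    "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats     # -> {"filename": "/tmp/spdk_mem_dump.txt"}

    "$SPDK/scripts/dpdk_mem_info.py"                  # heap/mempool/memzone summary
    "$SPDK/scripts/dpdk_mem_info.py" -m 0             # element-level detail for heap id 0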
************************************ 00:05:47.988 START TEST event 00:05:47.988 ************************************ 00:05:47.988 11:29:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:47.988 * Looking for test storage... 00:05:47.988 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:05:47.988 11:29:17 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:47.988 11:29:17 -- bdev/nbd_common.sh@6 -- # set -e 00:05:47.988 11:29:17 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:47.988 11:29:17 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:47.988 11:29:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:47.988 11:29:17 -- common/autotest_common.sh@10 -- # set +x 00:05:47.988 ************************************ 00:05:47.988 START TEST event_perf 00:05:47.988 ************************************ 00:05:47.988 11:29:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:47.988 Running I/O for 1 seconds...[2024-07-21 11:29:17.362915] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:47.988 [2024-07-21 11:29:17.362994] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2174363 ] 00:05:47.988 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.246 [2024-07-21 11:29:17.451185] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:48.246 [2024-07-21 11:29:17.489914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.246 [2024-07-21 11:29:17.490015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:48.246 [2024-07-21 11:29:17.490104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:48.246 [2024-07-21 11:29:17.490116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.179 Running I/O for 1 seconds... 00:05:49.179 lcore 0: 208826 00:05:49.179 lcore 1: 208824 00:05:49.179 lcore 2: 208825 00:05:49.179 lcore 3: 208826 00:05:49.179 done. 
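Each of the four lcore lines above is one reactor's event count for the 1-second run, so total throughput is their sum, about 835k events here. A throwaway way to total a run directly (the `lcore N: count` output format is taken from the lines above; stderr is merged on the assumption that notices may land there):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 2>&1 |
        awk '/^lcore [0-9]+:/ { total += $3 } END { printf "total: %d events/sec\n", total }'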
00:05:49.179 00:05:49.179 real 0m1.211s 00:05:49.179 user 0m4.101s 00:05:49.179 sys 0m0.105s 00:05:49.179 11:29:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.179 11:29:18 -- common/autotest_common.sh@10 -- # set +x 00:05:49.179 ************************************ 00:05:49.179 END TEST event_perf 00:05:49.179 ************************************ 00:05:49.179 11:29:18 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:49.179 11:29:18 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:49.179 11:29:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:49.179 11:29:18 -- common/autotest_common.sh@10 -- # set +x 00:05:49.179 ************************************ 00:05:49.179 START TEST event_reactor 00:05:49.179 ************************************ 00:05:49.179 11:29:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:49.437 [2024-07-21 11:29:18.623260] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:49.437 [2024-07-21 11:29:18.623352] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2174652 ] 00:05:49.437 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.437 [2024-07-21 11:29:18.709851] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.437 [2024-07-21 11:29:18.747659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.810 test_start 00:05:50.810 oneshot 00:05:50.810 tick 100 00:05:50.810 tick 100 00:05:50.810 tick 250 00:05:50.810 tick 100 00:05:50.810 tick 100 00:05:50.810 tick 100 00:05:50.810 tick 250 00:05:50.810 tick 500 00:05:50.810 tick 100 00:05:50.810 tick 100 00:05:50.810 tick 250 00:05:50.810 tick 100 00:05:50.810 tick 100 00:05:50.810 test_end 00:05:50.810 00:05:50.810 real 0m1.208s 00:05:50.810 user 0m1.103s 00:05:50.810 sys 0m0.100s 00:05:50.810 11:29:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.810 11:29:19 -- common/autotest_common.sh@10 -- # set +x 00:05:50.810 ************************************ 00:05:50.810 END TEST event_reactor 00:05:50.810 ************************************ 00:05:50.810 11:29:19 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:50.810 11:29:19 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:50.810 11:29:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:50.810 11:29:19 -- common/autotest_common.sh@10 -- # set +x 00:05:50.810 ************************************ 00:05:50.810 START TEST event_reactor_perf 00:05:50.810 ************************************ 00:05:50.810 11:29:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:50.810 [2024-07-21 11:29:19.879165] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:50.810 [2024-07-21 11:29:19.879256] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2174865 ] 00:05:50.810 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.810 [2024-07-21 11:29:19.963827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.810 [2024-07-21 11:29:20.001482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.743 test_start 00:05:51.743 test_end 00:05:51.743 Performance: 508716 events per second 00:05:51.743 00:05:51.743 real 0m1.207s 00:05:51.743 user 0m1.105s 00:05:51.743 sys 0m0.098s 00:05:51.743 11:29:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.743 11:29:21 -- common/autotest_common.sh@10 -- # set +x 00:05:51.743 ************************************ 00:05:51.743 END TEST event_reactor_perf 00:05:51.743 ************************************ 00:05:51.743 11:29:21 -- event/event.sh@49 -- # uname -s 00:05:51.743 11:29:21 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:51.743 11:29:21 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:51.743 11:29:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:51.743 11:29:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:51.743 11:29:21 -- common/autotest_common.sh@10 -- # set +x 00:05:51.743 ************************************ 00:05:51.743 START TEST event_scheduler 00:05:51.743 ************************************ 00:05:51.743 11:29:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:52.002 * Looking for test storage... 00:05:52.002 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:05:52.002 11:29:21 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:52.002 11:29:21 -- scheduler/scheduler.sh@35 -- # scheduler_pid=2175105 00:05:52.002 11:29:21 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.002 11:29:21 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:52.002 11:29:21 -- scheduler/scheduler.sh@37 -- # waitforlisten 2175105 00:05:52.002 11:29:21 -- common/autotest_common.sh@819 -- # '[' -z 2175105 ']' 00:05:52.002 11:29:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.002 11:29:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:52.002 11:29:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.002 11:29:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:52.002 11:29:21 -- common/autotest_common.sh@10 -- # set +x 00:05:52.002 [2024-07-21 11:29:21.256769] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:52.002 [2024-07-21 11:29:21.256829] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2175105 ] 00:05:52.002 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.002 [2024-07-21 11:29:21.338321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:52.002 [2024-07-21 11:29:21.377443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.002 [2024-07-21 11:29:21.377526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.002 [2024-07-21 11:29:21.377541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:52.002 [2024-07-21 11:29:21.377543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.933 11:29:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:52.933 11:29:22 -- common/autotest_common.sh@852 -- # return 0 00:05:52.933 11:29:22 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:52.933 11:29:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:52.933 11:29:22 -- common/autotest_common.sh@10 -- # set +x 00:05:52.933 POWER: Env isn't set yet! 00:05:52.933 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:52.933 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:52.933 POWER: Cannot set governor of lcore 0 to userspace 00:05:52.933 POWER: Attempting to initialise PSTAT power management... 00:05:52.933 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:52.933 POWER: Initialized successfully for lcore 0 power management 00:05:52.933 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:52.933 POWER: Initialized successfully for lcore 1 power management 00:05:52.933 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:52.933 POWER: Initialized successfully for lcore 2 power management 00:05:52.933 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:52.933 POWER: Initialized successfully for lcore 3 power management 00:05:52.933 [2024-07-21 11:29:22.096867] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:52.933 [2024-07-21 11:29:22.096884] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:52.933 [2024-07-21 11:29:22.096894] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:52.933 11:29:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:52.933 11:29:22 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:52.933 11:29:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:52.933 11:29:22 -- common/autotest_common.sh@10 -- # set +x 00:05:52.933 [2024-07-21 11:29:22.160642] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
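The ordering above is forced by `--wait-for-rpc`: the app pauses before subsystem init, so the test can install the dynamic scheduler first (framework_set_scheduler must run before init) and only then trigger framework_start_init, which is when the ACPI/PSTAT governor setup just logged takes place. The same handshake from a shell, assuming the default /var/tmp/spdk.sock socket the trace waits on:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"

    "$SPDK/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &

    sleep 1                                  # crude stand-in for waitforlisten
    $RPC framework_set_scheduler dynamic     # must precede subsystem init
    $RPC framework_start_init                # init runs; governors switch to 'performance'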
00:05:52.933 11:29:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:52.933 11:29:22 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:52.933 11:29:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:52.933 11:29:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:52.933 11:29:22 -- common/autotest_common.sh@10 -- # set +x 00:05:52.933 ************************************ 00:05:52.933 START TEST scheduler_create_thread 00:05:52.933 ************************************ 00:05:52.933 11:29:22 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:05:52.933 11:29:22 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:52.933 11:29:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:52.933 11:29:22 -- common/autotest_common.sh@10 -- # set +x 00:05:52.933 2 00:05:52.933 11:29:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:52.933 11:29:22 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:52.933 11:29:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:52.933 11:29:22 -- common/autotest_common.sh@10 -- # set +x 00:05:52.933 3 00:05:52.933 11:29:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:52.933 11:29:22 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:52.933 11:29:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:52.933 11:29:22 -- common/autotest_common.sh@10 -- # set +x 00:05:52.933 4 00:05:52.933 11:29:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:52.933 11:29:22 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:52.933 11:29:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:52.933 11:29:22 -- common/autotest_common.sh@10 -- # set +x 00:05:52.933 5 00:05:52.933 11:29:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:52.933 11:29:22 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:52.933 11:29:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:52.933 11:29:22 -- common/autotest_common.sh@10 -- # set +x 00:05:52.933 6 00:05:52.933 11:29:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:52.933 11:29:22 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:52.933 11:29:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:52.933 11:29:22 -- common/autotest_common.sh@10 -- # set +x 00:05:52.933 7 00:05:52.933 11:29:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:52.933 11:29:22 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:52.933 11:29:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:52.933 11:29:22 -- common/autotest_common.sh@10 -- # set +x 00:05:52.933 8 00:05:52.933 11:29:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:52.933 11:29:22 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:52.933 11:29:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:52.933 11:29:22 -- common/autotest_common.sh@10 -- # set +x 00:05:52.933 9 00:05:52.933 
11:29:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:52.933 11:29:22 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:52.933 11:29:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:52.933 11:29:22 -- common/autotest_common.sh@10 -- # set +x 00:05:52.933 10 00:05:52.933 11:29:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:52.934 11:29:22 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:52.934 11:29:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:52.934 11:29:22 -- common/autotest_common.sh@10 -- # set +x 00:05:52.934 11:29:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:52.934 11:29:22 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:52.934 11:29:22 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:52.934 11:29:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:52.934 11:29:22 -- common/autotest_common.sh@10 -- # set +x 00:05:53.862 11:29:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:53.862 11:29:23 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:53.862 11:29:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:53.862 11:29:23 -- common/autotest_common.sh@10 -- # set +x 00:05:55.228 11:29:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:55.228 11:29:24 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:55.228 11:29:24 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:55.228 11:29:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:55.228 11:29:24 -- common/autotest_common.sh@10 -- # set +x 00:05:56.158 11:29:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:56.158 00:05:56.158 real 0m3.379s 00:05:56.158 user 0m0.020s 00:05:56.158 sys 0m0.009s 00:05:56.158 11:29:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.158 11:29:25 -- common/autotest_common.sh@10 -- # set +x 00:05:56.158 ************************************ 00:05:56.159 END TEST scheduler_create_thread 00:05:56.159 ************************************ 00:05:56.415 11:29:25 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:56.415 11:29:25 -- scheduler/scheduler.sh@46 -- # killprocess 2175105 00:05:56.415 11:29:25 -- common/autotest_common.sh@926 -- # '[' -z 2175105 ']' 00:05:56.415 11:29:25 -- common/autotest_common.sh@930 -- # kill -0 2175105 00:05:56.415 11:29:25 -- common/autotest_common.sh@931 -- # uname 00:05:56.415 11:29:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:56.415 11:29:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2175105 00:05:56.416 11:29:25 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:56.416 11:29:25 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:56.416 11:29:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2175105' 00:05:56.416 killing process with pid 2175105 00:05:56.416 11:29:25 -- common/autotest_common.sh@945 -- # kill 2175105 00:05:56.416 11:29:25 -- common/autotest_common.sh@950 -- # wait 2175105 00:05:56.673 [2024-07-21 11:29:25.928621] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
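Everything scheduler_create_thread did above went through an out-of-tree RPC plugin: `--plugin scheduler_plugin` teaches rpc.py the scheduler_thread_create/_set_active/_delete methods (the harness puts the plugin module on PYTHONPATH). The shape of the calls, mirroring the trace, where thread ids 11 and 12 happened to be the values returned in this run:

    RPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py --plugin scheduler_plugin"

    $RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100   # busy thread pinned to core 0
    $RPC scheduler_thread_create -n idle_pinned   -m 0x1 -a 0     # idle thread pinned to core 0
    tid=$($RPC scheduler_thread_create -n half_active -a 0)       # unpinned; prints the new thread id
    $RPC scheduler_thread_set_active "$tid" 50                    # re-weight: 50% active
    tid=$($RPC scheduler_thread_create -n deleted -a 100)
    $RPC scheduler_thread_delete "$tid"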
00:05:56.673 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:56.673 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:56.673 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:56.673 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:56.673 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:56.673 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:56.673 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:56.673 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:56.930 00:05:56.930 real 0m5.035s 00:05:56.930 user 0m10.359s 00:05:56.930 sys 0m0.424s 00:05:56.930 11:29:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.930 11:29:26 -- common/autotest_common.sh@10 -- # set +x 00:05:56.930 ************************************ 00:05:56.930 END TEST event_scheduler 00:05:56.930 ************************************ 00:05:56.930 11:29:26 -- event/event.sh@51 -- # modprobe -n nbd 00:05:56.930 11:29:26 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:56.930 11:29:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:56.930 11:29:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:56.930 11:29:26 -- common/autotest_common.sh@10 -- # set +x 00:05:56.930 ************************************ 00:05:56.930 START TEST app_repeat 00:05:56.930 ************************************ 00:05:56.930 11:29:26 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:05:56.930 11:29:26 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.930 11:29:26 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.930 11:29:26 -- event/event.sh@13 -- # local nbd_list 00:05:56.930 11:29:26 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:56.930 11:29:26 -- event/event.sh@14 -- # local bdev_list 00:05:56.930 11:29:26 -- event/event.sh@15 -- # local repeat_times=4 00:05:56.930 11:29:26 -- event/event.sh@17 -- # modprobe nbd 00:05:56.930 11:29:26 -- event/event.sh@19 -- # repeat_pid=2176116 00:05:56.930 11:29:26 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:56.930 11:29:26 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:56.930 11:29:26 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2176116' 00:05:56.930 Process app_repeat pid: 2176116 00:05:56.930 11:29:26 -- event/event.sh@23 -- # for i in {0..2} 00:05:56.930 11:29:26 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:56.930 spdk_app_start Round 0 00:05:56.930 11:29:26 -- event/event.sh@25 -- # waitforlisten 2176116 /var/tmp/spdk-nbd.sock 00:05:56.930 11:29:26 -- common/autotest_common.sh@819 -- # '[' -z 2176116 ']' 00:05:56.930 11:29:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:56.930 11:29:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:56.930 11:29:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:56.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:56.930 11:29:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:56.930 11:29:26 -- common/autotest_common.sh@10 -- # set +x 00:05:56.930 [2024-07-21 11:29:26.240465] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:56.930 [2024-07-21 11:29:26.240533] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2176116 ] 00:05:56.930 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.930 [2024-07-21 11:29:26.323998] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.187 [2024-07-21 11:29:26.362901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.187 [2024-07-21 11:29:26.362904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.751 11:29:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:57.751 11:29:27 -- common/autotest_common.sh@852 -- # return 0 00:05:57.751 11:29:27 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.009 Malloc0 00:05:58.009 11:29:27 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.009 Malloc1 00:05:58.009 11:29:27 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.009 11:29:27 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.009 11:29:27 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.009 11:29:27 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:58.009 11:29:27 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.009 11:29:27 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:58.009 11:29:27 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.009 11:29:27 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.009 11:29:27 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.009 11:29:27 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:58.009 11:29:27 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.009 11:29:27 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:58.009 11:29:27 -- bdev/nbd_common.sh@12 -- # local i 00:05:58.009 11:29:27 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:58.009 11:29:27 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.009 11:29:27 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:58.266 /dev/nbd0 00:05:58.266 11:29:27 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:58.266 11:29:27 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:58.266 11:29:27 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:58.266 11:29:27 -- common/autotest_common.sh@857 -- # local i 00:05:58.266 11:29:27 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:58.266 11:29:27 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:58.266 11:29:27 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:58.266 11:29:27 -- common/autotest_common.sh@861 -- 
# break 00:05:58.266 11:29:27 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:58.266 11:29:27 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:58.266 11:29:27 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:58.266 1+0 records in 00:05:58.266 1+0 records out 00:05:58.266 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199579 s, 20.5 MB/s 00:05:58.266 11:29:27 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:58.266 11:29:27 -- common/autotest_common.sh@874 -- # size=4096 00:05:58.266 11:29:27 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:58.266 11:29:27 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:58.266 11:29:27 -- common/autotest_common.sh@877 -- # return 0 00:05:58.266 11:29:27 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.266 11:29:27 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.266 11:29:27 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:58.522 /dev/nbd1 00:05:58.522 11:29:27 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:58.522 11:29:27 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:58.522 11:29:27 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:58.522 11:29:27 -- common/autotest_common.sh@857 -- # local i 00:05:58.522 11:29:27 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:58.522 11:29:27 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:58.522 11:29:27 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:58.522 11:29:27 -- common/autotest_common.sh@861 -- # break 00:05:58.522 11:29:27 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:58.522 11:29:27 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:58.522 11:29:27 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:58.522 1+0 records in 00:05:58.522 1+0 records out 00:05:58.522 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238243 s, 17.2 MB/s 00:05:58.522 11:29:27 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:58.522 11:29:27 -- common/autotest_common.sh@874 -- # size=4096 00:05:58.522 11:29:27 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:58.522 11:29:27 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:58.522 11:29:27 -- common/autotest_common.sh@877 -- # return 0 00:05:58.523 11:29:27 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.523 11:29:27 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.523 11:29:27 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.523 11:29:27 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.523 11:29:27 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.779 11:29:27 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:58.779 { 00:05:58.779 "nbd_device": "/dev/nbd0", 00:05:58.779 "bdev_name": "Malloc0" 00:05:58.779 }, 00:05:58.779 { 00:05:58.779 "nbd_device": "/dev/nbd1", 00:05:58.779 "bdev_name": "Malloc1" 00:05:58.779 } 00:05:58.779 ]' 
00:05:58.779 11:29:27 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:58.779 { 00:05:58.779 "nbd_device": "/dev/nbd0", 00:05:58.779 "bdev_name": "Malloc0" 00:05:58.779 }, 00:05:58.779 { 00:05:58.779 "nbd_device": "/dev/nbd1", 00:05:58.779 "bdev_name": "Malloc1" 00:05:58.779 } 00:05:58.779 ]' 00:05:58.779 11:29:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:58.779 11:29:28 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:58.779 /dev/nbd1' 00:05:58.779 11:29:28 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:58.779 /dev/nbd1' 00:05:58.779 11:29:28 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:58.779 11:29:28 -- bdev/nbd_common.sh@65 -- # count=2 00:05:58.779 11:29:28 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:58.779 11:29:28 -- bdev/nbd_common.sh@95 -- # count=2 00:05:58.779 11:29:28 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:58.779 11:29:28 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:58.779 11:29:28 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.779 11:29:28 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.779 11:29:28 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:58.779 11:29:28 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:58.779 11:29:28 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:58.779 11:29:28 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:58.779 256+0 records in 00:05:58.779 256+0 records out 00:05:58.779 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110061 s, 95.3 MB/s 00:05:58.779 11:29:28 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.779 11:29:28 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:58.779 256+0 records in 00:05:58.779 256+0 records out 00:05:58.779 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019435 s, 54.0 MB/s 00:05:58.779 11:29:28 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.779 11:29:28 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:58.779 256+0 records in 00:05:58.779 256+0 records out 00:05:58.779 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0180593 s, 58.1 MB/s 00:05:58.779 11:29:28 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:58.779 11:29:28 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.779 11:29:28 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.779 11:29:28 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:58.779 11:29:28 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:58.779 11:29:28 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:58.779 11:29:28 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:58.779 11:29:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.779 11:29:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:58.779 11:29:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.779 11:29:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 
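What Round 0 is doing here is nbd_common.sh's write-then-verify pass over the two exported devices. A condensed sketch, assuming Malloc0 and Malloc1 are already exported as /dev/nbd0 and /dev/nbd1; the temp-file path is the one in the log, and the rm in the next log line is the tail of the same helper:

    tmp=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    # write pass: 256 x 4 KiB of random data, replayed to each nbd with O_DIRECT
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for nbd in "${nbd_list[@]}"; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
    done

    # verify pass: the first 1 MiB read back from each device must match byte-for-byte
    for nbd in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$nbd"
    done
    rm "$tmp"    # clean up, as the following log line shows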
00:05:58.779 11:29:28 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:58.779 11:29:28 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:58.779 11:29:28 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.779 11:29:28 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.779 11:29:28 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:58.779 11:29:28 -- bdev/nbd_common.sh@51 -- # local i 00:05:58.779 11:29:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.779 11:29:28 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:59.036 11:29:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:59.036 11:29:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:59.036 11:29:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:59.036 11:29:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.036 11:29:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.036 11:29:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:59.036 11:29:28 -- bdev/nbd_common.sh@41 -- # break 00:05:59.036 11:29:28 -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.036 11:29:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.036 11:29:28 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:59.292 11:29:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:59.292 11:29:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:59.292 11:29:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:59.292 11:29:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.292 11:29:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.292 11:29:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:59.292 11:29:28 -- bdev/nbd_common.sh@41 -- # break 00:05:59.292 11:29:28 -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.292 11:29:28 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:59.292 11:29:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.292 11:29:28 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:59.292 11:29:28 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:59.292 11:29:28 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:59.292 11:29:28 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:59.292 11:29:28 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:59.292 11:29:28 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:59.292 11:29:28 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.292 11:29:28 -- bdev/nbd_common.sh@65 -- # true 00:05:59.292 11:29:28 -- bdev/nbd_common.sh@65 -- # count=0 00:05:59.292 11:29:28 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:59.292 11:29:28 -- bdev/nbd_common.sh@104 -- # count=0 00:05:59.292 11:29:28 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:59.292 11:29:28 -- bdev/nbd_common.sh@109 -- # return 0 00:05:59.292 11:29:28 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:59.549 11:29:28 -- event/event.sh@35 -- # sleep 3 00:05:59.805 [2024-07-21 11:29:29.071481] app.c: 798:spdk_app_start: *NOTICE*: Total cores 
available: 2 00:05:59.805 [2024-07-21 11:29:29.105481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.805 [2024-07-21 11:29:29.105484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.805 [2024-07-21 11:29:29.146738] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:59.805 [2024-07-21 11:29:29.146781] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:03.080 11:29:31 -- event/event.sh@23 -- # for i in {0..2} 00:06:03.080 11:29:31 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:03.080 spdk_app_start Round 1 00:06:03.080 11:29:31 -- event/event.sh@25 -- # waitforlisten 2176116 /var/tmp/spdk-nbd.sock 00:06:03.080 11:29:31 -- common/autotest_common.sh@819 -- # '[' -z 2176116 ']' 00:06:03.080 11:29:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:03.080 11:29:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:03.080 11:29:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:03.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:03.080 11:29:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:03.080 11:29:31 -- common/autotest_common.sh@10 -- # set +x 00:06:03.080 11:29:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:03.080 11:29:32 -- common/autotest_common.sh@852 -- # return 0 00:06:03.080 11:29:32 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.080 Malloc0 00:06:03.080 11:29:32 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.080 Malloc1 00:06:03.080 11:29:32 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.080 11:29:32 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.080 11:29:32 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.080 11:29:32 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:03.080 11:29:32 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.080 11:29:32 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:03.080 11:29:32 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.080 11:29:32 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.080 11:29:32 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.080 11:29:32 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:03.080 11:29:32 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.080 11:29:32 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:03.080 11:29:32 -- bdev/nbd_common.sh@12 -- # local i 00:06:03.080 11:29:32 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:03.080 11:29:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.080 11:29:32 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:03.339 /dev/nbd0 00:06:03.339 11:29:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:03.339 11:29:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:03.339 
11:29:32 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:03.339 11:29:32 -- common/autotest_common.sh@857 -- # local i 00:06:03.339 11:29:32 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:03.339 11:29:32 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:03.339 11:29:32 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:03.339 11:29:32 -- common/autotest_common.sh@861 -- # break 00:06:03.339 11:29:32 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:03.339 11:29:32 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:03.339 11:29:32 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:03.339 1+0 records in 00:06:03.339 1+0 records out 00:06:03.339 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217714 s, 18.8 MB/s 00:06:03.339 11:29:32 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:03.339 11:29:32 -- common/autotest_common.sh@874 -- # size=4096 00:06:03.339 11:29:32 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:03.339 11:29:32 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:03.339 11:29:32 -- common/autotest_common.sh@877 -- # return 0 00:06:03.339 11:29:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.339 11:29:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.339 11:29:32 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:03.636 /dev/nbd1 00:06:03.636 11:29:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:03.636 11:29:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:03.636 11:29:32 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:03.636 11:29:32 -- common/autotest_common.sh@857 -- # local i 00:06:03.636 11:29:32 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:03.636 11:29:32 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:03.636 11:29:32 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:03.636 11:29:32 -- common/autotest_common.sh@861 -- # break 00:06:03.636 11:29:32 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:03.636 11:29:32 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:03.636 11:29:32 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:03.636 1+0 records in 00:06:03.636 1+0 records out 00:06:03.636 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265185 s, 15.4 MB/s 00:06:03.636 11:29:32 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:03.636 11:29:32 -- common/autotest_common.sh@874 -- # size=4096 00:06:03.636 11:29:32 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:03.636 11:29:32 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:03.636 11:29:32 -- common/autotest_common.sh@877 -- # return 0 00:06:03.636 11:29:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.636 11:29:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.636 11:29:32 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.636 11:29:32 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.636 
11:29:32 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.636 11:29:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:03.636 { 00:06:03.636 "nbd_device": "/dev/nbd0", 00:06:03.636 "bdev_name": "Malloc0" 00:06:03.636 }, 00:06:03.636 { 00:06:03.636 "nbd_device": "/dev/nbd1", 00:06:03.636 "bdev_name": "Malloc1" 00:06:03.636 } 00:06:03.636 ]' 00:06:03.636 11:29:32 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:03.636 { 00:06:03.636 "nbd_device": "/dev/nbd0", 00:06:03.636 "bdev_name": "Malloc0" 00:06:03.636 }, 00:06:03.636 { 00:06:03.636 "nbd_device": "/dev/nbd1", 00:06:03.636 "bdev_name": "Malloc1" 00:06:03.636 } 00:06:03.636 ]' 00:06:03.636 11:29:32 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.636 11:29:33 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:03.636 /dev/nbd1' 00:06:03.636 11:29:33 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.636 11:29:33 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:03.636 /dev/nbd1' 00:06:03.636 11:29:33 -- bdev/nbd_common.sh@65 -- # count=2 00:06:03.636 11:29:33 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:03.636 11:29:33 -- bdev/nbd_common.sh@95 -- # count=2 00:06:03.636 11:29:33 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:03.636 11:29:33 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:03.636 11:29:33 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.636 11:29:33 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.636 11:29:33 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:03.636 11:29:33 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:03.636 11:29:33 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:03.636 11:29:33 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:03.906 256+0 records in 00:06:03.906 256+0 records out 00:06:03.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102159 s, 103 MB/s 00:06:03.906 11:29:33 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.906 11:29:33 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:03.906 256+0 records in 00:06:03.906 256+0 records out 00:06:03.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0193047 s, 54.3 MB/s 00:06:03.906 11:29:33 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.906 11:29:33 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:03.906 256+0 records in 00:06:03.906 256+0 records out 00:06:03.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0211642 s, 49.5 MB/s 00:06:03.906 11:29:33 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:03.906 11:29:33 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.906 11:29:33 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.906 11:29:33 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:03.906 11:29:33 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:03.906 11:29:33 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:03.906 11:29:33 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:03.906 
11:29:33 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.906 11:29:33 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:03.906 11:29:33 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.906 11:29:33 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:03.906 11:29:33 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:03.906 11:29:33 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:03.906 11:29:33 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.906 11:29:33 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.906 11:29:33 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:03.906 11:29:33 -- bdev/nbd_common.sh@51 -- # local i 00:06:03.906 11:29:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.906 11:29:33 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:03.906 11:29:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:03.906 11:29:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:03.906 11:29:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:03.906 11:29:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.906 11:29:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.906 11:29:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:03.906 11:29:33 -- bdev/nbd_common.sh@41 -- # break 00:06:03.906 11:29:33 -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.906 11:29:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.906 11:29:33 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:04.163 11:29:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:04.163 11:29:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:04.163 11:29:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:04.163 11:29:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.163 11:29:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.163 11:29:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:04.163 11:29:33 -- bdev/nbd_common.sh@41 -- # break 00:06:04.163 11:29:33 -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.163 11:29:33 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.163 11:29:33 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.163 11:29:33 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.419 11:29:33 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:04.419 11:29:33 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:04.419 11:29:33 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.419 11:29:33 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:04.419 11:29:33 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:04.419 11:29:33 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.419 11:29:33 -- bdev/nbd_common.sh@65 -- # true 00:06:04.419 11:29:33 -- bdev/nbd_common.sh@65 -- # count=0 00:06:04.419 11:29:33 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:04.419 11:29:33 -- bdev/nbd_common.sh@104 -- # count=0 00:06:04.419 
11:29:33 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:04.419 11:29:33 -- bdev/nbd_common.sh@109 -- # return 0 00:06:04.419 11:29:33 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:04.675 11:29:33 -- event/event.sh@35 -- # sleep 3 00:06:04.933 [2024-07-21 11:29:34.107733] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.933 [2024-07-21 11:29:34.140291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.933 [2024-07-21 11:29:34.140294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.933 [2024-07-21 11:29:34.181132] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:04.933 [2024-07-21 11:29:34.181177] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:08.206 11:29:36 -- event/event.sh@23 -- # for i in {0..2} 00:06:08.206 11:29:36 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:08.206 spdk_app_start Round 2 00:06:08.206 11:29:36 -- event/event.sh@25 -- # waitforlisten 2176116 /var/tmp/spdk-nbd.sock 00:06:08.206 11:29:36 -- common/autotest_common.sh@819 -- # '[' -z 2176116 ']' 00:06:08.206 11:29:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:08.206 11:29:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:08.206 11:29:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:08.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:08.206 11:29:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:08.206 11:29:36 -- common/autotest_common.sh@10 -- # set +x 00:06:08.206 11:29:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:08.206 11:29:37 -- common/autotest_common.sh@852 -- # return 0 00:06:08.206 11:29:37 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.206 Malloc0 00:06:08.206 11:29:37 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.206 Malloc1 00:06:08.206 11:29:37 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.206 11:29:37 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.206 11:29:37 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.206 11:29:37 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:08.206 11:29:37 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.206 11:29:37 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:08.206 11:29:37 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.206 11:29:37 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.206 11:29:37 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.206 11:29:37 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:08.206 11:29:37 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.206 11:29:37 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:08.206 11:29:37 -- bdev/nbd_common.sh@12 -- # local i 00:06:08.206 11:29:37 -- bdev/nbd_common.sh@14 -- 
# (( i = 0 )) 00:06:08.206 11:29:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.206 11:29:37 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:08.206 /dev/nbd0 00:06:08.206 11:29:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:08.206 11:29:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:08.206 11:29:37 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:08.206 11:29:37 -- common/autotest_common.sh@857 -- # local i 00:06:08.206 11:29:37 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:08.206 11:29:37 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:08.206 11:29:37 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:08.462 11:29:37 -- common/autotest_common.sh@861 -- # break 00:06:08.462 11:29:37 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:08.462 11:29:37 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:08.462 11:29:37 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.462 1+0 records in 00:06:08.462 1+0 records out 00:06:08.462 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224992 s, 18.2 MB/s 00:06:08.462 11:29:37 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:08.462 11:29:37 -- common/autotest_common.sh@874 -- # size=4096 00:06:08.462 11:29:37 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:08.462 11:29:37 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:08.462 11:29:37 -- common/autotest_common.sh@877 -- # return 0 00:06:08.462 11:29:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.462 11:29:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.462 11:29:37 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:08.462 /dev/nbd1 00:06:08.462 11:29:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:08.462 11:29:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:08.462 11:29:37 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:08.462 11:29:37 -- common/autotest_common.sh@857 -- # local i 00:06:08.462 11:29:37 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:08.462 11:29:37 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:08.462 11:29:37 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:08.462 11:29:37 -- common/autotest_common.sh@861 -- # break 00:06:08.462 11:29:37 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:08.462 11:29:37 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:08.462 11:29:37 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.462 1+0 records in 00:06:08.462 1+0 records out 00:06:08.462 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234264 s, 17.5 MB/s 00:06:08.462 11:29:37 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:08.462 11:29:37 -- common/autotest_common.sh@874 -- # size=4096 00:06:08.462 11:29:37 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:08.462 11:29:37 -- 
common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:08.462 11:29:37 -- common/autotest_common.sh@877 -- # return 0 00:06:08.462 11:29:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.462 11:29:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.462 11:29:37 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.462 11:29:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.462 11:29:37 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.718 11:29:38 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:08.718 { 00:06:08.718 "nbd_device": "/dev/nbd0", 00:06:08.718 "bdev_name": "Malloc0" 00:06:08.718 }, 00:06:08.718 { 00:06:08.718 "nbd_device": "/dev/nbd1", 00:06:08.718 "bdev_name": "Malloc1" 00:06:08.718 } 00:06:08.718 ]' 00:06:08.718 11:29:38 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:08.718 { 00:06:08.718 "nbd_device": "/dev/nbd0", 00:06:08.718 "bdev_name": "Malloc0" 00:06:08.718 }, 00:06:08.718 { 00:06:08.718 "nbd_device": "/dev/nbd1", 00:06:08.718 "bdev_name": "Malloc1" 00:06:08.718 } 00:06:08.718 ]' 00:06:08.718 11:29:38 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.718 11:29:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:08.718 /dev/nbd1' 00:06:08.718 11:29:38 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:08.718 /dev/nbd1' 00:06:08.718 11:29:38 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.718 11:29:38 -- bdev/nbd_common.sh@65 -- # count=2 00:06:08.718 11:29:38 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:08.718 11:29:38 -- bdev/nbd_common.sh@95 -- # count=2 00:06:08.718 11:29:38 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:08.718 11:29:38 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:08.718 11:29:38 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.718 11:29:38 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:08.718 11:29:38 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:08.718 11:29:38 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:08.718 11:29:38 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:08.718 11:29:38 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:08.718 256+0 records in 00:06:08.718 256+0 records out 00:06:08.718 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114715 s, 91.4 MB/s 00:06:08.718 11:29:38 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:08.718 11:29:38 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:08.718 256+0 records in 00:06:08.718 256+0 records out 00:06:08.718 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131481 s, 79.8 MB/s 00:06:08.718 11:29:38 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:08.718 11:29:38 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:08.718 256+0 records in 00:06:08.718 256+0 records out 00:06:08.718 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0155852 s, 67.3 MB/s 00:06:08.718 11:29:38 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:08.718 11:29:38 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:08.718 11:29:38 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:08.718 11:29:38 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:08.718 11:29:38 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:08.718 11:29:38 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:08.718 11:29:38 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:08.718 11:29:38 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:08.718 11:29:38 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:08.718 11:29:38 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:08.718 11:29:38 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:08.718 11:29:38 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:08.976 11:29:38 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:08.976 11:29:38 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.976 11:29:38 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.976 11:29:38 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:08.976 11:29:38 -- bdev/nbd_common.sh@51 -- # local i 00:06:08.976 11:29:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.976 11:29:38 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:08.976 11:29:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:08.976 11:29:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:08.976 11:29:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:08.976 11:29:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.976 11:29:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.976 11:29:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:08.976 11:29:38 -- bdev/nbd_common.sh@41 -- # break 00:06:08.976 11:29:38 -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.976 11:29:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.976 11:29:38 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:09.233 11:29:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:09.233 11:29:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:09.233 11:29:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:09.233 11:29:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.233 11:29:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.233 11:29:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:09.233 11:29:38 -- bdev/nbd_common.sh@41 -- # break 00:06:09.233 11:29:38 -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.233 11:29:38 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.233 11:29:38 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.233 11:29:38 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.490 11:29:38 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:09.490 11:29:38 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:09.490 11:29:38 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:09.490 11:29:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:09.490 11:29:38 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:09.490 11:29:38 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.490 11:29:38 -- bdev/nbd_common.sh@65 -- # true 00:06:09.490 11:29:38 -- bdev/nbd_common.sh@65 -- # count=0 00:06:09.490 11:29:38 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:09.490 11:29:38 -- bdev/nbd_common.sh@104 -- # count=0 00:06:09.490 11:29:38 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:09.490 11:29:38 -- bdev/nbd_common.sh@109 -- # return 0 00:06:09.490 11:29:38 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:09.747 11:29:38 -- event/event.sh@35 -- # sleep 3 00:06:09.747 [2024-07-21 11:29:39.102956] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.747 [2024-07-21 11:29:39.136024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.747 [2024-07-21 11:29:39.136026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.005 [2024-07-21 11:29:39.176964] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:10.005 [2024-07-21 11:29:39.177002] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:12.529 11:29:41 -- event/event.sh@38 -- # waitforlisten 2176116 /var/tmp/spdk-nbd.sock 00:06:12.529 11:29:41 -- common/autotest_common.sh@819 -- # '[' -z 2176116 ']' 00:06:12.529 11:29:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:12.529 11:29:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:12.529 11:29:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:12.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:12.529 11:29:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:12.529 11:29:41 -- common/autotest_common.sh@10 -- # set +x 00:06:12.786 11:29:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:12.786 11:29:42 -- common/autotest_common.sh@852 -- # return 0 00:06:12.786 11:29:42 -- event/event.sh@39 -- # killprocess 2176116 00:06:12.786 11:29:42 -- common/autotest_common.sh@926 -- # '[' -z 2176116 ']' 00:06:12.786 11:29:42 -- common/autotest_common.sh@930 -- # kill -0 2176116 00:06:12.786 11:29:42 -- common/autotest_common.sh@931 -- # uname 00:06:12.786 11:29:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:12.786 11:29:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2176116 00:06:12.786 11:29:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:12.786 11:29:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:12.786 11:29:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2176116' 00:06:12.786 killing process with pid 2176116 00:06:12.786 11:29:42 -- common/autotest_common.sh@945 -- # kill 2176116 00:06:12.786 11:29:42 -- common/autotest_common.sh@950 -- # wait 2176116 00:06:13.044 spdk_app_start is called in Round 0. 00:06:13.044 Shutdown signal received, stop current app iteration 00:06:13.044 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:06:13.044 spdk_app_start is called in Round 1. 
00:06:13.044 Shutdown signal received, stop current app iteration 00:06:13.044 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:06:13.044 spdk_app_start is called in Round 2. 00:06:13.044 Shutdown signal received, stop current app iteration 00:06:13.044 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:06:13.044 spdk_app_start is called in Round 3. 00:06:13.044 Shutdown signal received, stop current app iteration 00:06:13.044 11:29:42 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:13.044 11:29:42 -- event/event.sh@42 -- # return 0 00:06:13.044 00:06:13.044 real 0m16.089s 00:06:13.044 user 0m34.231s 00:06:13.044 sys 0m2.989s 00:06:13.044 11:29:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.044 11:29:42 -- common/autotest_common.sh@10 -- # set +x 00:06:13.044 ************************************ 00:06:13.044 END TEST app_repeat 00:06:13.044 ************************************ 00:06:13.044 11:29:42 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:13.044 11:29:42 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:13.044 11:29:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:13.044 11:29:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:13.044 11:29:42 -- common/autotest_common.sh@10 -- # set +x 00:06:13.044 ************************************ 00:06:13.044 START TEST cpu_locks 00:06:13.044 ************************************ 00:06:13.044 11:29:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:13.044 * Looking for test storage... 00:06:13.044 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:13.044 11:29:42 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:13.044 11:29:42 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:13.044 11:29:42 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:13.044 11:29:42 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:13.044 11:29:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:13.044 11:29:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:13.044 11:29:42 -- common/autotest_common.sh@10 -- # set +x 00:06:13.044 ************************************ 00:06:13.044 START TEST default_locks 00:06:13.044 ************************************ 00:06:13.044 11:29:42 -- common/autotest_common.sh@1104 -- # default_locks 00:06:13.044 11:29:42 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2179689 00:06:13.044 11:29:42 -- event/cpu_locks.sh@47 -- # waitforlisten 2179689 00:06:13.044 11:29:42 -- common/autotest_common.sh@819 -- # '[' -z 2179689 ']' 00:06:13.044 11:29:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.044 11:29:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:13.044 11:29:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
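Each of these tests blocks on the 'Waiting for process...' line via waitforlisten from test/common/autotest_common.sh. A simplified rendition of the idea (the real helper retries more carefully and handles more corner cases; treat this as a hedged sketch, not the actual implementation):

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        local rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1   # target died during startup
            # the socket is considered ready once any RPC gets an answer
            if "$rpc" -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.5
        done
        return 1                                      # timed out
    }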
00:06:13.044 11:29:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:13.044 11:29:42 -- common/autotest_common.sh@10 -- # set +x 00:06:13.044 11:29:42 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:13.301 [2024-07-21 11:29:42.501684] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:13.301 [2024-07-21 11:29:42.501744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2179689 ] 00:06:13.301 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.301 [2024-07-21 11:29:42.589317] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.301 [2024-07-21 11:29:42.626835] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:13.301 [2024-07-21 11:29:42.626952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.865 11:29:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:13.865 11:29:43 -- common/autotest_common.sh@852 -- # return 0 00:06:13.865 11:29:43 -- event/cpu_locks.sh@49 -- # locks_exist 2179689 00:06:13.865 11:29:43 -- event/cpu_locks.sh@22 -- # lslocks -p 2179689 00:06:13.865 11:29:43 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:14.430 lslocks: write error 00:06:14.430 11:29:43 -- event/cpu_locks.sh@50 -- # killprocess 2179689 00:06:14.430 11:29:43 -- common/autotest_common.sh@926 -- # '[' -z 2179689 ']' 00:06:14.430 11:29:43 -- common/autotest_common.sh@930 -- # kill -0 2179689 00:06:14.430 11:29:43 -- common/autotest_common.sh@931 -- # uname 00:06:14.430 11:29:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:14.430 11:29:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2179689 00:06:14.430 11:29:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:14.430 11:29:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:14.430 11:29:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2179689' 00:06:14.430 killing process with pid 2179689 00:06:14.430 11:29:43 -- common/autotest_common.sh@945 -- # kill 2179689 00:06:14.430 11:29:43 -- common/autotest_common.sh@950 -- # wait 2179689 00:06:14.687 11:29:44 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2179689 00:06:14.687 11:29:44 -- common/autotest_common.sh@640 -- # local es=0 00:06:14.687 11:29:44 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 2179689 00:06:14.687 11:29:44 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:14.687 11:29:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:14.687 11:29:44 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:14.687 11:29:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:14.687 11:29:44 -- common/autotest_common.sh@643 -- # waitforlisten 2179689 00:06:14.687 11:29:44 -- common/autotest_common.sh@819 -- # '[' -z 2179689 ']' 00:06:14.687 11:29:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.687 11:29:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:14.687 11:29:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:14.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.687 11:29:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:14.687 11:29:44 -- common/autotest_common.sh@10 -- # set +x 00:06:14.687 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (2179689) - No such process 00:06:14.687 ERROR: process (pid: 2179689) is no longer running 00:06:14.687 11:29:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:14.687 11:29:44 -- common/autotest_common.sh@852 -- # return 1 00:06:14.687 11:29:44 -- common/autotest_common.sh@643 -- # es=1 00:06:14.687 11:29:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:14.687 11:29:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:14.687 11:29:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:14.687 11:29:44 -- event/cpu_locks.sh@54 -- # no_locks 00:06:14.687 11:29:44 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:14.687 11:29:44 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:14.687 11:29:44 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:14.687 00:06:14.687 real 0m1.639s 00:06:14.687 user 0m1.648s 00:06:14.687 sys 0m0.633s 00:06:14.687 11:29:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.687 11:29:44 -- common/autotest_common.sh@10 -- # set +x 00:06:14.687 ************************************ 00:06:14.687 END TEST default_locks 00:06:14.687 ************************************ 00:06:14.945 11:29:44 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:14.945 11:29:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:14.945 11:29:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:14.945 11:29:44 -- common/autotest_common.sh@10 -- # set +x 00:06:14.945 ************************************ 00:06:14.945 START TEST default_locks_via_rpc 00:06:14.945 ************************************ 00:06:14.945 11:29:44 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:06:14.945 11:29:44 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2180050 00:06:14.945 11:29:44 -- event/cpu_locks.sh@63 -- # waitforlisten 2180050 00:06:14.945 11:29:44 -- common/autotest_common.sh@819 -- # '[' -z 2180050 ']' 00:06:14.945 11:29:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.945 11:29:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:14.945 11:29:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.945 11:29:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:14.945 11:29:44 -- common/autotest_common.sh@10 -- # set +x 00:06:14.945 11:29:44 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:14.945 [2024-07-21 11:29:44.183473] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
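default_locks_via_rpc repeats the same lslocks probe as default_locks above, but toggles the locks at runtime through the framework_disable_cpumask_locks and framework_enable_cpumask_locks RPCs seen in the surrounding lines. A hedged reduction of the check (the earlier 'lslocks: write error' is benign: grep -q exits on the first match and closes the pipe):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    locks_exist() {    # true if the target still holds its per-core spdk_cpu_lock files
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    pid=2180050                              # pid of the spdk_tgt started above
    "$rpc" framework_disable_cpumask_locks   # drop the locks at runtime
    locks_exist "$pid" && echo "unexpected: locks still held"
    "$rpc" framework_enable_cpumask_locks    # reacquire them
    locks_exist "$pid" || echo "unexpected: locks missing"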
00:06:14.945 [2024-07-21 11:29:44.183528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2180050 ] 00:06:14.945 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.945 [2024-07-21 11:29:44.267552] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.945 [2024-07-21 11:29:44.304908] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:14.945 [2024-07-21 11:29:44.305021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.886 11:29:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:15.886 11:29:44 -- common/autotest_common.sh@852 -- # return 0 00:06:15.886 11:29:44 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:15.886 11:29:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:15.886 11:29:44 -- common/autotest_common.sh@10 -- # set +x 00:06:15.886 11:29:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:15.886 11:29:44 -- event/cpu_locks.sh@67 -- # no_locks 00:06:15.886 11:29:44 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:15.886 11:29:44 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:15.886 11:29:44 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:15.886 11:29:44 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:15.886 11:29:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:15.886 11:29:44 -- common/autotest_common.sh@10 -- # set +x 00:06:15.886 11:29:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:15.886 11:29:44 -- event/cpu_locks.sh@71 -- # locks_exist 2180050 00:06:15.886 11:29:44 -- event/cpu_locks.sh@22 -- # lslocks -p 2180050 00:06:15.886 11:29:44 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:16.461 11:29:45 -- event/cpu_locks.sh@73 -- # killprocess 2180050 00:06:16.461 11:29:45 -- common/autotest_common.sh@926 -- # '[' -z 2180050 ']' 00:06:16.461 11:29:45 -- common/autotest_common.sh@930 -- # kill -0 2180050 00:06:16.461 11:29:45 -- common/autotest_common.sh@931 -- # uname 00:06:16.461 11:29:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:16.461 11:29:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2180050 00:06:16.461 11:29:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:16.461 11:29:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:16.461 11:29:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2180050' 00:06:16.461 killing process with pid 2180050 00:06:16.461 11:29:45 -- common/autotest_common.sh@945 -- # kill 2180050 00:06:16.461 11:29:45 -- common/autotest_common.sh@950 -- # wait 2180050 00:06:16.719 00:06:16.719 real 0m1.786s 00:06:16.719 user 0m1.848s 00:06:16.719 sys 0m0.637s 00:06:16.719 11:29:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.719 11:29:45 -- common/autotest_common.sh@10 -- # set +x 00:06:16.719 ************************************ 00:06:16.719 END TEST default_locks_via_rpc 00:06:16.719 ************************************ 00:06:16.719 11:29:45 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:16.719 11:29:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:16.719 11:29:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:16.719 11:29:45 -- 
common/autotest_common.sh@10 -- # set +x 00:06:16.719 ************************************ 00:06:16.719 START TEST non_locking_app_on_locked_coremask 00:06:16.719 ************************************ 00:06:16.719 11:29:45 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:06:16.719 11:29:45 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2180486 00:06:16.719 11:29:45 -- event/cpu_locks.sh@81 -- # waitforlisten 2180486 /var/tmp/spdk.sock 00:06:16.719 11:29:45 -- common/autotest_common.sh@819 -- # '[' -z 2180486 ']' 00:06:16.719 11:29:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.719 11:29:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:16.719 11:29:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.719 11:29:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:16.719 11:29:45 -- common/autotest_common.sh@10 -- # set +x 00:06:16.719 11:29:45 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:16.719 [2024-07-21 11:29:46.013082] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:16.719 [2024-07-21 11:29:46.013137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2180486 ] 00:06:16.719 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.719 [2024-07-21 11:29:46.098085] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.719 [2024-07-21 11:29:46.136205] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:16.719 [2024-07-21 11:29:46.136323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.651 11:29:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:17.651 11:29:46 -- common/autotest_common.sh@852 -- # return 0 00:06:17.651 11:29:46 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2180559 00:06:17.651 11:29:46 -- event/cpu_locks.sh@85 -- # waitforlisten 2180559 /var/tmp/spdk2.sock 00:06:17.651 11:29:46 -- common/autotest_common.sh@819 -- # '[' -z 2180559 ']' 00:06:17.651 11:29:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.651 11:29:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:17.651 11:29:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:17.651 11:29:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:17.651 11:29:46 -- common/autotest_common.sh@10 -- # set +x 00:06:17.651 11:29:46 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:17.651 [2024-07-21 11:29:46.851088] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:17.651 [2024-07-21 11:29:46.851140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2180559 ] 00:06:17.651 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.651 [2024-07-21 11:29:46.970812] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:17.651 [2024-07-21 11:29:46.970835] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.651 [2024-07-21 11:29:47.042894] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:17.651 [2024-07-21 11:29:47.043008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.214 11:29:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:18.214 11:29:47 -- common/autotest_common.sh@852 -- # return 0 00:06:18.214 11:29:47 -- event/cpu_locks.sh@87 -- # locks_exist 2180486 00:06:18.214 11:29:47 -- event/cpu_locks.sh@22 -- # lslocks -p 2180486 00:06:18.214 11:29:47 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:19.146 lslocks: write error 00:06:19.146 11:29:48 -- event/cpu_locks.sh@89 -- # killprocess 2180486 00:06:19.146 11:29:48 -- common/autotest_common.sh@926 -- # '[' -z 2180486 ']' 00:06:19.146 11:29:48 -- common/autotest_common.sh@930 -- # kill -0 2180486 00:06:19.146 11:29:48 -- common/autotest_common.sh@931 -- # uname 00:06:19.146 11:29:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:19.146 11:29:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2180486 00:06:19.404 11:29:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:19.404 11:29:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:19.404 11:29:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2180486' 00:06:19.404 killing process with pid 2180486 00:06:19.404 11:29:48 -- common/autotest_common.sh@945 -- # kill 2180486 00:06:19.404 11:29:48 -- common/autotest_common.sh@950 -- # wait 2180486 00:06:19.969 11:29:49 -- event/cpu_locks.sh@90 -- # killprocess 2180559 00:06:19.969 11:29:49 -- common/autotest_common.sh@926 -- # '[' -z 2180559 ']' 00:06:19.969 11:29:49 -- common/autotest_common.sh@930 -- # kill -0 2180559 00:06:19.969 11:29:49 -- common/autotest_common.sh@931 -- # uname 00:06:19.969 11:29:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:19.969 11:29:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2180559 00:06:19.969 11:29:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:19.969 11:29:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:19.969 11:29:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2180559' 00:06:19.969 killing process with pid 2180559 00:06:19.969 11:29:49 -- common/autotest_common.sh@945 -- # kill 2180559 00:06:19.969 11:29:49 -- common/autotest_common.sh@950 -- # wait 2180559 00:06:20.227 00:06:20.227 real 0m3.550s 00:06:20.227 user 0m3.761s 00:06:20.227 sys 0m1.151s 00:06:20.227 11:29:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.227 11:29:49 -- common/autotest_common.sh@10 -- # set +x 00:06:20.227 ************************************ 00:06:20.227 END TEST non_locking_app_on_locked_coremask 00:06:20.227 ************************************ 00:06:20.227 11:29:49 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask 
locking_app_on_unlocked_coremask 00:06:20.227 11:29:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:20.227 11:29:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:20.227 11:29:49 -- common/autotest_common.sh@10 -- # set +x 00:06:20.227 ************************************ 00:06:20.227 START TEST locking_app_on_unlocked_coremask 00:06:20.227 ************************************ 00:06:20.227 11:29:49 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:06:20.227 11:29:49 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2181125 00:06:20.227 11:29:49 -- event/cpu_locks.sh@99 -- # waitforlisten 2181125 /var/tmp/spdk.sock 00:06:20.227 11:29:49 -- common/autotest_common.sh@819 -- # '[' -z 2181125 ']' 00:06:20.227 11:29:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.227 11:29:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:20.227 11:29:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.227 11:29:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:20.227 11:29:49 -- common/autotest_common.sh@10 -- # set +x 00:06:20.227 11:29:49 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:20.227 [2024-07-21 11:29:49.609916] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:20.227 [2024-07-21 11:29:49.609972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2181125 ] 00:06:20.517 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.517 [2024-07-21 11:29:49.695233] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:20.517 [2024-07-21 11:29:49.695258] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.517 [2024-07-21 11:29:49.732728] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:20.517 [2024-07-21 11:29:49.732847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.085 11:29:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:21.085 11:29:50 -- common/autotest_common.sh@852 -- # return 0 00:06:21.085 11:29:50 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2181235 00:06:21.085 11:29:50 -- event/cpu_locks.sh@103 -- # waitforlisten 2181235 /var/tmp/spdk2.sock 00:06:21.085 11:29:50 -- common/autotest_common.sh@819 -- # '[' -z 2181235 ']' 00:06:21.085 11:29:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.085 11:29:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:21.085 11:29:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
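The two targets traced here exercise SPDK's per-core lock files. The first instance starts with --disable-cpumask-locks and claims nothing, so the second instance on the same core mask can still take /var/tmp/spdk_cpu_lock_000; the locks_exist check then passes only against the second pid. A minimal shell sketch of that arrangement, assuming a built spdk_tgt under build/bin (the masks and socket paths mirror this run):

  # tgt1 opts out of core locking; the lock file for core 0 stays free
  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
  # tgt2 uses the same mask but a second RPC socket, and takes the lock
  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
  wait

Only the second process then shows an spdk_cpu_lock entry in lslocks, which is exactly what the locks_exist helper greps for.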
00:06:21.085 11:29:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:21.085 11:29:50 -- common/autotest_common.sh@10 -- # set +x 00:06:21.085 11:29:50 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:21.085 [2024-07-21 11:29:50.441005] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:21.085 [2024-07-21 11:29:50.441063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2181235 ] 00:06:21.085 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.341 [2024-07-21 11:29:50.561970] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.341 [2024-07-21 11:29:50.634482] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:21.341 [2024-07-21 11:29:50.634592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.905 11:29:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:21.905 11:29:51 -- common/autotest_common.sh@852 -- # return 0 00:06:21.905 11:29:51 -- event/cpu_locks.sh@105 -- # locks_exist 2181235 00:06:21.905 11:29:51 -- event/cpu_locks.sh@22 -- # lslocks -p 2181235 00:06:21.905 11:29:51 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:22.837 lslocks: write error 00:06:22.837 11:29:52 -- event/cpu_locks.sh@107 -- # killprocess 2181125 00:06:22.837 11:29:52 -- common/autotest_common.sh@926 -- # '[' -z 2181125 ']' 00:06:22.837 11:29:52 -- common/autotest_common.sh@930 -- # kill -0 2181125 00:06:22.837 11:29:52 -- common/autotest_common.sh@931 -- # uname 00:06:22.837 11:29:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:22.837 11:29:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2181125 00:06:22.837 11:29:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:22.837 11:29:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:22.837 11:29:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2181125' 00:06:22.837 killing process with pid 2181125 00:06:22.837 11:29:52 -- common/autotest_common.sh@945 -- # kill 2181125 00:06:22.837 11:29:52 -- common/autotest_common.sh@950 -- # wait 2181125 00:06:23.401 11:29:52 -- event/cpu_locks.sh@108 -- # killprocess 2181235 00:06:23.401 11:29:52 -- common/autotest_common.sh@926 -- # '[' -z 2181235 ']' 00:06:23.401 11:29:52 -- common/autotest_common.sh@930 -- # kill -0 2181235 00:06:23.401 11:29:52 -- common/autotest_common.sh@931 -- # uname 00:06:23.401 11:29:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:23.401 11:29:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2181235 00:06:23.401 11:29:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:23.401 11:29:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:23.401 11:29:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2181235' 00:06:23.401 killing process with pid 2181235 00:06:23.401 11:29:52 -- common/autotest_common.sh@945 -- # kill 2181235 00:06:23.401 11:29:52 -- common/autotest_common.sh@950 -- # wait 2181235 00:06:23.658 00:06:23.658 real 0m3.446s 00:06:23.658 user 0m3.636s 00:06:23.658 sys 0m1.167s 00:06:23.658 11:29:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.658 11:29:53 -- 
common/autotest_common.sh@10 -- # set +x 00:06:23.658 ************************************ 00:06:23.658 END TEST locking_app_on_unlocked_coremask 00:06:23.658 ************************************ 00:06:23.658 11:29:53 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:23.658 11:29:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:23.658 11:29:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:23.658 11:29:53 -- common/autotest_common.sh@10 -- # set +x 00:06:23.658 ************************************ 00:06:23.658 START TEST locking_app_on_locked_coremask 00:06:23.658 ************************************ 00:06:23.658 11:29:53 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:06:23.658 11:29:53 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2181717 00:06:23.658 11:29:53 -- event/cpu_locks.sh@116 -- # waitforlisten 2181717 /var/tmp/spdk.sock 00:06:23.658 11:29:53 -- common/autotest_common.sh@819 -- # '[' -z 2181717 ']' 00:06:23.658 11:29:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.658 11:29:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:23.658 11:29:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.658 11:29:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:23.658 11:29:53 -- common/autotest_common.sh@10 -- # set +x 00:06:23.658 11:29:53 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:23.915 [2024-07-21 11:29:53.098953] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:23.915 [2024-07-21 11:29:53.099007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2181717 ] 00:06:23.915 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.915 [2024-07-21 11:29:53.183491] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.915 [2024-07-21 11:29:53.220672] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:23.915 [2024-07-21 11:29:53.220784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.479 11:29:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:24.479 11:29:53 -- common/autotest_common.sh@852 -- # return 0 00:06:24.479 11:29:53 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2181979 00:06:24.479 11:29:53 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2181979 /var/tmp/spdk2.sock 00:06:24.479 11:29:53 -- common/autotest_common.sh@640 -- # local es=0 00:06:24.479 11:29:53 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 2181979 /var/tmp/spdk2.sock 00:06:24.479 11:29:53 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:24.479 11:29:53 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:24.479 11:29:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:24.479 11:29:53 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:24.479 11:29:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:24.479 11:29:53 -- common/autotest_common.sh@643 -- # waitforlisten 2181979 /var/tmp/spdk2.sock 00:06:24.479 11:29:53 -- common/autotest_common.sh@819 -- # '[' -z 2181979 ']' 00:06:24.479 11:29:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.479 11:29:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:24.480 11:29:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.480 11:29:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:24.480 11:29:53 -- common/autotest_common.sh@10 -- # set +x 00:06:24.736 [2024-07-21 11:29:53.928415] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:24.736 [2024-07-21 11:29:53.928469] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2181979 ] 00:06:24.736 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.736 [2024-07-21 11:29:54.047713] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2181717 has claimed it. 00:06:24.737 [2024-07-21 11:29:54.047751] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
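The failure above is the expected path for this test: core 0 is already claimed by pid 2181717 through a lock on /var/tmp/spdk_cpu_lock_000, so the second target exits instead of sharing the core. The claim can be inspected from a shell with the same lslocks call the locks_exist helper runs (pid and paths copied from this run; a sketch, not harness code):

  lslocks -p 2181717 | grep spdk_cpu_lock   # the held core-lock entry
  ls /var/tmp/spdk_cpu_lock_*               # one file per claimed core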
00:06:25.301 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (2181979) - No such process 00:06:25.301 ERROR: process (pid: 2181979) is no longer running 00:06:25.301 11:29:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:25.301 11:29:54 -- common/autotest_common.sh@852 -- # return 1 00:06:25.301 11:29:54 -- common/autotest_common.sh@643 -- # es=1 00:06:25.301 11:29:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:25.301 11:29:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:25.301 11:29:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:25.301 11:29:54 -- event/cpu_locks.sh@122 -- # locks_exist 2181717 00:06:25.301 11:29:54 -- event/cpu_locks.sh@22 -- # lslocks -p 2181717 00:06:25.301 11:29:54 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.865 lslocks: write error 00:06:25.865 11:29:55 -- event/cpu_locks.sh@124 -- # killprocess 2181717 00:06:25.865 11:29:55 -- common/autotest_common.sh@926 -- # '[' -z 2181717 ']' 00:06:25.865 11:29:55 -- common/autotest_common.sh@930 -- # kill -0 2181717 00:06:25.865 11:29:55 -- common/autotest_common.sh@931 -- # uname 00:06:25.865 11:29:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:25.865 11:29:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2181717 00:06:25.865 11:29:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:25.865 11:29:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:25.865 11:29:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2181717' 00:06:25.865 killing process with pid 2181717 00:06:25.865 11:29:55 -- common/autotest_common.sh@945 -- # kill 2181717 00:06:25.865 11:29:55 -- common/autotest_common.sh@950 -- # wait 2181717 00:06:26.122 00:06:26.122 real 0m2.370s 00:06:26.122 user 0m2.578s 00:06:26.122 sys 0m0.708s 00:06:26.122 11:29:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.122 11:29:55 -- common/autotest_common.sh@10 -- # set +x 00:06:26.122 ************************************ 00:06:26.122 END TEST locking_app_on_locked_coremask 00:06:26.122 ************************************ 00:06:26.122 11:29:55 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:26.122 11:29:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:26.122 11:29:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:26.122 11:29:55 -- common/autotest_common.sh@10 -- # set +x 00:06:26.122 ************************************ 00:06:26.122 START TEST locking_overlapped_coremask 00:06:26.122 ************************************ 00:06:26.122 11:29:55 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:06:26.122 11:29:55 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2182279 00:06:26.122 11:29:55 -- event/cpu_locks.sh@133 -- # waitforlisten 2182279 /var/tmp/spdk.sock 00:06:26.122 11:29:55 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:26.122 11:29:55 -- common/autotest_common.sh@819 -- # '[' -z 2182279 ']' 00:06:26.122 11:29:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.122 11:29:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:26.122 11:29:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:26.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.123 11:29:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:26.123 11:29:55 -- common/autotest_common.sh@10 -- # set +x 00:06:26.123 [2024-07-21 11:29:55.523429] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:26.123 [2024-07-21 11:29:55.523486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2182279 ] 00:06:26.380 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.380 [2024-07-21 11:29:55.607382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:26.380 [2024-07-21 11:29:55.645996] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:26.380 [2024-07-21 11:29:55.646139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.380 [2024-07-21 11:29:55.646213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.380 [2024-07-21 11:29:55.646214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.946 11:29:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:26.946 11:29:56 -- common/autotest_common.sh@852 -- # return 0 00:06:26.946 11:29:56 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2182296 00:06:26.946 11:29:56 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2182296 /var/tmp/spdk2.sock 00:06:26.946 11:29:56 -- common/autotest_common.sh@640 -- # local es=0 00:06:26.946 11:29:56 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 2182296 /var/tmp/spdk2.sock 00:06:26.946 11:29:56 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:26.946 11:29:56 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:26.946 11:29:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:26.946 11:29:56 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:26.946 11:29:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:26.946 11:29:56 -- common/autotest_common.sh@643 -- # waitforlisten 2182296 /var/tmp/spdk2.sock 00:06:26.946 11:29:56 -- common/autotest_common.sh@819 -- # '[' -z 2182296 ']' 00:06:26.946 11:29:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.946 11:29:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:26.946 11:29:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:26.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:26.946 11:29:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:26.946 11:29:56 -- common/autotest_common.sh@10 -- # set +x 00:06:26.946 [2024-07-21 11:29:56.365123] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:26.946 [2024-07-21 11:29:56.365181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2182296 ] 00:06:27.203 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.203 [2024-07-21 11:29:56.491109] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2182279 has claimed it. 00:06:27.203 [2024-07-21 11:29:56.491149] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:27.767 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (2182296) - No such process 00:06:27.767 ERROR: process (pid: 2182296) is no longer running 00:06:27.767 11:29:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:27.767 11:29:56 -- common/autotest_common.sh@852 -- # return 1 00:06:27.767 11:29:56 -- common/autotest_common.sh@643 -- # es=1 00:06:27.767 11:29:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:27.767 11:29:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:27.767 11:29:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:27.767 11:29:56 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:27.767 11:29:56 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:27.767 11:29:56 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:27.767 11:29:56 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:27.767 11:29:56 -- event/cpu_locks.sh@141 -- # killprocess 2182279 00:06:27.767 11:29:56 -- common/autotest_common.sh@926 -- # '[' -z 2182279 ']' 00:06:27.767 11:29:56 -- common/autotest_common.sh@930 -- # kill -0 2182279 00:06:27.767 11:29:56 -- common/autotest_common.sh@931 -- # uname 00:06:27.767 11:29:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:27.767 11:29:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2182279 00:06:27.767 11:29:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:27.767 11:29:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:27.767 11:29:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2182279' 00:06:27.767 killing process with pid 2182279 00:06:27.767 11:29:57 -- common/autotest_common.sh@945 -- # kill 2182279 00:06:27.767 11:29:57 -- common/autotest_common.sh@950 -- # wait 2182279 00:06:28.024 00:06:28.024 real 0m1.865s 00:06:28.024 user 0m5.231s 00:06:28.024 sys 0m0.492s 00:06:28.024 11:29:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.024 11:29:57 -- common/autotest_common.sh@10 -- # set +x 00:06:28.024 ************************************ 00:06:28.024 END TEST locking_overlapped_coremask 00:06:28.024 ************************************ 00:06:28.024 11:29:57 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:28.024 11:29:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:28.024 11:29:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:28.024 11:29:57 -- common/autotest_common.sh@10 -- # set +x 00:06:28.024 ************************************ 00:06:28.024 START 
TEST locking_overlapped_coremask_via_rpc 00:06:28.024 ************************************ 00:06:28.024 11:29:57 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:06:28.024 11:29:57 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2182584 00:06:28.024 11:29:57 -- event/cpu_locks.sh@149 -- # waitforlisten 2182584 /var/tmp/spdk.sock 00:06:28.024 11:29:57 -- common/autotest_common.sh@819 -- # '[' -z 2182584 ']' 00:06:28.024 11:29:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.024 11:29:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:28.024 11:29:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.024 11:29:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:28.024 11:29:57 -- common/autotest_common.sh@10 -- # set +x 00:06:28.024 11:29:57 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:28.024 [2024-07-21 11:29:57.431378] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:28.024 [2024-07-21 11:29:57.431432] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2182584 ] 00:06:28.281 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.281 [2024-07-21 11:29:57.513927] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:28.281 [2024-07-21 11:29:57.513951] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:28.281 [2024-07-21 11:29:57.553242] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:28.281 [2024-07-21 11:29:57.553378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.281 [2024-07-21 11:29:57.553457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.281 [2024-07-21 11:29:57.553457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.844 11:29:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:28.844 11:29:58 -- common/autotest_common.sh@852 -- # return 0 00:06:28.844 11:29:58 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2182763 00:06:28.844 11:29:58 -- event/cpu_locks.sh@153 -- # waitforlisten 2182763 /var/tmp/spdk2.sock 00:06:28.844 11:29:58 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:28.844 11:29:58 -- common/autotest_common.sh@819 -- # '[' -z 2182763 ']' 00:06:28.844 11:29:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:28.844 11:29:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:28.844 11:29:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:28.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:28.844 11:29:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:28.844 11:29:58 -- common/autotest_common.sh@10 -- # set +x 00:06:29.102 [2024-07-21 11:29:58.276701] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:29.102 [2024-07-21 11:29:58.276756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2182763 ] 00:06:29.102 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.102 [2024-07-21 11:29:58.397318] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:29.102 [2024-07-21 11:29:58.397340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:29.102 [2024-07-21 11:29:58.470685] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:29.102 [2024-07-21 11:29:58.470846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:29.102 [2024-07-21 11:29:58.470947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.102 [2024-07-21 11:29:58.470947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:29.668 11:29:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:29.668 11:29:59 -- common/autotest_common.sh@852 -- # return 0 00:06:29.668 11:29:59 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:29.668 11:29:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:29.668 11:29:59 -- common/autotest_common.sh@10 -- # set +x 00:06:29.668 11:29:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:29.668 11:29:59 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:29.668 11:29:59 -- common/autotest_common.sh@640 -- # local es=0 00:06:29.668 11:29:59 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:29.668 11:29:59 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:06:29.668 11:29:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:29.668 11:29:59 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:06:29.668 11:29:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:29.668 11:29:59 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:29.668 11:29:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:29.668 11:29:59 -- common/autotest_common.sh@10 -- # set +x 00:06:29.668 [2024-07-21 11:29:59.084695] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2182584 has claimed it. 
00:06:29.926 request: 00:06:29.926 { 00:06:29.926 "method": "framework_enable_cpumask_locks", 00:06:29.926 "req_id": 1 00:06:29.926 } 00:06:29.926 Got JSON-RPC error response 00:06:29.926 response: 00:06:29.926 { 00:06:29.926 "code": -32603, 00:06:29.926 "message": "Failed to claim CPU core: 2" 00:06:29.926 } 00:06:29.926 11:29:59 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:06:29.926 11:29:59 -- common/autotest_common.sh@643 -- # es=1 00:06:29.926 11:29:59 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:29.926 11:29:59 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:29.926 11:29:59 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:29.926 11:29:59 -- event/cpu_locks.sh@158 -- # waitforlisten 2182584 /var/tmp/spdk.sock 00:06:29.926 11:29:59 -- common/autotest_common.sh@819 -- # '[' -z 2182584 ']' 00:06:29.926 11:29:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.926 11:29:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:29.926 11:29:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.926 11:29:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:29.926 11:29:59 -- common/autotest_common.sh@10 -- # set +x 00:06:29.926 11:29:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:29.926 11:29:59 -- common/autotest_common.sh@852 -- # return 0 00:06:29.926 11:29:59 -- event/cpu_locks.sh@159 -- # waitforlisten 2182763 /var/tmp/spdk2.sock 00:06:29.926 11:29:59 -- common/autotest_common.sh@819 -- # '[' -z 2182763 ']' 00:06:29.926 11:29:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.926 11:29:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:29.926 11:29:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
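The request/response pair above is a plain JSON-RPC exchange over the targets' Unix sockets. A hedged sketch of reproducing it with SPDK's scripts/rpc.py, using the core masks and socket paths from this run:

  # first target (-m 0x7) claims cores 0-2
  scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
  # second target (-m 0x1c) overlaps on core 2 and gets the error
  # shown above: code -32603, "Failed to claim CPU core: 2"
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks

The test treats that error as success: the NOT wrapper inverts the return code, confirming that the lock on core 2 really does block a second claimant.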
00:06:29.926 11:29:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:29.926 11:29:59 -- common/autotest_common.sh@10 -- # set +x 00:06:30.184 11:29:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:30.184 11:29:59 -- common/autotest_common.sh@852 -- # return 0 00:06:30.184 11:29:59 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:30.184 11:29:59 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:30.184 11:29:59 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:30.184 11:29:59 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:30.184 00:06:30.184 real 0m2.061s 00:06:30.184 user 0m0.814s 00:06:30.184 sys 0m0.178s 00:06:30.184 11:29:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.184 11:29:59 -- common/autotest_common.sh@10 -- # set +x 00:06:30.184 ************************************ 00:06:30.184 END TEST locking_overlapped_coremask_via_rpc 00:06:30.184 ************************************ 00:06:30.184 11:29:59 -- event/cpu_locks.sh@174 -- # cleanup 00:06:30.184 11:29:59 -- event/cpu_locks.sh@15 -- # [[ -z 2182584 ]] 00:06:30.184 11:29:59 -- event/cpu_locks.sh@15 -- # killprocess 2182584 00:06:30.184 11:29:59 -- common/autotest_common.sh@926 -- # '[' -z 2182584 ']' 00:06:30.184 11:29:59 -- common/autotest_common.sh@930 -- # kill -0 2182584 00:06:30.184 11:29:59 -- common/autotest_common.sh@931 -- # uname 00:06:30.184 11:29:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:30.184 11:29:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2182584 00:06:30.185 11:29:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:30.185 11:29:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:30.185 11:29:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2182584' 00:06:30.185 killing process with pid 2182584 00:06:30.185 11:29:59 -- common/autotest_common.sh@945 -- # kill 2182584 00:06:30.185 11:29:59 -- common/autotest_common.sh@950 -- # wait 2182584 00:06:30.443 11:29:59 -- event/cpu_locks.sh@16 -- # [[ -z 2182763 ]] 00:06:30.443 11:29:59 -- event/cpu_locks.sh@16 -- # killprocess 2182763 00:06:30.443 11:29:59 -- common/autotest_common.sh@926 -- # '[' -z 2182763 ']' 00:06:30.443 11:29:59 -- common/autotest_common.sh@930 -- # kill -0 2182763 00:06:30.443 11:29:59 -- common/autotest_common.sh@931 -- # uname 00:06:30.443 11:29:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:30.443 11:29:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2182763 00:06:30.700 11:29:59 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:30.700 11:29:59 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:30.700 11:29:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2182763' 00:06:30.700 killing process with pid 2182763 00:06:30.700 11:29:59 -- common/autotest_common.sh@945 -- # kill 2182763 00:06:30.700 11:29:59 -- common/autotest_common.sh@950 -- # wait 2182763 00:06:30.958 11:30:00 -- event/cpu_locks.sh@18 -- # rm -f 00:06:30.958 11:30:00 -- event/cpu_locks.sh@1 -- # cleanup 00:06:30.958 11:30:00 -- event/cpu_locks.sh@15 -- # [[ -z 2182584 ]] 00:06:30.958 11:30:00 -- event/cpu_locks.sh@15 -- # killprocess 2182584 
00:06:30.958 11:30:00 -- common/autotest_common.sh@926 -- # '[' -z 2182584 ']' 00:06:30.958 11:30:00 -- common/autotest_common.sh@930 -- # kill -0 2182584 00:06:30.958 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2182584) - No such process 00:06:30.958 11:30:00 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2182584 is not found' 00:06:30.958 Process with pid 2182584 is not found 00:06:30.958 11:30:00 -- event/cpu_locks.sh@16 -- # [[ -z 2182763 ]] 00:06:30.958 11:30:00 -- event/cpu_locks.sh@16 -- # killprocess 2182763 00:06:30.958 11:30:00 -- common/autotest_common.sh@926 -- # '[' -z 2182763 ']' 00:06:30.958 11:30:00 -- common/autotest_common.sh@930 -- # kill -0 2182763 00:06:30.958 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2182763) - No such process 00:06:30.958 11:30:00 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2182763 is not found' 00:06:30.958 Process with pid 2182763 is not found 00:06:30.958 11:30:00 -- event/cpu_locks.sh@18 -- # rm -f 00:06:30.958 00:06:30.958 real 0m17.865s 00:06:30.958 user 0m29.891s 00:06:30.958 sys 0m5.895s 00:06:30.958 11:30:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.958 11:30:00 -- common/autotest_common.sh@10 -- # set +x 00:06:30.958 ************************************ 00:06:30.958 END TEST cpu_locks 00:06:30.958 ************************************ 00:06:30.958 00:06:30.958 real 0m43.004s 00:06:30.958 user 1m20.930s 00:06:30.958 sys 0m9.917s 00:06:30.958 11:30:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.958 11:30:00 -- common/autotest_common.sh@10 -- # set +x 00:06:30.958 ************************************ 00:06:30.958 END TEST event 00:06:30.958 ************************************ 00:06:30.958 11:30:00 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:30.958 11:30:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:30.958 11:30:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:30.958 11:30:00 -- common/autotest_common.sh@10 -- # set +x 00:06:30.958 ************************************ 00:06:30.958 START TEST thread 00:06:30.958 ************************************ 00:06:30.958 11:30:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:31.215 * Looking for test storage... 00:06:31.215 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:06:31.215 11:30:00 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:31.215 11:30:00 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:31.215 11:30:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:31.215 11:30:00 -- common/autotest_common.sh@10 -- # set +x 00:06:31.215 ************************************ 00:06:31.215 START TEST thread_poller_perf 00:06:31.216 ************************************ 00:06:31.216 11:30:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:31.216 [2024-07-21 11:30:00.417730] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:31.216 [2024-07-21 11:30:00.417822] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2183260 ] 00:06:31.216 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.216 [2024-07-21 11:30:00.506927] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.216 [2024-07-21 11:30:00.544248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.216 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:32.585 ====================================== 00:06:32.585 busy:2510746472 (cyc) 00:06:32.585 total_run_count: 390000 00:06:32.585 tsc_hz: 2500000000 (cyc) 00:06:32.585 ====================================== 00:06:32.585 poller_cost: 6437 (cyc), 2574 (nsec) 00:06:32.585 00:06:32.585 real 0m1.218s 00:06:32.585 user 0m1.115s 00:06:32.585 sys 0m0.098s 00:06:32.585 11:30:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.585 11:30:01 -- common/autotest_common.sh@10 -- # set +x 00:06:32.585 ************************************ 00:06:32.585 END TEST thread_poller_perf 00:06:32.585 ************************************ 00:06:32.585 11:30:01 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:32.585 11:30:01 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:32.585 11:30:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:32.585 11:30:01 -- common/autotest_common.sh@10 -- # set +x 00:06:32.585 ************************************ 00:06:32.585 START TEST thread_poller_perf 00:06:32.585 ************************************ 00:06:32.585 11:30:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:32.585 [2024-07-21 11:30:01.675607] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:32.586 [2024-07-21 11:30:01.675678] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2183623 ] 00:06:32.586 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.586 [2024-07-21 11:30:01.756249] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.586 [2024-07-21 11:30:01.793218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.586 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:33.532 ====================================== 00:06:33.532 busy:2502329560 (cyc) 00:06:33.532 total_run_count: 5396000 00:06:33.532 tsc_hz: 2500000000 (cyc) 00:06:33.532 ====================================== 00:06:33.532 poller_cost: 463 (cyc), 185 (nsec) 00:06:33.532 00:06:33.532 real 0m1.189s 00:06:33.532 user 0m1.094s 00:06:33.532 sys 0m0.092s 00:06:33.532 11:30:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.532 11:30:02 -- common/autotest_common.sh@10 -- # set +x 00:06:33.532 ************************************ 00:06:33.532 END TEST thread_poller_perf 00:06:33.532 ************************************ 00:06:33.532 11:30:02 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:33.532 00:06:33.532 real 0m2.585s 00:06:33.532 user 0m2.273s 00:06:33.532 sys 0m0.328s 00:06:33.532 11:30:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.532 11:30:02 -- common/autotest_common.sh@10 -- # set +x 00:06:33.532 ************************************ 00:06:33.532 END TEST thread 00:06:33.532 ************************************ 00:06:33.532 11:30:02 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:06:33.532 11:30:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:33.532 11:30:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:33.532 11:30:02 -- common/autotest_common.sh@10 -- # set +x 00:06:33.532 ************************************ 00:06:33.532 START TEST accel 00:06:33.532 ************************************ 00:06:33.532 11:30:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:06:33.796 * Looking for test storage... 00:06:33.796 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:06:33.796 11:30:03 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:33.796 11:30:03 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:33.796 11:30:03 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:33.796 11:30:03 -- accel/accel.sh@59 -- # spdk_tgt_pid=2183955 00:06:33.796 11:30:03 -- accel/accel.sh@60 -- # waitforlisten 2183955 00:06:33.796 11:30:03 -- common/autotest_common.sh@819 -- # '[' -z 2183955 ']' 00:06:33.796 11:30:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.796 11:30:03 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:33.796 11:30:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:33.796 11:30:03 -- accel/accel.sh@58 -- # build_accel_config 00:06:33.796 11:30:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.796 11:30:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:33.796 11:30:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:33.796 11:30:03 -- common/autotest_common.sh@10 -- # set +x 00:06:33.796 11:30:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.796 11:30:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.796 11:30:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:33.796 11:30:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:33.796 11:30:03 -- accel/accel.sh@41 -- # local IFS=, 00:06:33.796 11:30:03 -- accel/accel.sh@42 -- # jq -r . 
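The poller_cost figures in the two thread_poller_perf summaries above follow directly from the printed counters: cycles per poll is busy divided by total_run_count, converted to nanoseconds through tsc_hz. Re-deriving the second run in shell arithmetic (values copied from this log):

  busy=2502329560 runs=5396000 tsc_hz=2500000000
  echo $(( busy / runs ))                        # 463 cycles per poll
  echo $(( busy / runs * 1000000000 / tsc_hz ))  # 185 nsec per poll

The first run works out the same way: 2510746472 / 390000 = 6437 cycles, or 2574 nsec at the 2.5 GHz TSC rate.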
00:06:33.796 [2024-07-21 11:30:03.072408] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:33.796 [2024-07-21 11:30:03.072467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2183955 ] 00:06:33.796 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.796 [2024-07-21 11:30:03.157329] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.796 [2024-07-21 11:30:03.195383] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:33.796 [2024-07-21 11:30:03.195493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.726 11:30:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:34.726 11:30:03 -- common/autotest_common.sh@852 -- # return 0 00:06:34.726 11:30:03 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:34.726 11:30:03 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:34.726 11:30:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:34.726 11:30:03 -- common/autotest_common.sh@10 -- # set +x 00:06:34.726 11:30:03 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:34.726 11:30:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:34.726 11:30:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.726 11:30:03 -- accel/accel.sh@64 -- # IFS== 00:06:34.726 11:30:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:34.726 11:30:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:34.726 11:30:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.726 11:30:03 -- accel/accel.sh@64 -- # IFS== 00:06:34.726 11:30:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:34.726 11:30:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:34.726 11:30:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.726 11:30:03 -- accel/accel.sh@64 -- # IFS== 00:06:34.726 11:30:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:34.726 11:30:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:34.726 11:30:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.726 11:30:03 -- accel/accel.sh@64 -- # IFS== 00:06:34.726 11:30:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:34.726 11:30:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:34.726 11:30:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.726 11:30:03 -- accel/accel.sh@64 -- # IFS== 00:06:34.726 11:30:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:34.726 11:30:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:34.726 11:30:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.726 11:30:03 -- accel/accel.sh@64 -- # IFS== 00:06:34.726 11:30:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:34.726 11:30:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:34.726 11:30:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.726 11:30:03 -- accel/accel.sh@64 -- # IFS== 00:06:34.726 11:30:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:34.726 11:30:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:34.726 11:30:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 
00:06:34.726 11:30:03 -- accel/accel.sh@64 -- # IFS== 00:06:34.726 11:30:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:34.726 11:30:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:34.726 11:30:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.726 11:30:03 -- accel/accel.sh@64 -- # IFS== 00:06:34.726 11:30:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:34.726 11:30:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:34.726 11:30:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.726 11:30:03 -- accel/accel.sh@64 -- # IFS== 00:06:34.726 11:30:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:34.726 11:30:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:34.726 11:30:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.726 11:30:03 -- accel/accel.sh@64 -- # IFS== 00:06:34.726 11:30:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:34.726 11:30:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:34.726 11:30:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.726 11:30:03 -- accel/accel.sh@64 -- # IFS== 00:06:34.726 11:30:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:34.726 11:30:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:34.726 11:30:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.726 11:30:03 -- accel/accel.sh@64 -- # IFS== 00:06:34.726 11:30:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:34.726 11:30:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:34.726 11:30:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.726 11:30:03 -- accel/accel.sh@64 -- # IFS== 00:06:34.726 11:30:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:34.726 11:30:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:34.726 11:30:03 -- accel/accel.sh@67 -- # killprocess 2183955 00:06:34.726 11:30:03 -- common/autotest_common.sh@926 -- # '[' -z 2183955 ']' 00:06:34.726 11:30:03 -- common/autotest_common.sh@930 -- # kill -0 2183955 00:06:34.726 11:30:03 -- common/autotest_common.sh@931 -- # uname 00:06:34.726 11:30:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:34.726 11:30:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2183955 00:06:34.726 11:30:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:34.726 11:30:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:34.726 11:30:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2183955' 00:06:34.726 killing process with pid 2183955 00:06:34.726 11:30:03 -- common/autotest_common.sh@945 -- # kill 2183955 00:06:34.726 11:30:03 -- common/autotest_common.sh@950 -- # wait 2183955 00:06:34.983 11:30:04 -- accel/accel.sh@68 -- # trap - ERR 00:06:34.983 11:30:04 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:34.983 11:30:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:34.983 11:30:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:34.983 11:30:04 -- common/autotest_common.sh@10 -- # set +x 00:06:34.983 11:30:04 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:06:34.983 11:30:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:34.983 11:30:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.983 11:30:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.983 11:30:04 -- accel/accel.sh@33 -- 
# [[ 0 -gt 0 ]] 00:06:34.983 11:30:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.983 11:30:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.983 11:30:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.983 11:30:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.983 11:30:04 -- accel/accel.sh@42 -- # jq -r . 00:06:34.983 11:30:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.983 11:30:04 -- common/autotest_common.sh@10 -- # set +x 00:06:34.983 11:30:04 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:34.983 11:30:04 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:34.983 11:30:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:34.983 11:30:04 -- common/autotest_common.sh@10 -- # set +x 00:06:34.983 ************************************ 00:06:34.983 START TEST accel_missing_filename 00:06:34.983 ************************************ 00:06:34.983 11:30:04 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:06:34.983 11:30:04 -- common/autotest_common.sh@640 -- # local es=0 00:06:34.983 11:30:04 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:34.983 11:30:04 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:34.983 11:30:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:34.983 11:30:04 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:34.983 11:30:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:34.983 11:30:04 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:06:34.983 11:30:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:34.983 11:30:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.983 11:30:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.983 11:30:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.983 11:30:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.983 11:30:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.983 11:30:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.983 11:30:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.983 11:30:04 -- accel/accel.sh@42 -- # jq -r . 00:06:34.983 [2024-07-21 11:30:04.383176] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:34.983 [2024-07-21 11:30:04.383267] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2184303 ] 00:06:35.239 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.239 [2024-07-21 11:30:04.472566] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.239 [2024-07-21 11:30:04.509316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.239 [2024-07-21 11:30:04.550312] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:35.239 [2024-07-21 11:30:04.610295] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:35.499 A filename is required. 
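The "A filename is required." failure is the expected outcome here: the test wraps accel_perf in the NOT helper, which inverts the exit status so an expected failure counts as a pass (the es=234 -> es=1 bookkeeping just below is the harness normalizing the error code). A minimal stand-in for that helper, assuming only the observable behavior in this log (the real one in autotest_common.sh is more elaborate):

NOT() {
    # Succeed only if the wrapped command fails.
    if "$@"; then
        return 1    # unexpected success -> test failure
    fi
    return 0        # expected failure -> test passes
}

# Mirrors the assertion above: compress with no -l input file must fail.
NOT accel_perf -t 1 -w compress && echo "accel_missing_filename OK"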
00:06:35.499 11:30:04 -- common/autotest_common.sh@643 -- # es=234 00:06:35.499 11:30:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:35.499 11:30:04 -- common/autotest_common.sh@652 -- # es=106 00:06:35.499 11:30:04 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:35.499 11:30:04 -- common/autotest_common.sh@660 -- # es=1 00:06:35.499 11:30:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:35.499 00:06:35.499 real 0m0.321s 00:06:35.499 user 0m0.213s 00:06:35.499 sys 0m0.146s 00:06:35.499 11:30:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.499 11:30:04 -- common/autotest_common.sh@10 -- # set +x 00:06:35.499 ************************************ 00:06:35.499 END TEST accel_missing_filename 00:06:35.499 ************************************ 00:06:35.499 11:30:04 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:35.499 11:30:04 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:35.499 11:30:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:35.499 11:30:04 -- common/autotest_common.sh@10 -- # set +x 00:06:35.499 ************************************ 00:06:35.499 START TEST accel_compress_verify 00:06:35.499 ************************************ 00:06:35.499 11:30:04 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:35.499 11:30:04 -- common/autotest_common.sh@640 -- # local es=0 00:06:35.499 11:30:04 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:35.499 11:30:04 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:35.499 11:30:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:35.499 11:30:04 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:35.499 11:30:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:35.499 11:30:04 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:35.499 11:30:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.499 11:30:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.499 11:30:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:35.499 11:30:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.499 11:30:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.499 11:30:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.499 11:30:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.499 11:30:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.499 11:30:04 -- accel/accel.sh@42 -- # jq -r . 00:06:35.499 [2024-07-21 11:30:04.749257] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:35.499 [2024-07-21 11:30:04.749322] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2184473 ] 00:06:35.499 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.499 [2024-07-21 11:30:04.834485] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.499 [2024-07-21 11:30:04.869459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.499 [2024-07-21 11:30:04.910543] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:35.769 [2024-07-21 11:30:04.970739] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:35.769 00:06:35.769 Compression does not support the verify option, aborting. 00:06:35.769 11:30:05 -- common/autotest_common.sh@643 -- # es=161 00:06:35.769 11:30:05 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:35.769 11:30:05 -- common/autotest_common.sh@652 -- # es=33 00:06:35.769 11:30:05 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:35.769 11:30:05 -- common/autotest_common.sh@660 -- # es=1 00:06:35.769 11:30:05 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:35.769 00:06:35.769 real 0m0.313s 00:06:35.769 user 0m0.195s 00:06:35.769 sys 0m0.138s 00:06:35.769 11:30:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.769 11:30:05 -- common/autotest_common.sh@10 -- # set +x 00:06:35.769 ************************************ 00:06:35.769 END TEST accel_compress_verify 00:06:35.769 ************************************ 00:06:35.769 11:30:05 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:35.769 11:30:05 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:35.769 11:30:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:35.769 11:30:05 -- common/autotest_common.sh@10 -- # set +x 00:06:35.769 ************************************ 00:06:35.769 START TEST accel_wrong_workload 00:06:35.769 ************************************ 00:06:35.769 11:30:05 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:06:35.769 11:30:05 -- common/autotest_common.sh@640 -- # local es=0 00:06:35.769 11:30:05 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:35.769 11:30:05 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:35.769 11:30:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:35.769 11:30:05 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:35.769 11:30:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:35.769 11:30:05 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:06:35.769 11:30:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:35.769 11:30:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.769 11:30:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.769 11:30:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.769 11:30:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.769 11:30:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.769 11:30:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.769 11:30:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.769 11:30:05 -- accel/accel.sh@42 -- # jq -r . 
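Note that every accel_perf invocation in this log receives its accel configuration as '-c /dev/fd/62': build_accel_config assembles a JSON document from the accel_json_cfg array and hands it to the app through a bash process substitution, so nothing is written to disk. A hypothetical sketch of that wiring, not the harness code (the JSON payload below is a placeholder assumption; in this run accel_json_cfg stays empty because none of the [[ 0 -gt 0 ]] module checks fire):

/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
    -c <(printf '{"subsystems":[]}') -t 1 -w crc32c -S 32 -y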
00:06:35.769 Unsupported workload type: foobar 00:06:35.769 [2024-07-21 11:30:05.109328] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:35.769 accel_perf options: 00:06:35.769 [-h help message] 00:06:35.769 [-q queue depth per core] 00:06:35.769 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:35.769 [-T number of threads per core 00:06:35.769 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:35.769 [-t time in seconds] 00:06:35.769 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:35.769 [ dif_verify, , dif_generate, dif_generate_copy 00:06:35.769 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:35.769 [-l for compress/decompress workloads, name of uncompressed input file 00:06:35.769 [-S for crc32c workload, use this seed value (default 0) 00:06:35.769 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:35.769 [-f for fill workload, use this BYTE value (default 255) 00:06:35.769 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:35.769 [-y verify result if this switch is on] 00:06:35.769 [-a tasks to allocate per core (default: same value as -q)] 00:06:35.769 Can be used to spread operations across a wider range of memory. 00:06:35.769 11:30:05 -- common/autotest_common.sh@643 -- # es=1 00:06:35.769 11:30:05 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:35.769 11:30:05 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:35.769 11:30:05 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:35.769 00:06:35.769 real 0m0.037s 00:06:35.769 user 0m0.021s 00:06:35.769 sys 0m0.016s 00:06:35.769 11:30:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.769 11:30:05 -- common/autotest_common.sh@10 -- # set +x 00:06:35.769 ************************************ 00:06:35.769 END TEST accel_wrong_workload 00:06:35.769 ************************************ 00:06:35.769 Error: writing output failed: Broken pipe 00:06:35.769 11:30:05 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:35.769 11:30:05 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:35.769 11:30:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:35.769 11:30:05 -- common/autotest_common.sh@10 -- # set +x 00:06:35.769 ************************************ 00:06:35.769 START TEST accel_negative_buffers 00:06:35.769 ************************************ 00:06:35.769 11:30:05 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:35.769 11:30:05 -- common/autotest_common.sh@640 -- # local es=0 00:06:35.769 11:30:05 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:35.769 11:30:05 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:35.769 11:30:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:35.769 11:30:05 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:35.769 11:30:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:35.769 11:30:05 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:06:35.769 11:30:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 
-x -1 00:06:35.769 11:30:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.769 11:30:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.769 11:30:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.769 11:30:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.769 11:30:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.769 11:30:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.769 11:30:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.769 11:30:05 -- accel/accel.sh@42 -- # jq -r . 00:06:36.040 -x option must be non-negative. 00:06:36.040 [2024-07-21 11:30:05.188985] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:36.040 accel_perf options: 00:06:36.040 [-h help message] 00:06:36.040 [-q queue depth per core] 00:06:36.040 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:36.040 [-T number of threads per core 00:06:36.040 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:36.040 [-t time in seconds] 00:06:36.040 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:36.040 [ dif_verify, , dif_generate, dif_generate_copy 00:06:36.040 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:36.040 [-l for compress/decompress workloads, name of uncompressed input file 00:06:36.040 [-S for crc32c workload, use this seed value (default 0) 00:06:36.040 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:36.040 [-f for fill workload, use this BYTE value (default 255) 00:06:36.040 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:36.040 [-y verify result if this switch is on] 00:06:36.040 [-a tasks to allocate per core (default: same value as -q)] 00:06:36.040 Can be used to spread operations across a wider range of memory. 
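Per the usage text just printed, the xor workload requires at least two source buffers; the accel_negative_buffers test deliberately passes '-x -1' to provoke exactly this parse error. For contrast, a well-formed xor invocation would look like the following (a sketch, not taken from this log):

/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
    -t 1 -w xor -y -x 2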
00:06:36.040 11:30:05 -- common/autotest_common.sh@643 -- # es=1 00:06:36.040 11:30:05 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:36.040 11:30:05 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:36.040 11:30:05 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:36.040 00:06:36.040 real 0m0.035s 00:06:36.040 user 0m0.016s 00:06:36.040 sys 0m0.019s 00:06:36.040 11:30:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.040 11:30:05 -- common/autotest_common.sh@10 -- # set +x 00:06:36.040 ************************************ 00:06:36.040 END TEST accel_negative_buffers 00:06:36.040 ************************************ 00:06:36.040 Error: writing output failed: Broken pipe 00:06:36.040 11:30:05 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:36.040 11:30:05 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:36.040 11:30:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:36.040 11:30:05 -- common/autotest_common.sh@10 -- # set +x 00:06:36.040 ************************************ 00:06:36.040 START TEST accel_crc32c 00:06:36.040 ************************************ 00:06:36.040 11:30:05 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:36.040 11:30:05 -- accel/accel.sh@16 -- # local accel_opc 00:06:36.040 11:30:05 -- accel/accel.sh@17 -- # local accel_module 00:06:36.040 11:30:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:36.040 11:30:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:36.040 11:30:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.040 11:30:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.040 11:30:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.040 11:30:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.040 11:30:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.040 11:30:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.040 11:30:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.040 11:30:05 -- accel/accel.sh@42 -- # jq -r . 00:06:36.041 [2024-07-21 11:30:05.269948] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:36.041 [2024-07-21 11:30:05.270016] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2184646 ] 00:06:36.041 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.041 [2024-07-21 11:30:05.354595] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.041 [2024-07-21 11:30:05.391322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.413 11:30:06 -- accel/accel.sh@18 -- # out=' 00:06:37.413 SPDK Configuration: 00:06:37.413 Core mask: 0x1 00:06:37.413 00:06:37.413 Accel Perf Configuration: 00:06:37.413 Workload Type: crc32c 00:06:37.413 CRC-32C seed: 32 00:06:37.413 Transfer size: 4096 bytes 00:06:37.413 Vector count 1 00:06:37.413 Module: software 00:06:37.413 Queue depth: 32 00:06:37.413 Allocate depth: 32 00:06:37.413 # threads/core: 1 00:06:37.413 Run time: 1 seconds 00:06:37.413 Verify: Yes 00:06:37.413 00:06:37.413 Running for 1 seconds... 
00:06:37.413 00:06:37.413 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:37.413 ------------------------------------------------------------------------------------ 00:06:37.413 0,0 577728/s 2256 MiB/s 0 0 00:06:37.413 ==================================================================================== 00:06:37.413 Total 577728/s 2256 MiB/s 0 0' 00:06:37.413 11:30:06 -- accel/accel.sh@20 -- # IFS=: 00:06:37.413 11:30:06 -- accel/accel.sh@20 -- # read -r var val 00:06:37.413 11:30:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:37.413 11:30:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:37.413 11:30:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.413 11:30:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.413 11:30:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.413 11:30:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.413 11:30:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.413 11:30:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.413 11:30:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.413 11:30:06 -- accel/accel.sh@42 -- # jq -r . 00:06:37.413 [2024-07-21 11:30:06.584211] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:37.413 [2024-07-21 11:30:06.584293] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2185036 ] 00:06:37.413 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.413 [2024-07-21 11:30:06.666841] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.413 [2024-07-21 11:30:06.701474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.413 11:30:06 -- accel/accel.sh@21 -- # val= 00:06:37.413 11:30:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.413 11:30:06 -- accel/accel.sh@20 -- # IFS=: 00:06:37.413 11:30:06 -- accel/accel.sh@20 -- # read -r var val 00:06:37.413 11:30:06 -- accel/accel.sh@21 -- # val= 00:06:37.413 11:30:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.413 11:30:06 -- accel/accel.sh@20 -- # IFS=: 00:06:37.413 11:30:06 -- accel/accel.sh@20 -- # read -r var val 00:06:37.413 11:30:06 -- accel/accel.sh@21 -- # val=0x1 00:06:37.413 11:30:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.413 11:30:06 -- accel/accel.sh@20 -- # IFS=: 00:06:37.413 11:30:06 -- accel/accel.sh@20 -- # read -r var val 00:06:37.413 11:30:06 -- accel/accel.sh@21 -- # val= 00:06:37.413 11:30:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.413 11:30:06 -- accel/accel.sh@20 -- # IFS=: 00:06:37.413 11:30:06 -- accel/accel.sh@20 -- # read -r var val 00:06:37.413 11:30:06 -- accel/accel.sh@21 -- # val= 00:06:37.413 11:30:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.413 11:30:06 -- accel/accel.sh@20 -- # IFS=: 00:06:37.413 11:30:06 -- accel/accel.sh@20 -- # read -r var val 00:06:37.413 11:30:06 -- accel/accel.sh@21 -- # val=crc32c 00:06:37.413 11:30:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.413 11:30:06 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:37.413 11:30:06 -- accel/accel.sh@20 -- # IFS=: 00:06:37.413 11:30:06 -- accel/accel.sh@20 -- # read -r var val 00:06:37.413 11:30:06 -- accel/accel.sh@21 -- # val=32 00:06:37.413 11:30:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.413 11:30:06 -- accel/accel.sh@20 -- # IFS=: 00:06:37.413 11:30:06 
-- accel/accel.sh@20 -- # read -r var val 00:06:37.413 11:30:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:37.413 11:30:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.413 11:30:06 -- accel/accel.sh@20 -- # IFS=: 00:06:37.413 11:30:06 -- accel/accel.sh@20 -- # read -r var val 00:06:37.413 11:30:06 -- accel/accel.sh@21 -- # val= 00:06:37.413 11:30:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.413 11:30:06 -- accel/accel.sh@20 -- # IFS=: 00:06:37.413 11:30:06 -- accel/accel.sh@20 -- # read -r var val 00:06:37.413 11:30:06 -- accel/accel.sh@21 -- # val=software 00:06:37.413 11:30:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.413 11:30:06 -- accel/accel.sh@23 -- # accel_module=software 00:06:37.413 11:30:06 -- accel/accel.sh@20 -- # IFS=: 00:06:37.413 11:30:06 -- accel/accel.sh@20 -- # read -r var val 00:06:37.413 11:30:06 -- accel/accel.sh@21 -- # val=32 00:06:37.413 11:30:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.413 11:30:06 -- accel/accel.sh@20 -- # IFS=: 00:06:37.413 11:30:06 -- accel/accel.sh@20 -- # read -r var val 00:06:37.413 11:30:06 -- accel/accel.sh@21 -- # val=32 00:06:37.413 11:30:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.413 11:30:06 -- accel/accel.sh@20 -- # IFS=: 00:06:37.413 11:30:06 -- accel/accel.sh@20 -- # read -r var val 00:06:37.413 11:30:06 -- accel/accel.sh@21 -- # val=1 00:06:37.413 11:30:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.413 11:30:06 -- accel/accel.sh@20 -- # IFS=: 00:06:37.413 11:30:06 -- accel/accel.sh@20 -- # read -r var val 00:06:37.413 11:30:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:37.413 11:30:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.413 11:30:06 -- accel/accel.sh@20 -- # IFS=: 00:06:37.413 11:30:06 -- accel/accel.sh@20 -- # read -r var val 00:06:37.413 11:30:06 -- accel/accel.sh@21 -- # val=Yes 00:06:37.413 11:30:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.413 11:30:06 -- accel/accel.sh@20 -- # IFS=: 00:06:37.413 11:30:06 -- accel/accel.sh@20 -- # read -r var val 00:06:37.413 11:30:06 -- accel/accel.sh@21 -- # val= 00:06:37.413 11:30:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.413 11:30:06 -- accel/accel.sh@20 -- # IFS=: 00:06:37.413 11:30:06 -- accel/accel.sh@20 -- # read -r var val 00:06:37.413 11:30:06 -- accel/accel.sh@21 -- # val= 00:06:37.413 11:30:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.413 11:30:06 -- accel/accel.sh@20 -- # IFS=: 00:06:37.413 11:30:06 -- accel/accel.sh@20 -- # read -r var val 00:06:38.787 11:30:07 -- accel/accel.sh@21 -- # val= 00:06:38.787 11:30:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.787 11:30:07 -- accel/accel.sh@20 -- # IFS=: 00:06:38.787 11:30:07 -- accel/accel.sh@20 -- # read -r var val 00:06:38.787 11:30:07 -- accel/accel.sh@21 -- # val= 00:06:38.787 11:30:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.787 11:30:07 -- accel/accel.sh@20 -- # IFS=: 00:06:38.787 11:30:07 -- accel/accel.sh@20 -- # read -r var val 00:06:38.787 11:30:07 -- accel/accel.sh@21 -- # val= 00:06:38.787 11:30:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.787 11:30:07 -- accel/accel.sh@20 -- # IFS=: 00:06:38.787 11:30:07 -- accel/accel.sh@20 -- # read -r var val 00:06:38.787 11:30:07 -- accel/accel.sh@21 -- # val= 00:06:38.787 11:30:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.787 11:30:07 -- accel/accel.sh@20 -- # IFS=: 00:06:38.787 11:30:07 -- accel/accel.sh@20 -- # read -r var val 00:06:38.787 11:30:07 -- accel/accel.sh@21 -- # val= 00:06:38.787 11:30:07 -- accel/accel.sh@22 -- # case "$var" in 
00:06:38.787 11:30:07 -- accel/accel.sh@20 -- # IFS=: 00:06:38.787 11:30:07 -- accel/accel.sh@20 -- # read -r var val 00:06:38.787 11:30:07 -- accel/accel.sh@21 -- # val= 00:06:38.787 11:30:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.787 11:30:07 -- accel/accel.sh@20 -- # IFS=: 00:06:38.787 11:30:07 -- accel/accel.sh@20 -- # read -r var val 00:06:38.787 11:30:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:38.787 11:30:07 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:38.787 11:30:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.787 00:06:38.787 real 0m2.630s 00:06:38.787 user 0m2.362s 00:06:38.787 sys 0m0.278s 00:06:38.788 11:30:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.788 11:30:07 -- common/autotest_common.sh@10 -- # set +x 00:06:38.788 ************************************ 00:06:38.788 END TEST accel_crc32c 00:06:38.788 ************************************ 00:06:38.788 11:30:07 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:38.788 11:30:07 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:38.788 11:30:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:38.788 11:30:07 -- common/autotest_common.sh@10 -- # set +x 00:06:38.788 ************************************ 00:06:38.788 START TEST accel_crc32c_C2 00:06:38.788 ************************************ 00:06:38.788 11:30:07 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:38.788 11:30:07 -- accel/accel.sh@16 -- # local accel_opc 00:06:38.788 11:30:07 -- accel/accel.sh@17 -- # local accel_module 00:06:38.788 11:30:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:38.788 11:30:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:38.788 11:30:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.788 11:30:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.788 11:30:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.788 11:30:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.788 11:30:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.788 11:30:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.788 11:30:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.788 11:30:07 -- accel/accel.sh@42 -- # jq -r . 00:06:38.788 [2024-07-21 11:30:07.948003] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:38.788 [2024-07-21 11:30:07.948091] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2185321 ] 00:06:38.788 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.788 [2024-07-21 11:30:08.032696] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.788 [2024-07-21 11:30:08.068478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.162 11:30:09 -- accel/accel.sh@18 -- # out=' 00:06:40.162 SPDK Configuration: 00:06:40.162 Core mask: 0x1 00:06:40.162 00:06:40.162 Accel Perf Configuration: 00:06:40.162 Workload Type: crc32c 00:06:40.162 CRC-32C seed: 0 00:06:40.162 Transfer size: 4096 bytes 00:06:40.162 Vector count 2 00:06:40.162 Module: software 00:06:40.162 Queue depth: 32 00:06:40.162 Allocate depth: 32 00:06:40.162 # threads/core: 1 00:06:40.162 Run time: 1 seconds 00:06:40.162 Verify: Yes 00:06:40.162 00:06:40.162 Running for 1 seconds... 00:06:40.162 00:06:40.162 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:40.162 ------------------------------------------------------------------------------------ 00:06:40.162 0,0 475904/s 3718 MiB/s 0 0 00:06:40.162 ==================================================================================== 00:06:40.162 Total 475904/s 1859 MiB/s 0 0' 00:06:40.162 11:30:09 -- accel/accel.sh@20 -- # IFS=: 00:06:40.162 11:30:09 -- accel/accel.sh@20 -- # read -r var val 00:06:40.162 11:30:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:40.162 11:30:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.162 11:30:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.162 11:30:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.162 11:30:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:40.162 11:30:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.162 11:30:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.162 11:30:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.162 11:30:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.162 11:30:09 -- accel/accel.sh@42 -- # jq -r . 00:06:40.162 [2024-07-21 11:30:09.261792] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:40.162 [2024-07-21 11:30:09.261870] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2185596 ] 00:06:40.162 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.162 [2024-07-21 11:30:09.345336] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.162 [2024-07-21 11:30:09.379786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.162 11:30:09 -- accel/accel.sh@21 -- # val= 00:06:40.162 11:30:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.162 11:30:09 -- accel/accel.sh@20 -- # IFS=: 00:06:40.162 11:30:09 -- accel/accel.sh@20 -- # read -r var val 00:06:40.162 11:30:09 -- accel/accel.sh@21 -- # val= 00:06:40.162 11:30:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.162 11:30:09 -- accel/accel.sh@20 -- # IFS=: 00:06:40.162 11:30:09 -- accel/accel.sh@20 -- # read -r var val 00:06:40.162 11:30:09 -- accel/accel.sh@21 -- # val=0x1 00:06:40.162 11:30:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.162 11:30:09 -- accel/accel.sh@20 -- # IFS=: 00:06:40.162 11:30:09 -- accel/accel.sh@20 -- # read -r var val 00:06:40.162 11:30:09 -- accel/accel.sh@21 -- # val= 00:06:40.162 11:30:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.162 11:30:09 -- accel/accel.sh@20 -- # IFS=: 00:06:40.162 11:30:09 -- accel/accel.sh@20 -- # read -r var val 00:06:40.162 11:30:09 -- accel/accel.sh@21 -- # val= 00:06:40.162 11:30:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.162 11:30:09 -- accel/accel.sh@20 -- # IFS=: 00:06:40.162 11:30:09 -- accel/accel.sh@20 -- # read -r var val 00:06:40.162 11:30:09 -- accel/accel.sh@21 -- # val=crc32c 00:06:40.162 11:30:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.162 11:30:09 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:40.162 11:30:09 -- accel/accel.sh@20 -- # IFS=: 00:06:40.162 11:30:09 -- accel/accel.sh@20 -- # read -r var val 00:06:40.162 11:30:09 -- accel/accel.sh@21 -- # val=0 00:06:40.162 11:30:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.162 11:30:09 -- accel/accel.sh@20 -- # IFS=: 00:06:40.162 11:30:09 -- accel/accel.sh@20 -- # read -r var val 00:06:40.162 11:30:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:40.162 11:30:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.162 11:30:09 -- accel/accel.sh@20 -- # IFS=: 00:06:40.162 11:30:09 -- accel/accel.sh@20 -- # read -r var val 00:06:40.162 11:30:09 -- accel/accel.sh@21 -- # val= 00:06:40.162 11:30:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.162 11:30:09 -- accel/accel.sh@20 -- # IFS=: 00:06:40.162 11:30:09 -- accel/accel.sh@20 -- # read -r var val 00:06:40.162 11:30:09 -- accel/accel.sh@21 -- # val=software 00:06:40.162 11:30:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.162 11:30:09 -- accel/accel.sh@23 -- # accel_module=software 00:06:40.162 11:30:09 -- accel/accel.sh@20 -- # IFS=: 00:06:40.162 11:30:09 -- accel/accel.sh@20 -- # read -r var val 00:06:40.162 11:30:09 -- accel/accel.sh@21 -- # val=32 00:06:40.162 11:30:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.162 11:30:09 -- accel/accel.sh@20 -- # IFS=: 00:06:40.162 11:30:09 -- accel/accel.sh@20 -- # read -r var val 00:06:40.162 11:30:09 -- accel/accel.sh@21 -- # val=32 00:06:40.162 11:30:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.162 11:30:09 -- accel/accel.sh@20 -- # IFS=: 00:06:40.162 11:30:09 -- accel/accel.sh@20 -- # read -r var val 00:06:40.162 11:30:09 -- 
accel/accel.sh@21 -- # val=1 00:06:40.162 11:30:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.162 11:30:09 -- accel/accel.sh@20 -- # IFS=: 00:06:40.162 11:30:09 -- accel/accel.sh@20 -- # read -r var val 00:06:40.162 11:30:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:40.162 11:30:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.162 11:30:09 -- accel/accel.sh@20 -- # IFS=: 00:06:40.162 11:30:09 -- accel/accel.sh@20 -- # read -r var val 00:06:40.162 11:30:09 -- accel/accel.sh@21 -- # val=Yes 00:06:40.162 11:30:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.162 11:30:09 -- accel/accel.sh@20 -- # IFS=: 00:06:40.162 11:30:09 -- accel/accel.sh@20 -- # read -r var val 00:06:40.162 11:30:09 -- accel/accel.sh@21 -- # val= 00:06:40.162 11:30:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.162 11:30:09 -- accel/accel.sh@20 -- # IFS=: 00:06:40.162 11:30:09 -- accel/accel.sh@20 -- # read -r var val 00:06:40.162 11:30:09 -- accel/accel.sh@21 -- # val= 00:06:40.162 11:30:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.162 11:30:09 -- accel/accel.sh@20 -- # IFS=: 00:06:40.162 11:30:09 -- accel/accel.sh@20 -- # read -r var val 00:06:41.534 11:30:10 -- accel/accel.sh@21 -- # val= 00:06:41.534 11:30:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.534 11:30:10 -- accel/accel.sh@20 -- # IFS=: 00:06:41.534 11:30:10 -- accel/accel.sh@20 -- # read -r var val 00:06:41.534 11:30:10 -- accel/accel.sh@21 -- # val= 00:06:41.534 11:30:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.534 11:30:10 -- accel/accel.sh@20 -- # IFS=: 00:06:41.534 11:30:10 -- accel/accel.sh@20 -- # read -r var val 00:06:41.534 11:30:10 -- accel/accel.sh@21 -- # val= 00:06:41.534 11:30:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.534 11:30:10 -- accel/accel.sh@20 -- # IFS=: 00:06:41.534 11:30:10 -- accel/accel.sh@20 -- # read -r var val 00:06:41.534 11:30:10 -- accel/accel.sh@21 -- # val= 00:06:41.534 11:30:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.534 11:30:10 -- accel/accel.sh@20 -- # IFS=: 00:06:41.534 11:30:10 -- accel/accel.sh@20 -- # read -r var val 00:06:41.534 11:30:10 -- accel/accel.sh@21 -- # val= 00:06:41.534 11:30:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.534 11:30:10 -- accel/accel.sh@20 -- # IFS=: 00:06:41.534 11:30:10 -- accel/accel.sh@20 -- # read -r var val 00:06:41.534 11:30:10 -- accel/accel.sh@21 -- # val= 00:06:41.534 11:30:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.534 11:30:10 -- accel/accel.sh@20 -- # IFS=: 00:06:41.534 11:30:10 -- accel/accel.sh@20 -- # read -r var val 00:06:41.534 11:30:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:41.534 11:30:10 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:41.534 11:30:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.534 00:06:41.534 real 0m2.633s 00:06:41.534 user 0m2.367s 00:06:41.534 sys 0m0.276s 00:06:41.534 11:30:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.534 11:30:10 -- common/autotest_common.sh@10 -- # set +x 00:06:41.534 ************************************ 00:06:41.534 END TEST accel_crc32c_C2 00:06:41.534 ************************************ 00:06:41.534 11:30:10 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:41.534 11:30:10 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:41.534 11:30:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:41.534 11:30:10 -- common/autotest_common.sh@10 -- # set +x 00:06:41.534 ************************************ 00:06:41.534 START TEST accel_copy 
00:06:41.534 ************************************ 00:06:41.534 11:30:10 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:06:41.534 11:30:10 -- accel/accel.sh@16 -- # local accel_opc 00:06:41.534 11:30:10 -- accel/accel.sh@17 -- # local accel_module 00:06:41.534 11:30:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:41.534 11:30:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:41.534 11:30:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.534 11:30:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:41.534 11:30:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.534 11:30:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.534 11:30:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:41.534 11:30:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:41.534 11:30:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:41.534 11:30:10 -- accel/accel.sh@42 -- # jq -r . 00:06:41.534 [2024-07-21 11:30:10.623608] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:41.534 [2024-07-21 11:30:10.623682] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2185804 ] 00:06:41.534 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.534 [2024-07-21 11:30:10.708358] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.534 [2024-07-21 11:30:10.743276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.905 11:30:11 -- accel/accel.sh@18 -- # out=' 00:06:42.905 SPDK Configuration: 00:06:42.905 Core mask: 0x1 00:06:42.905 00:06:42.905 Accel Perf Configuration: 00:06:42.905 Workload Type: copy 00:06:42.905 Transfer size: 4096 bytes 00:06:42.905 Vector count 1 00:06:42.905 Module: software 00:06:42.905 Queue depth: 32 00:06:42.905 Allocate depth: 32 00:06:42.905 # threads/core: 1 00:06:42.905 Run time: 1 seconds 00:06:42.905 Verify: Yes 00:06:42.905 00:06:42.905 Running for 1 seconds... 00:06:42.905 00:06:42.905 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:42.905 ------------------------------------------------------------------------------------ 00:06:42.905 0,0 447456/s 1747 MiB/s 0 0 00:06:42.905 ==================================================================================== 00:06:42.905 Total 447456/s 1747 MiB/s 0 0' 00:06:42.905 11:30:11 -- accel/accel.sh@20 -- # IFS=: 00:06:42.905 11:30:11 -- accel/accel.sh@20 -- # read -r var val 00:06:42.905 11:30:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:42.905 11:30:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.905 11:30:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:42.905 11:30:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.905 11:30:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.905 11:30:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.905 11:30:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.905 11:30:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.905 11:30:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.905 11:30:11 -- accel/accel.sh@42 -- # jq -r . 00:06:42.905 [2024-07-21 11:30:11.936346] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:42.905 [2024-07-21 11:30:11.936412] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2185958 ] 00:06:42.905 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.905 [2024-07-21 11:30:12.019235] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.905 [2024-07-21 11:30:12.053569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.905 11:30:12 -- accel/accel.sh@21 -- # val= 00:06:42.905 11:30:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.905 11:30:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.905 11:30:12 -- accel/accel.sh@20 -- # read -r var val 00:06:42.905 11:30:12 -- accel/accel.sh@21 -- # val= 00:06:42.905 11:30:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.905 11:30:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.905 11:30:12 -- accel/accel.sh@20 -- # read -r var val 00:06:42.905 11:30:12 -- accel/accel.sh@21 -- # val=0x1 00:06:42.905 11:30:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.905 11:30:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.905 11:30:12 -- accel/accel.sh@20 -- # read -r var val 00:06:42.905 11:30:12 -- accel/accel.sh@21 -- # val= 00:06:42.905 11:30:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.905 11:30:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.905 11:30:12 -- accel/accel.sh@20 -- # read -r var val 00:06:42.905 11:30:12 -- accel/accel.sh@21 -- # val= 00:06:42.905 11:30:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.905 11:30:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.905 11:30:12 -- accel/accel.sh@20 -- # read -r var val 00:06:42.905 11:30:12 -- accel/accel.sh@21 -- # val=copy 00:06:42.905 11:30:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.905 11:30:12 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:42.905 11:30:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.905 11:30:12 -- accel/accel.sh@20 -- # read -r var val 00:06:42.905 11:30:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:42.905 11:30:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.905 11:30:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.905 11:30:12 -- accel/accel.sh@20 -- # read -r var val 00:06:42.905 11:30:12 -- accel/accel.sh@21 -- # val= 00:06:42.905 11:30:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.905 11:30:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.905 11:30:12 -- accel/accel.sh@20 -- # read -r var val 00:06:42.905 11:30:12 -- accel/accel.sh@21 -- # val=software 00:06:42.905 11:30:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.905 11:30:12 -- accel/accel.sh@23 -- # accel_module=software 00:06:42.905 11:30:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.905 11:30:12 -- accel/accel.sh@20 -- # read -r var val 00:06:42.905 11:30:12 -- accel/accel.sh@21 -- # val=32 00:06:42.905 11:30:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.905 11:30:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.905 11:30:12 -- accel/accel.sh@20 -- # read -r var val 00:06:42.905 11:30:12 -- accel/accel.sh@21 -- # val=32 00:06:42.905 11:30:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.905 11:30:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.905 11:30:12 -- accel/accel.sh@20 -- # read -r var val 00:06:42.905 11:30:12 -- accel/accel.sh@21 -- # val=1 00:06:42.905 11:30:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.905 11:30:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.905 11:30:12 -- accel/accel.sh@20 -- # read -r var val 00:06:42.905 11:30:12 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:42.905 11:30:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.905 11:30:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.905 11:30:12 -- accel/accel.sh@20 -- # read -r var val 00:06:42.905 11:30:12 -- accel/accel.sh@21 -- # val=Yes 00:06:42.905 11:30:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.905 11:30:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.905 11:30:12 -- accel/accel.sh@20 -- # read -r var val 00:06:42.905 11:30:12 -- accel/accel.sh@21 -- # val= 00:06:42.905 11:30:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.905 11:30:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.905 11:30:12 -- accel/accel.sh@20 -- # read -r var val 00:06:42.905 11:30:12 -- accel/accel.sh@21 -- # val= 00:06:42.905 11:30:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.905 11:30:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.905 11:30:12 -- accel/accel.sh@20 -- # read -r var val 00:06:43.835 11:30:13 -- accel/accel.sh@21 -- # val= 00:06:43.835 11:30:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.835 11:30:13 -- accel/accel.sh@20 -- # IFS=: 00:06:43.835 11:30:13 -- accel/accel.sh@20 -- # read -r var val 00:06:43.835 11:30:13 -- accel/accel.sh@21 -- # val= 00:06:43.835 11:30:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.835 11:30:13 -- accel/accel.sh@20 -- # IFS=: 00:06:43.835 11:30:13 -- accel/accel.sh@20 -- # read -r var val 00:06:43.835 11:30:13 -- accel/accel.sh@21 -- # val= 00:06:43.835 11:30:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.835 11:30:13 -- accel/accel.sh@20 -- # IFS=: 00:06:43.835 11:30:13 -- accel/accel.sh@20 -- # read -r var val 00:06:43.835 11:30:13 -- accel/accel.sh@21 -- # val= 00:06:43.835 11:30:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.835 11:30:13 -- accel/accel.sh@20 -- # IFS=: 00:06:43.835 11:30:13 -- accel/accel.sh@20 -- # read -r var val 00:06:43.835 11:30:13 -- accel/accel.sh@21 -- # val= 00:06:43.835 11:30:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.835 11:30:13 -- accel/accel.sh@20 -- # IFS=: 00:06:43.835 11:30:13 -- accel/accel.sh@20 -- # read -r var val 00:06:43.835 11:30:13 -- accel/accel.sh@21 -- # val= 00:06:43.835 11:30:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.835 11:30:13 -- accel/accel.sh@20 -- # IFS=: 00:06:43.835 11:30:13 -- accel/accel.sh@20 -- # read -r var val 00:06:43.835 11:30:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:43.835 11:30:13 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:43.835 11:30:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.835 00:06:43.835 real 0m2.629s 00:06:43.835 user 0m2.355s 00:06:43.835 sys 0m0.283s 00:06:43.835 11:30:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.835 11:30:13 -- common/autotest_common.sh@10 -- # set +x 00:06:43.835 ************************************ 00:06:43.835 END TEST accel_copy 00:06:43.835 ************************************ 00:06:44.093 11:30:13 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:44.093 11:30:13 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:44.093 11:30:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:44.093 11:30:13 -- common/autotest_common.sh@10 -- # set +x 00:06:44.093 ************************************ 00:06:44.093 START TEST accel_fill 00:06:44.093 ************************************ 00:06:44.093 11:30:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:44.093 11:30:13 -- accel/accel.sh@16 -- # local accel_opc 
00:06:44.093 11:30:13 -- accel/accel.sh@17 -- # local accel_module 00:06:44.093 11:30:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:44.093 11:30:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.093 11:30:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:44.093 11:30:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.093 11:30:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.093 11:30:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.093 11:30:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.093 11:30:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.093 11:30:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.093 11:30:13 -- accel/accel.sh@42 -- # jq -r . 00:06:44.093 [2024-07-21 11:30:13.295956] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:44.093 [2024-07-21 11:30:13.296028] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2186191 ] 00:06:44.093 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.093 [2024-07-21 11:30:13.382650] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.093 [2024-07-21 11:30:13.417423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.463 11:30:14 -- accel/accel.sh@18 -- # out=' 00:06:45.463 SPDK Configuration: 00:06:45.463 Core mask: 0x1 00:06:45.463 00:06:45.463 Accel Perf Configuration: 00:06:45.463 Workload Type: fill 00:06:45.463 Fill pattern: 0x80 00:06:45.463 Transfer size: 4096 bytes 00:06:45.463 Vector count 1 00:06:45.463 Module: software 00:06:45.463 Queue depth: 64 00:06:45.463 Allocate depth: 64 00:06:45.463 # threads/core: 1 00:06:45.463 Run time: 1 seconds 00:06:45.463 Verify: Yes 00:06:45.463 00:06:45.463 Running for 1 seconds... 00:06:45.463 00:06:45.463 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:45.463 ------------------------------------------------------------------------------------ 00:06:45.463 0,0 681920/s 2663 MiB/s 0 0 00:06:45.463 ==================================================================================== 00:06:45.463 Total 681920/s 2663 MiB/s 0 0' 00:06:45.463 11:30:14 -- accel/accel.sh@20 -- # IFS=: 00:06:45.463 11:30:14 -- accel/accel.sh@20 -- # read -r var val 00:06:45.463 11:30:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:45.463 11:30:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:45.463 11:30:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.463 11:30:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.463 11:30:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.463 11:30:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.463 11:30:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.463 11:30:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.463 11:30:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.464 11:30:14 -- accel/accel.sh@42 -- # jq -r . 00:06:45.464 [2024-07-21 11:30:14.612772] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:45.464 [2024-07-21 11:30:14.612838] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2186457 ] 00:06:45.464 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.464 [2024-07-21 11:30:14.696157] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.464 [2024-07-21 11:30:14.730531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.464 11:30:14 -- accel/accel.sh@21 -- # val= 00:06:45.464 11:30:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.464 11:30:14 -- accel/accel.sh@20 -- # IFS=: 00:06:45.464 11:30:14 -- accel/accel.sh@20 -- # read -r var val 00:06:45.464 11:30:14 -- accel/accel.sh@21 -- # val= 00:06:45.464 11:30:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.464 11:30:14 -- accel/accel.sh@20 -- # IFS=: 00:06:45.464 11:30:14 -- accel/accel.sh@20 -- # read -r var val 00:06:45.464 11:30:14 -- accel/accel.sh@21 -- # val=0x1 00:06:45.464 11:30:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.464 11:30:14 -- accel/accel.sh@20 -- # IFS=: 00:06:45.464 11:30:14 -- accel/accel.sh@20 -- # read -r var val 00:06:45.464 11:30:14 -- accel/accel.sh@21 -- # val= 00:06:45.464 11:30:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.464 11:30:14 -- accel/accel.sh@20 -- # IFS=: 00:06:45.464 11:30:14 -- accel/accel.sh@20 -- # read -r var val 00:06:45.464 11:30:14 -- accel/accel.sh@21 -- # val= 00:06:45.464 11:30:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.464 11:30:14 -- accel/accel.sh@20 -- # IFS=: 00:06:45.464 11:30:14 -- accel/accel.sh@20 -- # read -r var val 00:06:45.464 11:30:14 -- accel/accel.sh@21 -- # val=fill 00:06:45.464 11:30:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.464 11:30:14 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:45.464 11:30:14 -- accel/accel.sh@20 -- # IFS=: 00:06:45.464 11:30:14 -- accel/accel.sh@20 -- # read -r var val 00:06:45.464 11:30:14 -- accel/accel.sh@21 -- # val=0x80 00:06:45.464 11:30:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.464 11:30:14 -- accel/accel.sh@20 -- # IFS=: 00:06:45.464 11:30:14 -- accel/accel.sh@20 -- # read -r var val 00:06:45.464 11:30:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:45.464 11:30:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.464 11:30:14 -- accel/accel.sh@20 -- # IFS=: 00:06:45.464 11:30:14 -- accel/accel.sh@20 -- # read -r var val 00:06:45.464 11:30:14 -- accel/accel.sh@21 -- # val= 00:06:45.464 11:30:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.464 11:30:14 -- accel/accel.sh@20 -- # IFS=: 00:06:45.464 11:30:14 -- accel/accel.sh@20 -- # read -r var val 00:06:45.464 11:30:14 -- accel/accel.sh@21 -- # val=software 00:06:45.464 11:30:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.464 11:30:14 -- accel/accel.sh@23 -- # accel_module=software 00:06:45.464 11:30:14 -- accel/accel.sh@20 -- # IFS=: 00:06:45.464 11:30:14 -- accel/accel.sh@20 -- # read -r var val 00:06:45.464 11:30:14 -- accel/accel.sh@21 -- # val=64 00:06:45.464 11:30:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.464 11:30:14 -- accel/accel.sh@20 -- # IFS=: 00:06:45.464 11:30:14 -- accel/accel.sh@20 -- # read -r var val 00:06:45.464 11:30:14 -- accel/accel.sh@21 -- # val=64 00:06:45.464 11:30:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.464 11:30:14 -- accel/accel.sh@20 -- # IFS=: 00:06:45.464 11:30:14 -- accel/accel.sh@20 -- # read -r var val 00:06:45.464 11:30:14 -- 
accel/accel.sh@21 -- # val=1 00:06:45.464 11:30:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.464 11:30:14 -- accel/accel.sh@20 -- # IFS=: 00:06:45.464 11:30:14 -- accel/accel.sh@20 -- # read -r var val 00:06:45.464 11:30:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:45.464 11:30:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.464 11:30:14 -- accel/accel.sh@20 -- # IFS=: 00:06:45.464 11:30:14 -- accel/accel.sh@20 -- # read -r var val 00:06:45.464 11:30:14 -- accel/accel.sh@21 -- # val=Yes 00:06:45.464 11:30:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.464 11:30:14 -- accel/accel.sh@20 -- # IFS=: 00:06:45.464 11:30:14 -- accel/accel.sh@20 -- # read -r var val 00:06:45.464 11:30:14 -- accel/accel.sh@21 -- # val= 00:06:45.464 11:30:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.464 11:30:14 -- accel/accel.sh@20 -- # IFS=: 00:06:45.464 11:30:14 -- accel/accel.sh@20 -- # read -r var val 00:06:45.464 11:30:14 -- accel/accel.sh@21 -- # val= 00:06:45.464 11:30:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.464 11:30:14 -- accel/accel.sh@20 -- # IFS=: 00:06:45.464 11:30:14 -- accel/accel.sh@20 -- # read -r var val 00:06:46.836 11:30:15 -- accel/accel.sh@21 -- # val= 00:06:46.836 11:30:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.836 11:30:15 -- accel/accel.sh@20 -- # IFS=: 00:06:46.836 11:30:15 -- accel/accel.sh@20 -- # read -r var val 00:06:46.836 11:30:15 -- accel/accel.sh@21 -- # val= 00:06:46.836 11:30:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.836 11:30:15 -- accel/accel.sh@20 -- # IFS=: 00:06:46.836 11:30:15 -- accel/accel.sh@20 -- # read -r var val 00:06:46.836 11:30:15 -- accel/accel.sh@21 -- # val= 00:06:46.836 11:30:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.836 11:30:15 -- accel/accel.sh@20 -- # IFS=: 00:06:46.836 11:30:15 -- accel/accel.sh@20 -- # read -r var val 00:06:46.836 11:30:15 -- accel/accel.sh@21 -- # val= 00:06:46.836 11:30:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.836 11:30:15 -- accel/accel.sh@20 -- # IFS=: 00:06:46.836 11:30:15 -- accel/accel.sh@20 -- # read -r var val 00:06:46.836 11:30:15 -- accel/accel.sh@21 -- # val= 00:06:46.836 11:30:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.836 11:30:15 -- accel/accel.sh@20 -- # IFS=: 00:06:46.836 11:30:15 -- accel/accel.sh@20 -- # read -r var val 00:06:46.836 11:30:15 -- accel/accel.sh@21 -- # val= 00:06:46.836 11:30:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.836 11:30:15 -- accel/accel.sh@20 -- # IFS=: 00:06:46.836 11:30:15 -- accel/accel.sh@20 -- # read -r var val 00:06:46.836 11:30:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:46.836 11:30:15 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:46.836 11:30:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.836 00:06:46.836 real 0m2.633s 00:06:46.836 user 0m2.357s 00:06:46.836 sys 0m0.285s 00:06:46.836 11:30:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.836 11:30:15 -- common/autotest_common.sh@10 -- # set +x 00:06:46.836 ************************************ 00:06:46.836 END TEST accel_fill 00:06:46.836 ************************************ 00:06:46.836 11:30:15 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:46.836 11:30:15 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:46.836 11:30:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:46.836 11:30:15 -- common/autotest_common.sh@10 -- # set +x 00:06:46.836 ************************************ 00:06:46.836 START TEST 
accel_copy_crc32c 00:06:46.836 ************************************ 00:06:46.836 11:30:15 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:06:46.836 11:30:15 -- accel/accel.sh@16 -- # local accel_opc 00:06:46.836 11:30:15 -- accel/accel.sh@17 -- # local accel_module 00:06:46.836 11:30:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:46.837 11:30:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:46.837 11:30:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.837 11:30:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.837 11:30:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.837 11:30:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.837 11:30:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.837 11:30:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.837 11:30:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.837 11:30:15 -- accel/accel.sh@42 -- # jq -r . 00:06:46.837 [2024-07-21 11:30:15.973587] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:46.837 [2024-07-21 11:30:15.973674] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2186742 ] 00:06:46.837 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.837 [2024-07-21 11:30:16.058129] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.837 [2024-07-21 11:30:16.092669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.207 11:30:17 -- accel/accel.sh@18 -- # out=' 00:06:48.207 SPDK Configuration: 00:06:48.207 Core mask: 0x1 00:06:48.207 00:06:48.207 Accel Perf Configuration: 00:06:48.207 Workload Type: copy_crc32c 00:06:48.207 CRC-32C seed: 0 00:06:48.207 Vector size: 4096 bytes 00:06:48.207 Transfer size: 4096 bytes 00:06:48.207 Vector count 1 00:06:48.207 Module: software 00:06:48.207 Queue depth: 32 00:06:48.207 Allocate depth: 32 00:06:48.207 # threads/core: 1 00:06:48.207 Run time: 1 seconds 00:06:48.207 Verify: Yes 00:06:48.207 00:06:48.207 Running for 1 seconds... 00:06:48.207 00:06:48.207 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:48.207 ------------------------------------------------------------------------------------ 00:06:48.207 0,0 341376/s 1333 MiB/s 0 0 00:06:48.207 ==================================================================================== 00:06:48.207 Total 341376/s 1333 MiB/s 0 0' 00:06:48.207 11:30:17 -- accel/accel.sh@20 -- # IFS=: 00:06:48.207 11:30:17 -- accel/accel.sh@20 -- # read -r var val 00:06:48.207 11:30:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:48.207 11:30:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.207 11:30:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.207 11:30:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.207 11:30:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:48.207 11:30:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.207 11:30:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.207 11:30:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.207 11:30:17 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.207 11:30:17 -- accel/accel.sh@42 -- # jq -r . 
00:06:48.207 [2024-07-21 11:30:17.283003] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:48.207 [2024-07-21 11:30:17.283069] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2187017 ] 00:06:48.207 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.207 [2024-07-21 11:30:17.366102] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.207 [2024-07-21 11:30:17.400278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.207 11:30:17 -- accel/accel.sh@21 -- # val= 00:06:48.207 11:30:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.207 11:30:17 -- accel/accel.sh@20 -- # IFS=: 00:06:48.207 11:30:17 -- accel/accel.sh@20 -- # read -r var val 00:06:48.207 11:30:17 -- accel/accel.sh@21 -- # val= 00:06:48.207 11:30:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.207 11:30:17 -- accel/accel.sh@20 -- # IFS=: 00:06:48.207 11:30:17 -- accel/accel.sh@20 -- # read -r var val 00:06:48.207 11:30:17 -- accel/accel.sh@21 -- # val=0x1 00:06:48.207 11:30:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.207 11:30:17 -- accel/accel.sh@20 -- # IFS=: 00:06:48.207 11:30:17 -- accel/accel.sh@20 -- # read -r var val 00:06:48.207 11:30:17 -- accel/accel.sh@21 -- # val= 00:06:48.207 11:30:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.207 11:30:17 -- accel/accel.sh@20 -- # IFS=: 00:06:48.207 11:30:17 -- accel/accel.sh@20 -- # read -r var val 00:06:48.207 11:30:17 -- accel/accel.sh@21 -- # val= 00:06:48.207 11:30:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.207 11:30:17 -- accel/accel.sh@20 -- # IFS=: 00:06:48.207 11:30:17 -- accel/accel.sh@20 -- # read -r var val 00:06:48.207 11:30:17 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:48.207 11:30:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.207 11:30:17 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:48.207 11:30:17 -- accel/accel.sh@20 -- # IFS=: 00:06:48.207 11:30:17 -- accel/accel.sh@20 -- # read -r var val 00:06:48.207 11:30:17 -- accel/accel.sh@21 -- # val=0 00:06:48.207 11:30:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.207 11:30:17 -- accel/accel.sh@20 -- # IFS=: 00:06:48.207 11:30:17 -- accel/accel.sh@20 -- # read -r var val 00:06:48.207 11:30:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:48.207 11:30:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.207 11:30:17 -- accel/accel.sh@20 -- # IFS=: 00:06:48.207 11:30:17 -- accel/accel.sh@20 -- # read -r var val 00:06:48.207 11:30:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:48.207 11:30:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.207 11:30:17 -- accel/accel.sh@20 -- # IFS=: 00:06:48.207 11:30:17 -- accel/accel.sh@20 -- # read -r var val 00:06:48.207 11:30:17 -- accel/accel.sh@21 -- # val= 00:06:48.207 11:30:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.207 11:30:17 -- accel/accel.sh@20 -- # IFS=: 00:06:48.207 11:30:17 -- accel/accel.sh@20 -- # read -r var val 00:06:48.207 11:30:17 -- accel/accel.sh@21 -- # val=software 00:06:48.207 11:30:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.208 11:30:17 -- accel/accel.sh@23 -- # accel_module=software 00:06:48.208 11:30:17 -- accel/accel.sh@20 -- # IFS=: 00:06:48.208 11:30:17 -- accel/accel.sh@20 -- # read -r var val 00:06:48.208 11:30:17 -- accel/accel.sh@21 -- # val=32 00:06:48.208 11:30:17 -- accel/accel.sh@22 -- # case "$var" in 
00:06:48.208 11:30:17 -- accel/accel.sh@20 -- # IFS=: 00:06:48.208 11:30:17 -- accel/accel.sh@20 -- # read -r var val 00:06:48.208 11:30:17 -- accel/accel.sh@21 -- # val=32 00:06:48.208 11:30:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.208 11:30:17 -- accel/accel.sh@20 -- # IFS=: 00:06:48.208 11:30:17 -- accel/accel.sh@20 -- # read -r var val 00:06:48.208 11:30:17 -- accel/accel.sh@21 -- # val=1 00:06:48.208 11:30:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.208 11:30:17 -- accel/accel.sh@20 -- # IFS=: 00:06:48.208 11:30:17 -- accel/accel.sh@20 -- # read -r var val 00:06:48.208 11:30:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:48.208 11:30:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.208 11:30:17 -- accel/accel.sh@20 -- # IFS=: 00:06:48.208 11:30:17 -- accel/accel.sh@20 -- # read -r var val 00:06:48.208 11:30:17 -- accel/accel.sh@21 -- # val=Yes 00:06:48.208 11:30:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.208 11:30:17 -- accel/accel.sh@20 -- # IFS=: 00:06:48.208 11:30:17 -- accel/accel.sh@20 -- # read -r var val 00:06:48.208 11:30:17 -- accel/accel.sh@21 -- # val= 00:06:48.208 11:30:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.208 11:30:17 -- accel/accel.sh@20 -- # IFS=: 00:06:48.208 11:30:17 -- accel/accel.sh@20 -- # read -r var val 00:06:48.208 11:30:17 -- accel/accel.sh@21 -- # val= 00:06:48.208 11:30:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.208 11:30:17 -- accel/accel.sh@20 -- # IFS=: 00:06:48.208 11:30:17 -- accel/accel.sh@20 -- # read -r var val 00:06:49.578 11:30:18 -- accel/accel.sh@21 -- # val= 00:06:49.578 11:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.578 11:30:18 -- accel/accel.sh@20 -- # IFS=: 00:06:49.578 11:30:18 -- accel/accel.sh@20 -- # read -r var val 00:06:49.578 11:30:18 -- accel/accel.sh@21 -- # val= 00:06:49.578 11:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.578 11:30:18 -- accel/accel.sh@20 -- # IFS=: 00:06:49.578 11:30:18 -- accel/accel.sh@20 -- # read -r var val 00:06:49.578 11:30:18 -- accel/accel.sh@21 -- # val= 00:06:49.578 11:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.578 11:30:18 -- accel/accel.sh@20 -- # IFS=: 00:06:49.578 11:30:18 -- accel/accel.sh@20 -- # read -r var val 00:06:49.578 11:30:18 -- accel/accel.sh@21 -- # val= 00:06:49.578 11:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.578 11:30:18 -- accel/accel.sh@20 -- # IFS=: 00:06:49.578 11:30:18 -- accel/accel.sh@20 -- # read -r var val 00:06:49.578 11:30:18 -- accel/accel.sh@21 -- # val= 00:06:49.578 11:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.578 11:30:18 -- accel/accel.sh@20 -- # IFS=: 00:06:49.578 11:30:18 -- accel/accel.sh@20 -- # read -r var val 00:06:49.578 11:30:18 -- accel/accel.sh@21 -- # val= 00:06:49.578 11:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.578 11:30:18 -- accel/accel.sh@20 -- # IFS=: 00:06:49.578 11:30:18 -- accel/accel.sh@20 -- # read -r var val 00:06:49.578 11:30:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:49.579 11:30:18 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:49.579 11:30:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.579 00:06:49.579 real 0m2.626s 00:06:49.579 user 0m2.366s 00:06:49.579 sys 0m0.269s 00:06:49.579 11:30:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.579 11:30:18 -- common/autotest_common.sh@10 -- # set +x 00:06:49.579 ************************************ 00:06:49.579 END TEST accel_copy_crc32c 00:06:49.579 ************************************ 00:06:49.579 
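The accel_copy_crc32c test that just closed out times an op pairing a memcpy with a CRC-32C (Castagnoli) computed over the copied bytes, seeded with 0 per "CRC-32C seed: 0" above. A self-contained C sketch of one such op, using a bit-by-bit CRC for readability; real backends use table-driven code or the SSE4.2 crc32 instruction, and this is an illustration, not SPDK's implementation:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Copy src into dst and return the CRC-32C of the copied bytes
 * (reflected polynomial 0x82F63B78). */
static uint32_t copy_crc32c(void *dst, const void *src, size_t len,
                            uint32_t seed)
{
    const uint8_t *s = src;
    uint32_t crc = ~seed;

    memcpy(dst, src, len);                    /* the copy half */
    for (size_t i = 0; i < len; i++) {        /* the CRC half */
        crc ^= s[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
    }
    return ~crc;
}

With seed 0 the starting register reduces to the usual CRC-32C init value of 0xFFFFFFFF, which is why a zero seed is the natural starting point for a fresh buffer.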
11:30:18 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:49.579 11:30:18 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:49.579 11:30:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:49.579 11:30:18 -- common/autotest_common.sh@10 -- # set +x 00:06:49.579 ************************************ 00:06:49.579 START TEST accel_copy_crc32c_C2 00:06:49.579 ************************************ 00:06:49.579 11:30:18 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:49.579 11:30:18 -- accel/accel.sh@16 -- # local accel_opc 00:06:49.579 11:30:18 -- accel/accel.sh@17 -- # local accel_module 00:06:49.579 11:30:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:49.579 11:30:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:49.579 11:30:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.579 11:30:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.579 11:30:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.579 11:30:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.579 11:30:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.579 11:30:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.579 11:30:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.579 11:30:18 -- accel/accel.sh@42 -- # jq -r . 00:06:49.579 [2024-07-21 11:30:18.644506] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:49.579 [2024-07-21 11:30:18.644570] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2187298 ] 00:06:49.579 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.579 [2024-07-21 11:30:18.728920] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.579 [2024-07-21 11:30:18.763820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.946 11:30:19 -- accel/accel.sh@18 -- # out=' 00:06:50.946 SPDK Configuration: 00:06:50.946 Core mask: 0x1 00:06:50.946 00:06:50.946 Accel Perf Configuration: 00:06:50.946 Workload Type: copy_crc32c 00:06:50.946 CRC-32C seed: 0 00:06:50.946 Vector size: 4096 bytes 00:06:50.946 Transfer size: 8192 bytes 00:06:50.946 Vector count 2 00:06:50.946 Module: software 00:06:50.946 Queue depth: 32 00:06:50.946 Allocate depth: 32 00:06:50.946 # threads/core: 1 00:06:50.946 Run time: 1 seconds 00:06:50.946 Verify: Yes 00:06:50.946 00:06:50.946 Running for 1 seconds... 
00:06:50.946 00:06:50.946 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:50.946 ------------------------------------------------------------------------------------ 00:06:50.946 0,0 247776/s 1935 MiB/s 0 0 00:06:50.946 ==================================================================================== 00:06:50.946 Total 247776/s 1935 MiB/s 0 0' 00:06:50.946 11:30:19 -- accel/accel.sh@20 -- # IFS=: 00:06:50.946 11:30:19 -- accel/accel.sh@20 -- # read -r var val 00:06:50.946 11:30:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:50.946 11:30:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.946 11:30:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.947 11:30:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:50.947 11:30:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.947 11:30:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.947 11:30:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.947 11:30:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.947 11:30:19 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.947 11:30:19 -- accel/accel.sh@42 -- # jq -r . 00:06:50.947 [2024-07-21 11:30:19.958059] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:50.947 [2024-07-21 11:30:19.958125] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2187516 ] 00:06:50.947 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.947 [2024-07-21 11:30:20.045075] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.947 [2024-07-21 11:30:20.083962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.947 11:30:20 -- accel/accel.sh@21 -- # val= 00:06:50.947 11:30:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.947 11:30:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.947 11:30:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.947 11:30:20 -- accel/accel.sh@21 -- # val= 00:06:50.947 11:30:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.947 11:30:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.947 11:30:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.947 11:30:20 -- accel/accel.sh@21 -- # val=0x1 00:06:50.947 11:30:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.947 11:30:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.947 11:30:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.947 11:30:20 -- accel/accel.sh@21 -- # val= 00:06:50.947 11:30:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.947 11:30:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.947 11:30:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.947 11:30:20 -- accel/accel.sh@21 -- # val= 00:06:50.947 11:30:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.947 11:30:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.947 11:30:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.947 11:30:20 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:50.947 11:30:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.947 11:30:20 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:50.947 11:30:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.947 11:30:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.947 11:30:20 -- accel/accel.sh@21 -- # val=0 00:06:50.947 11:30:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.947 11:30:20 -- accel/accel.sh@20 -- # IFS=:
00:06:50.947 11:30:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.947 11:30:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:50.947 11:30:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.947 11:30:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.947 11:30:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.947 11:30:20 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:50.947 11:30:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.947 11:30:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.947 11:30:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.947 11:30:20 -- accel/accel.sh@21 -- # val= 00:06:50.947 11:30:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.947 11:30:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.947 11:30:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.947 11:30:20 -- accel/accel.sh@21 -- # val=software 00:06:50.947 11:30:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.947 11:30:20 -- accel/accel.sh@23 -- # accel_module=software 00:06:50.947 11:30:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.947 11:30:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.947 11:30:20 -- accel/accel.sh@21 -- # val=32 00:06:50.947 11:30:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.947 11:30:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.947 11:30:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.947 11:30:20 -- accel/accel.sh@21 -- # val=32 00:06:50.947 11:30:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.947 11:30:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.947 11:30:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.947 11:30:20 -- accel/accel.sh@21 -- # val=1 00:06:50.947 11:30:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.947 11:30:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.947 11:30:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.947 11:30:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:50.947 11:30:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.947 11:30:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.947 11:30:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.947 11:30:20 -- accel/accel.sh@21 -- # val=Yes 00:06:50.947 11:30:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.947 11:30:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.947 11:30:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.947 11:30:20 -- accel/accel.sh@21 -- # val= 00:06:50.947 11:30:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.947 11:30:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.947 11:30:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.947 11:30:20 -- accel/accel.sh@21 -- # val= 00:06:50.947 11:30:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.947 11:30:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.947 11:30:20 -- accel/accel.sh@20 -- # read -r var val 00:06:51.880 11:30:21 -- accel/accel.sh@21 -- # val= 00:06:51.880 11:30:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.880 11:30:21 -- accel/accel.sh@20 -- # IFS=: 00:06:51.880 11:30:21 -- accel/accel.sh@20 -- # read -r var val 00:06:51.880 11:30:21 -- accel/accel.sh@21 -- # val= 00:06:51.881 11:30:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.881 11:30:21 -- accel/accel.sh@20 -- # IFS=: 00:06:51.881 11:30:21 -- accel/accel.sh@20 -- # read -r var val 00:06:51.881 11:30:21 -- accel/accel.sh@21 -- # val= 00:06:51.881 11:30:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.881 11:30:21 -- accel/accel.sh@20 -- # IFS=: 00:06:51.881 11:30:21 -- accel/accel.sh@20 -- # read -r var val 00:06:51.881 11:30:21 -- accel/accel.sh@21 -- # val= 00:06:51.881 11:30:21 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:51.881 11:30:21 -- accel/accel.sh@20 -- # IFS=: 00:06:51.881 11:30:21 -- accel/accel.sh@20 -- # read -r var val 00:06:51.881 11:30:21 -- accel/accel.sh@21 -- # val= 00:06:51.881 11:30:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.881 11:30:21 -- accel/accel.sh@20 -- # IFS=: 00:06:51.881 11:30:21 -- accel/accel.sh@20 -- # read -r var val 00:06:51.881 11:30:21 -- accel/accel.sh@21 -- # val= 00:06:51.881 11:30:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.881 11:30:21 -- accel/accel.sh@20 -- # IFS=: 00:06:51.881 11:30:21 -- accel/accel.sh@20 -- # read -r var val 00:06:51.881 11:30:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:51.881 11:30:21 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:51.881 11:30:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.881 00:06:51.881 real 0m2.639s 00:06:51.881 user 0m2.352s 00:06:51.881 sys 0m0.298s 00:06:51.881 11:30:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.881 11:30:21 -- common/autotest_common.sh@10 -- # set +x 00:06:51.881 ************************************ 00:06:51.881 END TEST accel_copy_crc32c_C2 00:06:51.881 ************************************ 00:06:51.881 11:30:21 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:51.881 11:30:21 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:51.881 11:30:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:51.881 11:30:21 -- common/autotest_common.sh@10 -- # set +x 00:06:52.138 ************************************ 00:06:52.138 START TEST accel_dualcast 00:06:52.138 ************************************ 00:06:52.138 11:30:21 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:06:52.138 11:30:21 -- accel/accel.sh@16 -- # local accel_opc 00:06:52.138 11:30:21 -- accel/accel.sh@17 -- # local accel_module 00:06:52.138 11:30:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:52.138 11:30:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:52.138 11:30:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.138 11:30:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.138 11:30:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.138 11:30:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.138 11:30:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.138 11:30:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.138 11:30:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.138 11:30:21 -- accel/accel.sh@42 -- # jq -r . 00:06:52.138 [2024-07-21 11:30:21.332185] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
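Two details of the -C 2 run that just ended are worth pinning down. "Vector count 2" means each op processed two 4096-byte source/destination pairs, so one transfer moved 8192 bytes with the CRC carried across both vectors; that is why it posts fewer ops/s than the single-vector run (247776 vs 341376) but higher byte throughput. And the bandwidth column is pure arithmetic on the transfer column, so the per-core and Total rows of a single-core run must agree. A throwaway C check of the table above:

#include <stdio.h>

int main(void)
{
    double transfers_per_s = 247776.0;  /* from the results table */
    double xfer_bytes = 8192.0;         /* "Transfer size: 8192 bytes" */

    /* MiB/s = transfers/s * bytes per transfer / 2^20 */
    printf("%.2f MiB/s\n", transfers_per_s * xfer_bytes / (1 << 20));
    return 0;                           /* prints 1935.75 */
}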
00:06:52.138 [2024-07-21 11:30:21.332271] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2187719 ] 00:06:52.138 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.138 [2024-07-21 11:30:21.418737] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.138 [2024-07-21 11:30:21.454440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.534 11:30:22 -- accel/accel.sh@18 -- # out=' 00:06:53.534 SPDK Configuration: 00:06:53.534 Core mask: 0x1 00:06:53.534 00:06:53.534 Accel Perf Configuration: 00:06:53.534 Workload Type: dualcast 00:06:53.534 Transfer size: 4096 bytes 00:06:53.534 Vector count 1 00:06:53.534 Module: software 00:06:53.534 Queue depth: 32 00:06:53.535 Allocate depth: 32 00:06:53.535 # threads/core: 1 00:06:53.535 Run time: 1 seconds 00:06:53.535 Verify: Yes 00:06:53.535 00:06:53.535 Running for 1 seconds... 00:06:53.535 00:06:53.535 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:53.535 ------------------------------------------------------------------------------------ 00:06:53.535 0,0 533824/s 2085 MiB/s 0 0 00:06:53.535 ==================================================================================== 00:06:53.535 Total 533824/s 2085 MiB/s 0 0' 00:06:53.535 11:30:22 -- accel/accel.sh@20 -- # IFS=: 00:06:53.535 11:30:22 -- accel/accel.sh@20 -- # read -r var val 00:06:53.535 11:30:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:53.535 11:30:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:53.535 11:30:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.535 11:30:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.535 11:30:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.535 11:30:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.535 11:30:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.535 11:30:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.535 11:30:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.535 11:30:22 -- accel/accel.sh@42 -- # jq -r . 00:06:53.535 [2024-07-21 11:30:22.648487] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
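The dualcast pass tabulated above is a one-to-two copy: a single 4 KiB source written to two destinations in one op (useful when a payload must land in two places, such as a buffer and its mirror). An illustrative C reduction of one op, not the SPDK module itself:

#include <stddef.h>
#include <string.h>

/* One dualcast op: one source, two destinations, same length. */
static void dualcast(void *dst1, void *dst2, const void *src, size_t len)
{
    memcpy(dst1, src, len);
    memcpy(dst2, src, len);
}

Every op writes 2 x 4096 bytes while the table counts 4096-byte transfers, which helps explain why dualcast (2085 MiB/s here) sits below the plain fill number.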
00:06:53.535 [2024-07-21 11:30:22.648556] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2187886 ] 00:06:53.535 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.535 [2024-07-21 11:30:22.732552] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.535 [2024-07-21 11:30:22.767184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.535 11:30:22 -- accel/accel.sh@21 -- # val= 00:06:53.535 11:30:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.535 11:30:22 -- accel/accel.sh@20 -- # IFS=: 00:06:53.535 11:30:22 -- accel/accel.sh@20 -- # read -r var val 00:06:53.535 11:30:22 -- accel/accel.sh@21 -- # val= 00:06:53.535 11:30:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.535 11:30:22 -- accel/accel.sh@20 -- # IFS=: 00:06:53.535 11:30:22 -- accel/accel.sh@20 -- # read -r var val 00:06:53.535 11:30:22 -- accel/accel.sh@21 -- # val=0x1 00:06:53.535 11:30:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.535 11:30:22 -- accel/accel.sh@20 -- # IFS=: 00:06:53.535 11:30:22 -- accel/accel.sh@20 -- # read -r var val 00:06:53.535 11:30:22 -- accel/accel.sh@21 -- # val= 00:06:53.535 11:30:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.535 11:30:22 -- accel/accel.sh@20 -- # IFS=: 00:06:53.535 11:30:22 -- accel/accel.sh@20 -- # read -r var val 00:06:53.535 11:30:22 -- accel/accel.sh@21 -- # val= 00:06:53.535 11:30:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.535 11:30:22 -- accel/accel.sh@20 -- # IFS=: 00:06:53.535 11:30:22 -- accel/accel.sh@20 -- # read -r var val 00:06:53.535 11:30:22 -- accel/accel.sh@21 -- # val=dualcast 00:06:53.535 11:30:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.535 11:30:22 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:53.535 11:30:22 -- accel/accel.sh@20 -- # IFS=: 00:06:53.535 11:30:22 -- accel/accel.sh@20 -- # read -r var val 00:06:53.535 11:30:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:53.535 11:30:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.535 11:30:22 -- accel/accel.sh@20 -- # IFS=: 00:06:53.535 11:30:22 -- accel/accel.sh@20 -- # read -r var val 00:06:53.535 11:30:22 -- accel/accel.sh@21 -- # val= 00:06:53.535 11:30:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.535 11:30:22 -- accel/accel.sh@20 -- # IFS=: 00:06:53.535 11:30:22 -- accel/accel.sh@20 -- # read -r var val 00:06:53.535 11:30:22 -- accel/accel.sh@21 -- # val=software 00:06:53.535 11:30:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.535 11:30:22 -- accel/accel.sh@23 -- # accel_module=software 00:06:53.535 11:30:22 -- accel/accel.sh@20 -- # IFS=: 00:06:53.535 11:30:22 -- accel/accel.sh@20 -- # read -r var val 00:06:53.535 11:30:22 -- accel/accel.sh@21 -- # val=32 00:06:53.535 11:30:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.535 11:30:22 -- accel/accel.sh@20 -- # IFS=: 00:06:53.535 11:30:22 -- accel/accel.sh@20 -- # read -r var val 00:06:53.535 11:30:22 -- accel/accel.sh@21 -- # val=32 00:06:53.535 11:30:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.535 11:30:22 -- accel/accel.sh@20 -- # IFS=: 00:06:53.535 11:30:22 -- accel/accel.sh@20 -- # read -r var val 00:06:53.535 11:30:22 -- accel/accel.sh@21 -- # val=1 00:06:53.535 11:30:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.535 11:30:22 -- accel/accel.sh@20 -- # IFS=: 00:06:53.535 11:30:22 -- accel/accel.sh@20 -- # read -r var val 00:06:53.535 11:30:22 
-- accel/accel.sh@21 -- # val='1 seconds' 00:06:53.535 11:30:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.535 11:30:22 -- accel/accel.sh@20 -- # IFS=: 00:06:53.535 11:30:22 -- accel/accel.sh@20 -- # read -r var val 00:06:53.535 11:30:22 -- accel/accel.sh@21 -- # val=Yes 00:06:53.535 11:30:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.535 11:30:22 -- accel/accel.sh@20 -- # IFS=: 00:06:53.535 11:30:22 -- accel/accel.sh@20 -- # read -r var val 00:06:53.535 11:30:22 -- accel/accel.sh@21 -- # val= 00:06:53.535 11:30:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.535 11:30:22 -- accel/accel.sh@20 -- # IFS=: 00:06:53.535 11:30:22 -- accel/accel.sh@20 -- # read -r var val 00:06:53.535 11:30:22 -- accel/accel.sh@21 -- # val= 00:06:53.535 11:30:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.535 11:30:22 -- accel/accel.sh@20 -- # IFS=: 00:06:53.535 11:30:22 -- accel/accel.sh@20 -- # read -r var val 00:06:54.920 11:30:23 -- accel/accel.sh@21 -- # val= 00:06:54.920 11:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.920 11:30:23 -- accel/accel.sh@20 -- # IFS=: 00:06:54.920 11:30:23 -- accel/accel.sh@20 -- # read -r var val 00:06:54.920 11:30:23 -- accel/accel.sh@21 -- # val= 00:06:54.920 11:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.920 11:30:23 -- accel/accel.sh@20 -- # IFS=: 00:06:54.920 11:30:23 -- accel/accel.sh@20 -- # read -r var val 00:06:54.920 11:30:23 -- accel/accel.sh@21 -- # val= 00:06:54.920 11:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.920 11:30:23 -- accel/accel.sh@20 -- # IFS=: 00:06:54.920 11:30:23 -- accel/accel.sh@20 -- # read -r var val 00:06:54.920 11:30:23 -- accel/accel.sh@21 -- # val= 00:06:54.920 11:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.920 11:30:23 -- accel/accel.sh@20 -- # IFS=: 00:06:54.920 11:30:23 -- accel/accel.sh@20 -- # read -r var val 00:06:54.920 11:30:23 -- accel/accel.sh@21 -- # val= 00:06:54.920 11:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.920 11:30:23 -- accel/accel.sh@20 -- # IFS=: 00:06:54.920 11:30:23 -- accel/accel.sh@20 -- # read -r var val 00:06:54.920 11:30:23 -- accel/accel.sh@21 -- # val= 00:06:54.920 11:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.920 11:30:23 -- accel/accel.sh@20 -- # IFS=: 00:06:54.920 11:30:23 -- accel/accel.sh@20 -- # read -r var val 00:06:54.920 11:30:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:54.920 11:30:23 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:54.920 11:30:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.920 00:06:54.920 real 0m2.635s 00:06:54.920 user 0m2.360s 00:06:54.920 sys 0m0.282s 00:06:54.920 11:30:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.920 11:30:23 -- common/autotest_common.sh@10 -- # set +x 00:06:54.920 ************************************ 00:06:54.920 END TEST accel_dualcast 00:06:54.920 ************************************ 00:06:54.920 11:30:23 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:54.920 11:30:23 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:54.920 11:30:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:54.920 11:30:23 -- common/autotest_common.sh@10 -- # set +x 00:06:54.920 ************************************ 00:06:54.920 START TEST accel_compare 00:06:54.920 ************************************ 00:06:54.920 11:30:23 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:06:54.920 11:30:23 -- accel/accel.sh@16 -- # local accel_opc 00:06:54.920 11:30:23 
-- accel/accel.sh@17 -- # local accel_module 00:06:54.920 11:30:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:54.920 11:30:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:54.920 11:30:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.920 11:30:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.920 11:30:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.920 11:30:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.920 11:30:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.920 11:30:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.920 11:30:23 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.920 11:30:23 -- accel/accel.sh@42 -- # jq -r . 00:06:54.920 [2024-07-21 11:30:24.013273] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:54.920 [2024-07-21 11:30:24.013343] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2188159 ] 00:06:54.920 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.920 [2024-07-21 11:30:24.099552] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.920 [2024-07-21 11:30:24.134639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.305 11:30:25 -- accel/accel.sh@18 -- # out=' 00:06:56.305 SPDK Configuration: 00:06:56.305 Core mask: 0x1 00:06:56.305 00:06:56.305 Accel Perf Configuration: 00:06:56.305 Workload Type: compare 00:06:56.305 Transfer size: 4096 bytes 00:06:56.305 Vector count 1 00:06:56.305 Module: software 00:06:56.305 Queue depth: 32 00:06:56.305 Allocate depth: 32 00:06:56.305 # threads/core: 1 00:06:56.305 Run time: 1 seconds 00:06:56.305 Verify: Yes 00:06:56.305 00:06:56.305 Running for 1 seconds... 00:06:56.305 00:06:56.305 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:56.305 ------------------------------------------------------------------------------------ 00:06:56.305 0,0 641600/s 2506 MiB/s 0 0 00:06:56.305 ==================================================================================== 00:06:56.305 Total 641600/s 2506 MiB/s 0 0' 00:06:56.305 11:30:25 -- accel/accel.sh@20 -- # IFS=: 00:06:56.305 11:30:25 -- accel/accel.sh@20 -- # read -r var val 00:06:56.305 11:30:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:56.305 11:30:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.305 11:30:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.305 11:30:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.305 11:30:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:56.305 11:30:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.305 11:30:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.305 11:30:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.305 11:30:25 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.305 11:30:25 -- accel/accel.sh@42 -- # jq -r . 00:06:56.305 [2024-07-21 11:30:25.313588] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
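The compare workload whose first pass just printed reduces to a length-checked memcmp per op; any mismatch would increment the Miscompares column, which stays at 0 throughout. A sketch under the same illustrative caveats as before:

#include <stddef.h>
#include <string.h>

/* One compare op over two equal-length buffers; a non-zero
 * return is what the harness would report as a miscompare. */
static int compare_op(const void *a, const void *b, size_t len)
{
    return memcmp(a, b, len) != 0;
}

Compare is read-only on both buffers, consistent with it outrunning the read-modify-write ops above (641600/s vs 533824/s for dualcast).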
00:06:56.305 [2024-07-21 11:30:25.313648] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2188430 ] 00:06:56.305 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.305 [2024-07-21 11:30:25.394765] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.305 [2024-07-21 11:30:25.429488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.305 11:30:25 -- accel/accel.sh@21 -- # val= 00:06:56.305 11:30:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.305 11:30:25 -- accel/accel.sh@20 -- # IFS=: 00:06:56.305 11:30:25 -- accel/accel.sh@20 -- # read -r var val 00:06:56.305 11:30:25 -- accel/accel.sh@21 -- # val= 00:06:56.305 11:30:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.305 11:30:25 -- accel/accel.sh@20 -- # IFS=: 00:06:56.305 11:30:25 -- accel/accel.sh@20 -- # read -r var val 00:06:56.305 11:30:25 -- accel/accel.sh@21 -- # val=0x1 00:06:56.305 11:30:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.305 11:30:25 -- accel/accel.sh@20 -- # IFS=: 00:06:56.305 11:30:25 -- accel/accel.sh@20 -- # read -r var val 00:06:56.305 11:30:25 -- accel/accel.sh@21 -- # val= 00:06:56.305 11:30:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.305 11:30:25 -- accel/accel.sh@20 -- # IFS=: 00:06:56.305 11:30:25 -- accel/accel.sh@20 -- # read -r var val 00:06:56.305 11:30:25 -- accel/accel.sh@21 -- # val= 00:06:56.305 11:30:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.305 11:30:25 -- accel/accel.sh@20 -- # IFS=: 00:06:56.305 11:30:25 -- accel/accel.sh@20 -- # read -r var val 00:06:56.305 11:30:25 -- accel/accel.sh@21 -- # val=compare 00:06:56.305 11:30:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.305 11:30:25 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:56.305 11:30:25 -- accel/accel.sh@20 -- # IFS=: 00:06:56.305 11:30:25 -- accel/accel.sh@20 -- # read -r var val 00:06:56.305 11:30:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:56.305 11:30:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.305 11:30:25 -- accel/accel.sh@20 -- # IFS=: 00:06:56.305 11:30:25 -- accel/accel.sh@20 -- # read -r var val 00:06:56.305 11:30:25 -- accel/accel.sh@21 -- # val= 00:06:56.305 11:30:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.305 11:30:25 -- accel/accel.sh@20 -- # IFS=: 00:06:56.305 11:30:25 -- accel/accel.sh@20 -- # read -r var val 00:06:56.305 11:30:25 -- accel/accel.sh@21 -- # val=software 00:06:56.305 11:30:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.305 11:30:25 -- accel/accel.sh@23 -- # accel_module=software 00:06:56.305 11:30:25 -- accel/accel.sh@20 -- # IFS=: 00:06:56.305 11:30:25 -- accel/accel.sh@20 -- # read -r var val 00:06:56.305 11:30:25 -- accel/accel.sh@21 -- # val=32 00:06:56.305 11:30:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.305 11:30:25 -- accel/accel.sh@20 -- # IFS=: 00:06:56.305 11:30:25 -- accel/accel.sh@20 -- # read -r var val 00:06:56.305 11:30:25 -- accel/accel.sh@21 -- # val=32 00:06:56.305 11:30:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.305 11:30:25 -- accel/accel.sh@20 -- # IFS=: 00:06:56.305 11:30:25 -- accel/accel.sh@20 -- # read -r var val 00:06:56.305 11:30:25 -- accel/accel.sh@21 -- # val=1 00:06:56.306 11:30:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.306 11:30:25 -- accel/accel.sh@20 -- # IFS=: 00:06:56.306 11:30:25 -- accel/accel.sh@20 -- # read -r var val 00:06:56.306 11:30:25 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:56.306 11:30:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.306 11:30:25 -- accel/accel.sh@20 -- # IFS=: 00:06:56.306 11:30:25 -- accel/accel.sh@20 -- # read -r var val 00:06:56.306 11:30:25 -- accel/accel.sh@21 -- # val=Yes 00:06:56.306 11:30:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.306 11:30:25 -- accel/accel.sh@20 -- # IFS=: 00:06:56.306 11:30:25 -- accel/accel.sh@20 -- # read -r var val 00:06:56.306 11:30:25 -- accel/accel.sh@21 -- # val= 00:06:56.306 11:30:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.306 11:30:25 -- accel/accel.sh@20 -- # IFS=: 00:06:56.306 11:30:25 -- accel/accel.sh@20 -- # read -r var val 00:06:56.306 11:30:25 -- accel/accel.sh@21 -- # val= 00:06:56.306 11:30:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.306 11:30:25 -- accel/accel.sh@20 -- # IFS=: 00:06:56.306 11:30:25 -- accel/accel.sh@20 -- # read -r var val 00:06:57.237 11:30:26 -- accel/accel.sh@21 -- # val= 00:06:57.237 11:30:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.237 11:30:26 -- accel/accel.sh@20 -- # IFS=: 00:06:57.237 11:30:26 -- accel/accel.sh@20 -- # read -r var val 00:06:57.237 11:30:26 -- accel/accel.sh@21 -- # val= 00:06:57.237 11:30:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.237 11:30:26 -- accel/accel.sh@20 -- # IFS=: 00:06:57.237 11:30:26 -- accel/accel.sh@20 -- # read -r var val 00:06:57.237 11:30:26 -- accel/accel.sh@21 -- # val= 00:06:57.237 11:30:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.237 11:30:26 -- accel/accel.sh@20 -- # IFS=: 00:06:57.237 11:30:26 -- accel/accel.sh@20 -- # read -r var val 00:06:57.237 11:30:26 -- accel/accel.sh@21 -- # val= 00:06:57.237 11:30:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.237 11:30:26 -- accel/accel.sh@20 -- # IFS=: 00:06:57.237 11:30:26 -- accel/accel.sh@20 -- # read -r var val 00:06:57.237 11:30:26 -- accel/accel.sh@21 -- # val= 00:06:57.237 11:30:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.237 11:30:26 -- accel/accel.sh@20 -- # IFS=: 00:06:57.237 11:30:26 -- accel/accel.sh@20 -- # read -r var val 00:06:57.237 11:30:26 -- accel/accel.sh@21 -- # val= 00:06:57.237 11:30:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.237 11:30:26 -- accel/accel.sh@20 -- # IFS=: 00:06:57.237 11:30:26 -- accel/accel.sh@20 -- # read -r var val 00:06:57.237 11:30:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:57.237 11:30:26 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:57.237 11:30:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.237 00:06:57.237 real 0m2.616s 00:06:57.237 user 0m2.354s 00:06:57.238 sys 0m0.270s 00:06:57.238 11:30:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.238 11:30:26 -- common/autotest_common.sh@10 -- # set +x 00:06:57.238 ************************************ 00:06:57.238 END TEST accel_compare 00:06:57.238 ************************************ 00:06:57.238 11:30:26 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:57.238 11:30:26 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:57.238 11:30:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:57.238 11:30:26 -- common/autotest_common.sh@10 -- # set +x 00:06:57.238 ************************************ 00:06:57.238 START TEST accel_xor 00:06:57.238 ************************************ 00:06:57.238 11:30:26 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:06:57.238 11:30:26 -- accel/accel.sh@16 -- # local accel_opc 00:06:57.238 11:30:26 -- accel/accel.sh@17 
-- # local accel_module 00:06:57.238 11:30:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:57.238 11:30:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:57.238 11:30:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.238 11:30:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.238 11:30:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.238 11:30:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.238 11:30:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.238 11:30:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.238 11:30:26 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.238 11:30:26 -- accel/accel.sh@42 -- # jq -r . 00:06:57.495 [2024-07-21 11:30:26.673044] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:57.495 [2024-07-21 11:30:26.673110] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2188718 ] 00:06:57.495 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.495 [2024-07-21 11:30:26.756191] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.495 [2024-07-21 11:30:26.791100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.865 11:30:27 -- accel/accel.sh@18 -- # out=' 00:06:58.865 SPDK Configuration: 00:06:58.865 Core mask: 0x1 00:06:58.865 00:06:58.865 Accel Perf Configuration: 00:06:58.865 Workload Type: xor 00:06:58.865 Source buffers: 2 00:06:58.865 Transfer size: 4096 bytes 00:06:58.865 Vector count 1 00:06:58.865 Module: software 00:06:58.865 Queue depth: 32 00:06:58.865 Allocate depth: 32 00:06:58.865 # threads/core: 1 00:06:58.865 Run time: 1 seconds 00:06:58.865 Verify: Yes 00:06:58.865 00:06:58.865 Running for 1 seconds... 00:06:58.865 00:06:58.865 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:58.865 ------------------------------------------------------------------------------------ 00:06:58.865 0,0 493120/s 1926 MiB/s 0 0 00:06:58.865 ==================================================================================== 00:06:58.865 Total 493120/s 1926 MiB/s 0 0' 00:06:58.865 11:30:27 -- accel/accel.sh@20 -- # IFS=: 00:06:58.865 11:30:27 -- accel/accel.sh@20 -- # read -r var val 00:06:58.865 11:30:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:58.865 11:30:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.865 11:30:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.865 11:30:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.865 11:30:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:58.865 11:30:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.865 11:30:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.865 11:30:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.865 11:30:27 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.865 11:30:27 -- accel/accel.sh@42 -- # jq -r . 00:06:58.865 [2024-07-21 11:30:27.984523] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
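With its default two source buffers ("Source buffers: 2"), the xor workload computes dst[i] = a[i] ^ b[i] across each 4 KiB transfer, the same primitive that RAID-5-style parity is built from. A byte-wise C sketch; a real backend would vectorize this:

#include <stddef.h>
#include <stdint.h>

/* One 2-source xor op, byte at a time for clarity. */
static void xor2(uint8_t *dst, const uint8_t *a,
                 const uint8_t *b, size_t len)
{
    for (size_t i = 0; i < len; i++)
        dst[i] = a[i] ^ b[i];
}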
00:06:58.865 [2024-07-21 11:30:27.984611] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2188986 ] 00:06:58.865 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.865 [2024-07-21 11:30:28.069612] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.865 [2024-07-21 11:30:28.103631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.865 11:30:28 -- accel/accel.sh@21 -- # val= 00:06:58.865 11:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.865 11:30:28 -- accel/accel.sh@20 -- # IFS=: 00:06:58.865 11:30:28 -- accel/accel.sh@20 -- # read -r var val 00:06:58.865 11:30:28 -- accel/accel.sh@21 -- # val= 00:06:58.865 11:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.865 11:30:28 -- accel/accel.sh@20 -- # IFS=: 00:06:58.865 11:30:28 -- accel/accel.sh@20 -- # read -r var val 00:06:58.865 11:30:28 -- accel/accel.sh@21 -- # val=0x1 00:06:58.865 11:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.865 11:30:28 -- accel/accel.sh@20 -- # IFS=: 00:06:58.865 11:30:28 -- accel/accel.sh@20 -- # read -r var val 00:06:58.865 11:30:28 -- accel/accel.sh@21 -- # val= 00:06:58.865 11:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.865 11:30:28 -- accel/accel.sh@20 -- # IFS=: 00:06:58.865 11:30:28 -- accel/accel.sh@20 -- # read -r var val 00:06:58.865 11:30:28 -- accel/accel.sh@21 -- # val= 00:06:58.865 11:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.865 11:30:28 -- accel/accel.sh@20 -- # IFS=: 00:06:58.865 11:30:28 -- accel/accel.sh@20 -- # read -r var val 00:06:58.866 11:30:28 -- accel/accel.sh@21 -- # val=xor 00:06:58.866 11:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.866 11:30:28 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:58.866 11:30:28 -- accel/accel.sh@20 -- # IFS=: 00:06:58.866 11:30:28 -- accel/accel.sh@20 -- # read -r var val 00:06:58.866 11:30:28 -- accel/accel.sh@21 -- # val=2 00:06:58.866 11:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.866 11:30:28 -- accel/accel.sh@20 -- # IFS=: 00:06:58.866 11:30:28 -- accel/accel.sh@20 -- # read -r var val 00:06:58.866 11:30:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:58.866 11:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.866 11:30:28 -- accel/accel.sh@20 -- # IFS=: 00:06:58.866 11:30:28 -- accel/accel.sh@20 -- # read -r var val 00:06:58.866 11:30:28 -- accel/accel.sh@21 -- # val= 00:06:58.866 11:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.866 11:30:28 -- accel/accel.sh@20 -- # IFS=: 00:06:58.866 11:30:28 -- accel/accel.sh@20 -- # read -r var val 00:06:58.866 11:30:28 -- accel/accel.sh@21 -- # val=software 00:06:58.866 11:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.866 11:30:28 -- accel/accel.sh@23 -- # accel_module=software 00:06:58.866 11:30:28 -- accel/accel.sh@20 -- # IFS=: 00:06:58.866 11:30:28 -- accel/accel.sh@20 -- # read -r var val 00:06:58.866 11:30:28 -- accel/accel.sh@21 -- # val=32 00:06:58.866 11:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.866 11:30:28 -- accel/accel.sh@20 -- # IFS=: 00:06:58.866 11:30:28 -- accel/accel.sh@20 -- # read -r var val 00:06:58.866 11:30:28 -- accel/accel.sh@21 -- # val=32 00:06:58.866 11:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.866 11:30:28 -- accel/accel.sh@20 -- # IFS=: 00:06:58.866 11:30:28 -- accel/accel.sh@20 -- # read -r var val 00:06:58.866 11:30:28 -- 
accel/accel.sh@21 -- # val=1 00:06:58.866 11:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.866 11:30:28 -- accel/accel.sh@20 -- # IFS=: 00:06:58.866 11:30:28 -- accel/accel.sh@20 -- # read -r var val 00:06:58.866 11:30:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:58.866 11:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.866 11:30:28 -- accel/accel.sh@20 -- # IFS=: 00:06:58.866 11:30:28 -- accel/accel.sh@20 -- # read -r var val 00:06:58.866 11:30:28 -- accel/accel.sh@21 -- # val=Yes 00:06:58.866 11:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.866 11:30:28 -- accel/accel.sh@20 -- # IFS=: 00:06:58.866 11:30:28 -- accel/accel.sh@20 -- # read -r var val 00:06:58.866 11:30:28 -- accel/accel.sh@21 -- # val= 00:06:58.866 11:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.866 11:30:28 -- accel/accel.sh@20 -- # IFS=: 00:06:58.866 11:30:28 -- accel/accel.sh@20 -- # read -r var val 00:06:58.866 11:30:28 -- accel/accel.sh@21 -- # val= 00:06:58.866 11:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.866 11:30:28 -- accel/accel.sh@20 -- # IFS=: 00:06:58.866 11:30:28 -- accel/accel.sh@20 -- # read -r var val 00:07:00.241 11:30:29 -- accel/accel.sh@21 -- # val= 00:07:00.241 11:30:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.241 11:30:29 -- accel/accel.sh@20 -- # IFS=: 00:07:00.241 11:30:29 -- accel/accel.sh@20 -- # read -r var val 00:07:00.241 11:30:29 -- accel/accel.sh@21 -- # val= 00:07:00.241 11:30:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.241 11:30:29 -- accel/accel.sh@20 -- # IFS=: 00:07:00.241 11:30:29 -- accel/accel.sh@20 -- # read -r var val 00:07:00.241 11:30:29 -- accel/accel.sh@21 -- # val= 00:07:00.241 11:30:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.241 11:30:29 -- accel/accel.sh@20 -- # IFS=: 00:07:00.241 11:30:29 -- accel/accel.sh@20 -- # read -r var val 00:07:00.241 11:30:29 -- accel/accel.sh@21 -- # val= 00:07:00.241 11:30:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.241 11:30:29 -- accel/accel.sh@20 -- # IFS=: 00:07:00.241 11:30:29 -- accel/accel.sh@20 -- # read -r var val 00:07:00.241 11:30:29 -- accel/accel.sh@21 -- # val= 00:07:00.241 11:30:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.241 11:30:29 -- accel/accel.sh@20 -- # IFS=: 00:07:00.241 11:30:29 -- accel/accel.sh@20 -- # read -r var val 00:07:00.241 11:30:29 -- accel/accel.sh@21 -- # val= 00:07:00.241 11:30:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.241 11:30:29 -- accel/accel.sh@20 -- # IFS=: 00:07:00.241 11:30:29 -- accel/accel.sh@20 -- # read -r var val 00:07:00.241 11:30:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:00.241 11:30:29 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:00.241 11:30:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.241 00:07:00.241 real 0m2.630s 00:07:00.241 user 0m2.367s 00:07:00.241 sys 0m0.272s 00:07:00.241 11:30:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.241 11:30:29 -- common/autotest_common.sh@10 -- # set +x 00:07:00.241 ************************************ 00:07:00.241 END TEST accel_xor 00:07:00.241 ************************************ 00:07:00.241 11:30:29 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:00.241 11:30:29 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:00.241 11:30:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:00.241 11:30:29 -- common/autotest_common.sh@10 -- # set +x 00:07:00.241 ************************************ 00:07:00.241 START TEST accel_xor 
00:07:00.241 ************************************ 00:07:00.241 11:30:29 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:07:00.241 11:30:29 -- accel/accel.sh@16 -- # local accel_opc 00:07:00.241 11:30:29 -- accel/accel.sh@17 -- # local accel_module 00:07:00.241 11:30:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:00.241 11:30:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:00.241 11:30:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.241 11:30:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.241 11:30:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.241 11:30:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.241 11:30:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.241 11:30:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.241 11:30:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.241 11:30:29 -- accel/accel.sh@42 -- # jq -r . 00:07:00.241 [2024-07-21 11:30:29.345621] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:00.241 [2024-07-21 11:30:29.345851] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2189267 ] 00:07:00.241 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.241 [2024-07-21 11:30:29.430801] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.241 [2024-07-21 11:30:29.465871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.616 11:30:30 -- accel/accel.sh@18 -- # out=' 00:07:01.616 SPDK Configuration: 00:07:01.616 Core mask: 0x1 00:07:01.616 00:07:01.616 Accel Perf Configuration: 00:07:01.616 Workload Type: xor 00:07:01.616 Source buffers: 3 00:07:01.616 Transfer size: 4096 bytes 00:07:01.616 Vector count 1 00:07:01.616 Module: software 00:07:01.616 Queue depth: 32 00:07:01.616 Allocate depth: 32 00:07:01.616 # threads/core: 1 00:07:01.616 Run time: 1 seconds 00:07:01.616 Verify: Yes 00:07:01.616 00:07:01.616 Running for 1 seconds... 00:07:01.616 00:07:01.616 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:01.616 ------------------------------------------------------------------------------------ 00:07:01.616 0,0 466016/s 1820 MiB/s 0 0 00:07:01.616 ==================================================================================== 00:07:01.616 Total 466016/s 1820 MiB/s 0 0' 00:07:01.616 11:30:30 -- accel/accel.sh@20 -- # IFS=: 00:07:01.616 11:30:30 -- accel/accel.sh@20 -- # read -r var val 00:07:01.616 11:30:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:01.616 11:30:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.616 11:30:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.616 11:30:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.616 11:30:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:01.616 11:30:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.616 11:30:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.616 11:30:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.616 11:30:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.616 11:30:30 -- accel/accel.sh@42 -- # jq -r . 00:07:01.616 [2024-07-21 11:30:30.659263] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
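The -x 3 rerun above differs only in folding a third source buffer into the result, and the cost of streaming one extra buffer shows up as 466016/s against 493120/s for the two-source run. A generalized N-source sketch, again illustrative rather than SPDK's code:

#include <stddef.h>
#include <stdint.h>

/* N-source xor reduction; nsrc == 3 matches "Source buffers: 3". */
static void xor_n(uint8_t *dst, const uint8_t *const *srcs,
                  int nsrc, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        uint8_t v = srcs[0][i];
        for (int s = 1; s < nsrc; s++)
            v ^= srcs[s][i];
        dst[i] = v;
    }
}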
00:07:01.616 [2024-07-21 11:30:30.659339] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2189430 ] 00:07:01.616 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.616 [2024-07-21 11:30:30.742484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.616 [2024-07-21 11:30:30.777947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.616 11:30:30 -- accel/accel.sh@21 -- # val= 00:07:01.616 11:30:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.616 11:30:30 -- accel/accel.sh@20 -- # IFS=: 00:07:01.616 11:30:30 -- accel/accel.sh@20 -- # read -r var val 00:07:01.616 11:30:30 -- accel/accel.sh@21 -- # val= 00:07:01.616 11:30:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.616 11:30:30 -- accel/accel.sh@20 -- # IFS=: 00:07:01.616 11:30:30 -- accel/accel.sh@20 -- # read -r var val 00:07:01.616 11:30:30 -- accel/accel.sh@21 -- # val=0x1 00:07:01.616 11:30:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.616 11:30:30 -- accel/accel.sh@20 -- # IFS=: 00:07:01.616 11:30:30 -- accel/accel.sh@20 -- # read -r var val 00:07:01.616 11:30:30 -- accel/accel.sh@21 -- # val= 00:07:01.616 11:30:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.616 11:30:30 -- accel/accel.sh@20 -- # IFS=: 00:07:01.616 11:30:30 -- accel/accel.sh@20 -- # read -r var val 00:07:01.616 11:30:30 -- accel/accel.sh@21 -- # val= 00:07:01.616 11:30:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.616 11:30:30 -- accel/accel.sh@20 -- # IFS=: 00:07:01.616 11:30:30 -- accel/accel.sh@20 -- # read -r var val 00:07:01.616 11:30:30 -- accel/accel.sh@21 -- # val=xor 00:07:01.617 11:30:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.617 11:30:30 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:01.617 11:30:30 -- accel/accel.sh@20 -- # IFS=: 00:07:01.617 11:30:30 -- accel/accel.sh@20 -- # read -r var val 00:07:01.617 11:30:30 -- accel/accel.sh@21 -- # val=3 00:07:01.617 11:30:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.617 11:30:30 -- accel/accel.sh@20 -- # IFS=: 00:07:01.617 11:30:30 -- accel/accel.sh@20 -- # read -r var val 00:07:01.617 11:30:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:01.617 11:30:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.617 11:30:30 -- accel/accel.sh@20 -- # IFS=: 00:07:01.617 11:30:30 -- accel/accel.sh@20 -- # read -r var val 00:07:01.617 11:30:30 -- accel/accel.sh@21 -- # val= 00:07:01.617 11:30:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.617 11:30:30 -- accel/accel.sh@20 -- # IFS=: 00:07:01.617 11:30:30 -- accel/accel.sh@20 -- # read -r var val 00:07:01.617 11:30:30 -- accel/accel.sh@21 -- # val=software 00:07:01.617 11:30:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.617 11:30:30 -- accel/accel.sh@23 -- # accel_module=software 00:07:01.617 11:30:30 -- accel/accel.sh@20 -- # IFS=: 00:07:01.617 11:30:30 -- accel/accel.sh@20 -- # read -r var val 00:07:01.617 11:30:30 -- accel/accel.sh@21 -- # val=32 00:07:01.617 11:30:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.617 11:30:30 -- accel/accel.sh@20 -- # IFS=: 00:07:01.617 11:30:30 -- accel/accel.sh@20 -- # read -r var val 00:07:01.617 11:30:30 -- accel/accel.sh@21 -- # val=32 00:07:01.617 11:30:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.617 11:30:30 -- accel/accel.sh@20 -- # IFS=: 00:07:01.617 11:30:30 -- accel/accel.sh@20 -- # read -r var val 00:07:01.617 11:30:30 -- 
accel/accel.sh@21 -- # val=1 00:07:01.617 11:30:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.617 11:30:30 -- accel/accel.sh@20 -- # IFS=: 00:07:01.617 11:30:30 -- accel/accel.sh@20 -- # read -r var val 00:07:01.617 11:30:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:01.617 11:30:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.617 11:30:30 -- accel/accel.sh@20 -- # IFS=: 00:07:01.617 11:30:30 -- accel/accel.sh@20 -- # read -r var val 00:07:01.617 11:30:30 -- accel/accel.sh@21 -- # val=Yes 00:07:01.617 11:30:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.617 11:30:30 -- accel/accel.sh@20 -- # IFS=: 00:07:01.617 11:30:30 -- accel/accel.sh@20 -- # read -r var val 00:07:01.617 11:30:30 -- accel/accel.sh@21 -- # val= 00:07:01.617 11:30:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.617 11:30:30 -- accel/accel.sh@20 -- # IFS=: 00:07:01.617 11:30:30 -- accel/accel.sh@20 -- # read -r var val 00:07:01.617 11:30:30 -- accel/accel.sh@21 -- # val= 00:07:01.617 11:30:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.617 11:30:30 -- accel/accel.sh@20 -- # IFS=: 00:07:01.617 11:30:30 -- accel/accel.sh@20 -- # read -r var val 00:07:02.550 11:30:31 -- accel/accel.sh@21 -- # val= 00:07:02.550 11:30:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.550 11:30:31 -- accel/accel.sh@20 -- # IFS=: 00:07:02.550 11:30:31 -- accel/accel.sh@20 -- # read -r var val 00:07:02.550 11:30:31 -- accel/accel.sh@21 -- # val= 00:07:02.550 11:30:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.550 11:30:31 -- accel/accel.sh@20 -- # IFS=: 00:07:02.550 11:30:31 -- accel/accel.sh@20 -- # read -r var val 00:07:02.550 11:30:31 -- accel/accel.sh@21 -- # val= 00:07:02.550 11:30:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.550 11:30:31 -- accel/accel.sh@20 -- # IFS=: 00:07:02.550 11:30:31 -- accel/accel.sh@20 -- # read -r var val 00:07:02.550 11:30:31 -- accel/accel.sh@21 -- # val= 00:07:02.550 11:30:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.550 11:30:31 -- accel/accel.sh@20 -- # IFS=: 00:07:02.550 11:30:31 -- accel/accel.sh@20 -- # read -r var val 00:07:02.550 11:30:31 -- accel/accel.sh@21 -- # val= 00:07:02.550 11:30:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.550 11:30:31 -- accel/accel.sh@20 -- # IFS=: 00:07:02.550 11:30:31 -- accel/accel.sh@20 -- # read -r var val 00:07:02.550 11:30:31 -- accel/accel.sh@21 -- # val= 00:07:02.550 11:30:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.550 11:30:31 -- accel/accel.sh@20 -- # IFS=: 00:07:02.550 11:30:31 -- accel/accel.sh@20 -- # read -r var val 00:07:02.550 11:30:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:02.550 11:30:31 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:02.550 11:30:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.550 00:07:02.550 real 0m2.632s 00:07:02.550 user 0m2.359s 00:07:02.550 sys 0m0.282s 00:07:02.550 11:30:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.550 11:30:31 -- common/autotest_common.sh@10 -- # set +x 00:07:02.550 ************************************ 00:07:02.550 END TEST accel_xor 00:07:02.550 ************************************ 00:07:02.808 11:30:31 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:02.808 11:30:31 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:02.808 11:30:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:02.808 11:30:31 -- common/autotest_common.sh@10 -- # set +x 00:07:02.808 ************************************ 00:07:02.808 START TEST 
accel_dif_verify 00:07:02.808 ************************************ 00:07:02.808 11:30:31 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:07:02.808 11:30:31 -- accel/accel.sh@16 -- # local accel_opc 00:07:02.808 11:30:31 -- accel/accel.sh@17 -- # local accel_module 00:07:02.808 11:30:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:02.808 11:30:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:02.808 11:30:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.808 11:30:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.808 11:30:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.808 11:30:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.808 11:30:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.808 11:30:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.808 11:30:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.808 11:30:31 -- accel/accel.sh@42 -- # jq -r . 00:07:02.808 [2024-07-21 11:30:32.024308] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:02.808 [2024-07-21 11:30:32.024392] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2189632 ] 00:07:02.808 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.808 [2024-07-21 11:30:32.109434] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.808 [2024-07-21 11:30:32.144888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.179 11:30:33 -- accel/accel.sh@18 -- # out=' 00:07:04.179 SPDK Configuration: 00:07:04.179 Core mask: 0x1 00:07:04.179 00:07:04.179 Accel Perf Configuration: 00:07:04.179 Workload Type: dif_verify 00:07:04.179 Vector size: 4096 bytes 00:07:04.179 Transfer size: 4096 bytes 00:07:04.179 Block size: 512 bytes 00:07:04.179 Metadata size: 8 bytes 00:07:04.179 Vector count 1 00:07:04.179 Module: software 00:07:04.179 Queue depth: 32 00:07:04.179 Allocate depth: 32 00:07:04.179 # threads/core: 1 00:07:04.179 Run time: 1 seconds 00:07:04.179 Verify: No 00:07:04.179 00:07:04.179 Running for 1 seconds... 00:07:04.179 00:07:04.179 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:04.179 ------------------------------------------------------------------------------------ 00:07:04.179 0,0 137248/s 544 MiB/s 0 0 00:07:04.179 ==================================================================================== 00:07:04.179 Total 137248/s 536 MiB/s 0 0' 00:07:04.179 11:30:33 -- accel/accel.sh@20 -- # IFS=: 00:07:04.179 11:30:33 -- accel/accel.sh@20 -- # read -r var val 00:07:04.179 11:30:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:04.179 11:30:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:04.179 11:30:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.179 11:30:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.179 11:30:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.179 11:30:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.179 11:30:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.179 11:30:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.179 11:30:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.179 11:30:33 -- accel/accel.sh@42 -- # jq -r . 
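Every "Total" row in these result tables is simply transfers per second multiplied by the 4096-byte transfer size. A quick shell check, plugging in the xor and dif_verify totals printed above (a sketch relying on bash's 64-bit integer arithmetic):

    # transfers/s * 4096 B, reduced to MiB/s
    echo $(( 466016 * 4096 / 1024 / 1024 ))   # xor:        1820 MiB/s
    echo $(( 137248 * 4096 / 1024 / 1024 ))   # dif_verify:  536 MiB/s

Both agree with the Total rows in the tables above.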
00:07:04.179 [2024-07-21 11:30:33.325516] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:04.179 [2024-07-21 11:30:33.325570] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2189846 ] 00:07:04.179 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.179 [2024-07-21 11:30:33.405306] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.179 [2024-07-21 11:30:33.440446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.179 11:30:33 -- accel/accel.sh@21 -- # val= 00:07:04.179 11:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.179 11:30:33 -- accel/accel.sh@20 -- # IFS=: 00:07:04.179 11:30:33 -- accel/accel.sh@20 -- # read -r var val 00:07:04.179 11:30:33 -- accel/accel.sh@21 -- # val= 00:07:04.179 11:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.179 11:30:33 -- accel/accel.sh@20 -- # IFS=: 00:07:04.179 11:30:33 -- accel/accel.sh@20 -- # read -r var val 00:07:04.179 11:30:33 -- accel/accel.sh@21 -- # val=0x1 00:07:04.179 11:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.179 11:30:33 -- accel/accel.sh@20 -- # IFS=: 00:07:04.180 11:30:33 -- accel/accel.sh@20 -- # read -r var val 00:07:04.180 11:30:33 -- accel/accel.sh@21 -- # val= 00:07:04.180 11:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.180 11:30:33 -- accel/accel.sh@20 -- # IFS=: 00:07:04.180 11:30:33 -- accel/accel.sh@20 -- # read -r var val 00:07:04.180 11:30:33 -- accel/accel.sh@21 -- # val= 00:07:04.180 11:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.180 11:30:33 -- accel/accel.sh@20 -- # IFS=: 00:07:04.180 11:30:33 -- accel/accel.sh@20 -- # read -r var val 00:07:04.180 11:30:33 -- accel/accel.sh@21 -- # val=dif_verify 00:07:04.180 11:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.180 11:30:33 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:04.180 11:30:33 -- accel/accel.sh@20 -- # IFS=: 00:07:04.180 11:30:33 -- accel/accel.sh@20 -- # read -r var val 00:07:04.180 11:30:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:04.180 11:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.180 11:30:33 -- accel/accel.sh@20 -- # IFS=: 00:07:04.180 11:30:33 -- accel/accel.sh@20 -- # read -r var val 00:07:04.180 11:30:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:04.180 11:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.180 11:30:33 -- accel/accel.sh@20 -- # IFS=: 00:07:04.180 11:30:33 -- accel/accel.sh@20 -- # read -r var val 00:07:04.180 11:30:33 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:04.180 11:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.180 11:30:33 -- accel/accel.sh@20 -- # IFS=: 00:07:04.180 11:30:33 -- accel/accel.sh@20 -- # read -r var val 00:07:04.180 11:30:33 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:04.180 11:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.180 11:30:33 -- accel/accel.sh@20 -- # IFS=: 00:07:04.180 11:30:33 -- accel/accel.sh@20 -- # read -r var val 00:07:04.180 11:30:33 -- accel/accel.sh@21 -- # val= 00:07:04.180 11:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.180 11:30:33 -- accel/accel.sh@20 -- # IFS=: 00:07:04.180 11:30:33 -- accel/accel.sh@20 -- # read -r var val 00:07:04.180 11:30:33 -- accel/accel.sh@21 -- # val=software 00:07:04.180 11:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.180 11:30:33 -- accel/accel.sh@23 -- # 
accel_module=software 00:07:04.180 11:30:33 -- accel/accel.sh@20 -- # IFS=: 00:07:04.180 11:30:33 -- accel/accel.sh@20 -- # read -r var val 00:07:04.180 11:30:33 -- accel/accel.sh@21 -- # val=32 00:07:04.180 11:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.180 11:30:33 -- accel/accel.sh@20 -- # IFS=: 00:07:04.180 11:30:33 -- accel/accel.sh@20 -- # read -r var val 00:07:04.180 11:30:33 -- accel/accel.sh@21 -- # val=32 00:07:04.180 11:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.180 11:30:33 -- accel/accel.sh@20 -- # IFS=: 00:07:04.180 11:30:33 -- accel/accel.sh@20 -- # read -r var val 00:07:04.180 11:30:33 -- accel/accel.sh@21 -- # val=1 00:07:04.180 11:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.180 11:30:33 -- accel/accel.sh@20 -- # IFS=: 00:07:04.180 11:30:33 -- accel/accel.sh@20 -- # read -r var val 00:07:04.180 11:30:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:04.180 11:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.180 11:30:33 -- accel/accel.sh@20 -- # IFS=: 00:07:04.180 11:30:33 -- accel/accel.sh@20 -- # read -r var val 00:07:04.180 11:30:33 -- accel/accel.sh@21 -- # val=No 00:07:04.180 11:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.180 11:30:33 -- accel/accel.sh@20 -- # IFS=: 00:07:04.180 11:30:33 -- accel/accel.sh@20 -- # read -r var val 00:07:04.180 11:30:33 -- accel/accel.sh@21 -- # val= 00:07:04.180 11:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.180 11:30:33 -- accel/accel.sh@20 -- # IFS=: 00:07:04.180 11:30:33 -- accel/accel.sh@20 -- # read -r var val 00:07:04.180 11:30:33 -- accel/accel.sh@21 -- # val= 00:07:04.180 11:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.180 11:30:33 -- accel/accel.sh@20 -- # IFS=: 00:07:04.180 11:30:33 -- accel/accel.sh@20 -- # read -r var val 00:07:05.547 11:30:34 -- accel/accel.sh@21 -- # val= 00:07:05.547 11:30:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.547 11:30:34 -- accel/accel.sh@20 -- # IFS=: 00:07:05.547 11:30:34 -- accel/accel.sh@20 -- # read -r var val 00:07:05.547 11:30:34 -- accel/accel.sh@21 -- # val= 00:07:05.547 11:30:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.547 11:30:34 -- accel/accel.sh@20 -- # IFS=: 00:07:05.547 11:30:34 -- accel/accel.sh@20 -- # read -r var val 00:07:05.547 11:30:34 -- accel/accel.sh@21 -- # val= 00:07:05.547 11:30:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.547 11:30:34 -- accel/accel.sh@20 -- # IFS=: 00:07:05.547 11:30:34 -- accel/accel.sh@20 -- # read -r var val 00:07:05.547 11:30:34 -- accel/accel.sh@21 -- # val= 00:07:05.548 11:30:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.548 11:30:34 -- accel/accel.sh@20 -- # IFS=: 00:07:05.548 11:30:34 -- accel/accel.sh@20 -- # read -r var val 00:07:05.548 11:30:34 -- accel/accel.sh@21 -- # val= 00:07:05.548 11:30:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.548 11:30:34 -- accel/accel.sh@20 -- # IFS=: 00:07:05.548 11:30:34 -- accel/accel.sh@20 -- # read -r var val 00:07:05.548 11:30:34 -- accel/accel.sh@21 -- # val= 00:07:05.548 11:30:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.548 11:30:34 -- accel/accel.sh@20 -- # IFS=: 00:07:05.548 11:30:34 -- accel/accel.sh@20 -- # read -r var val 00:07:05.548 11:30:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:05.548 11:30:34 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:05.548 11:30:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.548 00:07:05.548 real 0m2.618s 00:07:05.548 user 0m2.355s 00:07:05.548 sys 0m0.273s 00:07:05.548 11:30:34 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.548 11:30:34 -- common/autotest_common.sh@10 -- # set +x 00:07:05.548 ************************************ 00:07:05.548 END TEST accel_dif_verify 00:07:05.548 ************************************ 00:07:05.548 11:30:34 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:05.548 11:30:34 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:05.548 11:30:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:05.548 11:30:34 -- common/autotest_common.sh@10 -- # set +x 00:07:05.548 ************************************ 00:07:05.548 START TEST accel_dif_generate 00:07:05.548 ************************************ 00:07:05.548 11:30:34 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:07:05.548 11:30:34 -- accel/accel.sh@16 -- # local accel_opc 00:07:05.548 11:30:34 -- accel/accel.sh@17 -- # local accel_module 00:07:05.548 11:30:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:05.548 11:30:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.548 11:30:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:05.548 11:30:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.548 11:30:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.548 11:30:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.548 11:30:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.548 11:30:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.548 11:30:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.548 11:30:34 -- accel/accel.sh@42 -- # jq -r . 00:07:05.548 [2024-07-21 11:30:34.684842] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:05.548 [2024-07-21 11:30:34.684924] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2190135 ] 00:07:05.548 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.548 [2024-07-21 11:30:34.766206] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.548 [2024-07-21 11:30:34.801114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.918 11:30:35 -- accel/accel.sh@18 -- # out=' 00:07:06.918 SPDK Configuration: 00:07:06.918 Core mask: 0x1 00:07:06.918 00:07:06.918 Accel Perf Configuration: 00:07:06.918 Workload Type: dif_generate 00:07:06.918 Vector size: 4096 bytes 00:07:06.918 Transfer size: 4096 bytes 00:07:06.918 Block size: 512 bytes 00:07:06.918 Metadata size: 8 bytes 00:07:06.918 Vector count 1 00:07:06.918 Module: software 00:07:06.918 Queue depth: 32 00:07:06.918 Allocate depth: 32 00:07:06.918 # threads/core: 1 00:07:06.918 Run time: 1 seconds 00:07:06.918 Verify: No 00:07:06.918 00:07:06.918 Running for 1 seconds... 
00:07:06.918 00:07:06.918 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:06.918 ------------------------------------------------------------------------------------ 00:07:06.918 0,0 165760/s 657 MiB/s 0 0 00:07:06.918 ==================================================================================== 00:07:06.918 Total 165760/s 647 MiB/s 0 0' 00:07:06.918 11:30:35 -- accel/accel.sh@20 -- # IFS=: 00:07:06.918 11:30:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:06.918 11:30:35 -- accel/accel.sh@20 -- # read -r var val 00:07:06.918 11:30:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.918 11:30:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:06.918 11:30:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.918 11:30:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.918 11:30:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.918 11:30:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.918 11:30:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.918 11:30:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.918 11:30:35 -- accel/accel.sh@42 -- # jq -r . 00:07:06.918 [2024-07-21 11:30:35.991927] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:06.918 [2024-07-21 11:30:35.991992] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2190401 ] 00:07:06.918 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.919 [2024-07-21 11:30:36.074283] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.919 [2024-07-21 11:30:36.108735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.919 11:30:36 -- accel/accel.sh@21 -- # val= 00:07:06.919 11:30:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.919 11:30:36 -- accel/accel.sh@20 -- # IFS=: 00:07:06.919 11:30:36 -- accel/accel.sh@20 -- # read -r var val 00:07:06.919 11:30:36 -- accel/accel.sh@21 -- # val= 00:07:06.919 11:30:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.919 11:30:36 -- accel/accel.sh@20 -- # IFS=: 00:07:06.919 11:30:36 -- accel/accel.sh@20 -- # read -r var val 00:07:06.919 11:30:36 -- accel/accel.sh@21 -- # val=0x1 00:07:06.919 11:30:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.919 11:30:36 -- accel/accel.sh@20 -- # IFS=: 00:07:06.919 11:30:36 -- accel/accel.sh@20 -- # read -r var val 00:07:06.919 11:30:36 -- accel/accel.sh@21 -- # val= 00:07:06.919 11:30:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.919 11:30:36 -- accel/accel.sh@20 -- # IFS=: 00:07:06.919 11:30:36 -- accel/accel.sh@20 -- # read -r var val 00:07:06.919 11:30:36 -- accel/accel.sh@21 -- # val= 00:07:06.919 11:30:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.919 11:30:36 -- accel/accel.sh@20 -- # IFS=: 00:07:06.919 11:30:36 -- accel/accel.sh@20 -- # read -r var val 00:07:06.919 11:30:36 -- accel/accel.sh@21 -- # val=dif_generate 00:07:06.919 11:30:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.919 11:30:36 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:06.919 11:30:36 -- accel/accel.sh@20 -- # IFS=: 00:07:06.919 11:30:36 -- accel/accel.sh@20 -- # read -r var val 00:07:06.919 11:30:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:06.919 11:30:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.919 11:30:36 -- accel/accel.sh@20 -- # IFS=: 
00:07:06.919 11:30:36 -- accel/accel.sh@20 -- # read -r var val 00:07:06.919 11:30:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:06.919 11:30:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.919 11:30:36 -- accel/accel.sh@20 -- # IFS=: 00:07:06.919 11:30:36 -- accel/accel.sh@20 -- # read -r var val 00:07:06.919 11:30:36 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:06.919 11:30:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.919 11:30:36 -- accel/accel.sh@20 -- # IFS=: 00:07:06.919 11:30:36 -- accel/accel.sh@20 -- # read -r var val 00:07:06.919 11:30:36 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:06.919 11:30:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.919 11:30:36 -- accel/accel.sh@20 -- # IFS=: 00:07:06.919 11:30:36 -- accel/accel.sh@20 -- # read -r var val 00:07:06.919 11:30:36 -- accel/accel.sh@21 -- # val= 00:07:06.919 11:30:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.919 11:30:36 -- accel/accel.sh@20 -- # IFS=: 00:07:06.919 11:30:36 -- accel/accel.sh@20 -- # read -r var val 00:07:06.919 11:30:36 -- accel/accel.sh@21 -- # val=software 00:07:06.919 11:30:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.919 11:30:36 -- accel/accel.sh@23 -- # accel_module=software 00:07:06.919 11:30:36 -- accel/accel.sh@20 -- # IFS=: 00:07:06.919 11:30:36 -- accel/accel.sh@20 -- # read -r var val 00:07:06.919 11:30:36 -- accel/accel.sh@21 -- # val=32 00:07:06.919 11:30:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.919 11:30:36 -- accel/accel.sh@20 -- # IFS=: 00:07:06.919 11:30:36 -- accel/accel.sh@20 -- # read -r var val 00:07:06.919 11:30:36 -- accel/accel.sh@21 -- # val=32 00:07:06.919 11:30:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.919 11:30:36 -- accel/accel.sh@20 -- # IFS=: 00:07:06.919 11:30:36 -- accel/accel.sh@20 -- # read -r var val 00:07:06.919 11:30:36 -- accel/accel.sh@21 -- # val=1 00:07:06.919 11:30:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.919 11:30:36 -- accel/accel.sh@20 -- # IFS=: 00:07:06.919 11:30:36 -- accel/accel.sh@20 -- # read -r var val 00:07:06.919 11:30:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:06.919 11:30:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.919 11:30:36 -- accel/accel.sh@20 -- # IFS=: 00:07:06.919 11:30:36 -- accel/accel.sh@20 -- # read -r var val 00:07:06.919 11:30:36 -- accel/accel.sh@21 -- # val=No 00:07:06.919 11:30:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.919 11:30:36 -- accel/accel.sh@20 -- # IFS=: 00:07:06.919 11:30:36 -- accel/accel.sh@20 -- # read -r var val 00:07:06.919 11:30:36 -- accel/accel.sh@21 -- # val= 00:07:06.919 11:30:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.919 11:30:36 -- accel/accel.sh@20 -- # IFS=: 00:07:06.919 11:30:36 -- accel/accel.sh@20 -- # read -r var val 00:07:06.919 11:30:36 -- accel/accel.sh@21 -- # val= 00:07:06.919 11:30:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.919 11:30:36 -- accel/accel.sh@20 -- # IFS=: 00:07:06.919 11:30:36 -- accel/accel.sh@20 -- # read -r var val 00:07:07.854 11:30:37 -- accel/accel.sh@21 -- # val= 00:07:07.854 11:30:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.854 11:30:37 -- accel/accel.sh@20 -- # IFS=: 00:07:07.854 11:30:37 -- accel/accel.sh@20 -- # read -r var val 00:07:07.854 11:30:37 -- accel/accel.sh@21 -- # val= 00:07:07.854 11:30:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.854 11:30:37 -- accel/accel.sh@20 -- # IFS=: 00:07:07.854 11:30:37 -- accel/accel.sh@20 -- # read -r var val 00:07:07.854 11:30:37 -- accel/accel.sh@21 -- # val= 00:07:07.854 11:30:37 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:07.854 11:30:37 -- accel/accel.sh@20 -- # IFS=: 00:07:08.112 11:30:37 -- accel/accel.sh@20 -- # read -r var val 00:07:08.112 11:30:37 -- accel/accel.sh@21 -- # val= 00:07:08.112 11:30:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.112 11:30:37 -- accel/accel.sh@20 -- # IFS=: 00:07:08.112 11:30:37 -- accel/accel.sh@20 -- # read -r var val 00:07:08.112 11:30:37 -- accel/accel.sh@21 -- # val= 00:07:08.112 11:30:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.112 11:30:37 -- accel/accel.sh@20 -- # IFS=: 00:07:08.112 11:30:37 -- accel/accel.sh@20 -- # read -r var val 00:07:08.112 11:30:37 -- accel/accel.sh@21 -- # val= 00:07:08.112 11:30:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.112 11:30:37 -- accel/accel.sh@20 -- # IFS=: 00:07:08.112 11:30:37 -- accel/accel.sh@20 -- # read -r var val 00:07:08.112 11:30:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:08.112 11:30:37 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:08.112 11:30:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.112 00:07:08.112 real 0m2.623s 00:07:08.112 user 0m2.351s 00:07:08.112 sys 0m0.282s 00:07:08.112 11:30:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.112 11:30:37 -- common/autotest_common.sh@10 -- # set +x 00:07:08.112 ************************************ 00:07:08.112 END TEST accel_dif_generate 00:07:08.112 ************************************ 00:07:08.112 11:30:37 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:08.112 11:30:37 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:08.112 11:30:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:08.112 11:30:37 -- common/autotest_common.sh@10 -- # set +x 00:07:08.112 ************************************ 00:07:08.112 START TEST accel_dif_generate_copy 00:07:08.112 ************************************ 00:07:08.112 11:30:37 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:07:08.112 11:30:37 -- accel/accel.sh@16 -- # local accel_opc 00:07:08.112 11:30:37 -- accel/accel.sh@17 -- # local accel_module 00:07:08.112 11:30:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:08.112 11:30:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:08.112 11:30:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.112 11:30:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.112 11:30:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.112 11:30:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.112 11:30:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.112 11:30:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.112 11:30:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.112 11:30:37 -- accel/accel.sh@42 -- # jq -r . 00:07:08.112 [2024-07-21 11:30:37.354363] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
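The three dif_* workloads exercise T10-DIF-style protection information: with the settings shown above, each 4096-byte transfer is treated as 512-byte blocks carrying 8 bytes of integrity metadata each. dif_verify checks existing fields, dif_generate computes them, and dif_generate_copy, which starts above, computes them while also copying the payload to a destination buffer. The dif_generate Total row matches the same transfer arithmetic:

    echo $(( 165760 * 4096 / 1024 / 1024 ))   # dif_generate: 647 MiB/s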
00:07:08.112 [2024-07-21 11:30:37.354449] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2190684 ] 00:07:08.112 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.112 [2024-07-21 11:30:37.438147] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.112 [2024-07-21 11:30:37.473275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.487 11:30:38 -- accel/accel.sh@18 -- # out=' 00:07:09.487 SPDK Configuration: 00:07:09.487 Core mask: 0x1 00:07:09.487 00:07:09.487 Accel Perf Configuration: 00:07:09.487 Workload Type: dif_generate_copy 00:07:09.487 Vector size: 4096 bytes 00:07:09.487 Transfer size: 4096 bytes 00:07:09.487 Vector count 1 00:07:09.487 Module: software 00:07:09.487 Queue depth: 32 00:07:09.487 Allocate depth: 32 00:07:09.487 # threads/core: 1 00:07:09.487 Run time: 1 seconds 00:07:09.487 Verify: No 00:07:09.487 00:07:09.487 Running for 1 seconds... 00:07:09.487 00:07:09.487 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:09.487 ------------------------------------------------------------------------------------ 00:07:09.487 0,0 126720/s 502 MiB/s 0 0 00:07:09.487 ==================================================================================== 00:07:09.487 Total 126720/s 495 MiB/s 0 0' 00:07:09.487 11:30:38 -- accel/accel.sh@20 -- # IFS=: 00:07:09.487 11:30:38 -- accel/accel.sh@20 -- # read -r var val 00:07:09.487 11:30:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:09.487 11:30:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:09.487 11:30:38 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.487 11:30:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.487 11:30:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.487 11:30:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.487 11:30:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.487 11:30:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.487 11:30:38 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.487 11:30:38 -- accel/accel.sh@42 -- # jq -r . 00:07:09.487 [2024-07-21 11:30:38.667874] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
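dif_generate_copy lands below plain dif_generate in the table above, which is consistent with the extra payload copy it performs per operation; its Total row also checks out:

    echo $(( 126720 * 4096 / 1024 / 1024 ))   # dif_generate_copy: 495 MiB/s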
00:07:09.487 [2024-07-21 11:30:38.667962] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2190950 ] 00:07:09.487 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.487 [2024-07-21 11:30:38.751120] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.487 [2024-07-21 11:30:38.785328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.487 11:30:38 -- accel/accel.sh@21 -- # val= 00:07:09.487 11:30:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.487 11:30:38 -- accel/accel.sh@20 -- # IFS=: 00:07:09.487 11:30:38 -- accel/accel.sh@20 -- # read -r var val 00:07:09.487 11:30:38 -- accel/accel.sh@21 -- # val= 00:07:09.487 11:30:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.487 11:30:38 -- accel/accel.sh@20 -- # IFS=: 00:07:09.487 11:30:38 -- accel/accel.sh@20 -- # read -r var val 00:07:09.487 11:30:38 -- accel/accel.sh@21 -- # val=0x1 00:07:09.487 11:30:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.487 11:30:38 -- accel/accel.sh@20 -- # IFS=: 00:07:09.487 11:30:38 -- accel/accel.sh@20 -- # read -r var val 00:07:09.487 11:30:38 -- accel/accel.sh@21 -- # val= 00:07:09.487 11:30:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.487 11:30:38 -- accel/accel.sh@20 -- # IFS=: 00:07:09.487 11:30:38 -- accel/accel.sh@20 -- # read -r var val 00:07:09.487 11:30:38 -- accel/accel.sh@21 -- # val= 00:07:09.487 11:30:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.487 11:30:38 -- accel/accel.sh@20 -- # IFS=: 00:07:09.487 11:30:38 -- accel/accel.sh@20 -- # read -r var val 00:07:09.487 11:30:38 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:09.487 11:30:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.487 11:30:38 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:09.487 11:30:38 -- accel/accel.sh@20 -- # IFS=: 00:07:09.487 11:30:38 -- accel/accel.sh@20 -- # read -r var val 00:07:09.487 11:30:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:09.487 11:30:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.487 11:30:38 -- accel/accel.sh@20 -- # IFS=: 00:07:09.487 11:30:38 -- accel/accel.sh@20 -- # read -r var val 00:07:09.487 11:30:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:09.487 11:30:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.487 11:30:38 -- accel/accel.sh@20 -- # IFS=: 00:07:09.487 11:30:38 -- accel/accel.sh@20 -- # read -r var val 00:07:09.487 11:30:38 -- accel/accel.sh@21 -- # val= 00:07:09.487 11:30:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.487 11:30:38 -- accel/accel.sh@20 -- # IFS=: 00:07:09.487 11:30:38 -- accel/accel.sh@20 -- # read -r var val 00:07:09.487 11:30:38 -- accel/accel.sh@21 -- # val=software 00:07:09.487 11:30:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.487 11:30:38 -- accel/accel.sh@23 -- # accel_module=software 00:07:09.487 11:30:38 -- accel/accel.sh@20 -- # IFS=: 00:07:09.487 11:30:38 -- accel/accel.sh@20 -- # read -r var val 00:07:09.487 11:30:38 -- accel/accel.sh@21 -- # val=32 00:07:09.487 11:30:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.487 11:30:38 -- accel/accel.sh@20 -- # IFS=: 00:07:09.487 11:30:38 -- accel/accel.sh@20 -- # read -r var val 00:07:09.487 11:30:38 -- accel/accel.sh@21 -- # val=32 00:07:09.487 11:30:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.487 11:30:38 -- accel/accel.sh@20 -- # IFS=: 00:07:09.487 11:30:38 -- accel/accel.sh@20 -- # read -r 
var val 00:07:09.487 11:30:38 -- accel/accel.sh@21 -- # val=1 00:07:09.487 11:30:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.487 11:30:38 -- accel/accel.sh@20 -- # IFS=: 00:07:09.487 11:30:38 -- accel/accel.sh@20 -- # read -r var val 00:07:09.487 11:30:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:09.487 11:30:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.487 11:30:38 -- accel/accel.sh@20 -- # IFS=: 00:07:09.487 11:30:38 -- accel/accel.sh@20 -- # read -r var val 00:07:09.487 11:30:38 -- accel/accel.sh@21 -- # val=No 00:07:09.487 11:30:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.487 11:30:38 -- accel/accel.sh@20 -- # IFS=: 00:07:09.487 11:30:38 -- accel/accel.sh@20 -- # read -r var val 00:07:09.487 11:30:38 -- accel/accel.sh@21 -- # val= 00:07:09.487 11:30:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.487 11:30:38 -- accel/accel.sh@20 -- # IFS=: 00:07:09.487 11:30:38 -- accel/accel.sh@20 -- # read -r var val 00:07:09.487 11:30:38 -- accel/accel.sh@21 -- # val= 00:07:09.487 11:30:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.487 11:30:38 -- accel/accel.sh@20 -- # IFS=: 00:07:09.487 11:30:38 -- accel/accel.sh@20 -- # read -r var val 00:07:10.862 11:30:39 -- accel/accel.sh@21 -- # val= 00:07:10.862 11:30:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.862 11:30:39 -- accel/accel.sh@20 -- # IFS=: 00:07:10.862 11:30:39 -- accel/accel.sh@20 -- # read -r var val 00:07:10.862 11:30:39 -- accel/accel.sh@21 -- # val= 00:07:10.862 11:30:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.862 11:30:39 -- accel/accel.sh@20 -- # IFS=: 00:07:10.862 11:30:39 -- accel/accel.sh@20 -- # read -r var val 00:07:10.862 11:30:39 -- accel/accel.sh@21 -- # val= 00:07:10.862 11:30:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.862 11:30:39 -- accel/accel.sh@20 -- # IFS=: 00:07:10.862 11:30:39 -- accel/accel.sh@20 -- # read -r var val 00:07:10.862 11:30:39 -- accel/accel.sh@21 -- # val= 00:07:10.862 11:30:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.862 11:30:39 -- accel/accel.sh@20 -- # IFS=: 00:07:10.862 11:30:39 -- accel/accel.sh@20 -- # read -r var val 00:07:10.862 11:30:39 -- accel/accel.sh@21 -- # val= 00:07:10.862 11:30:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.862 11:30:39 -- accel/accel.sh@20 -- # IFS=: 00:07:10.862 11:30:39 -- accel/accel.sh@20 -- # read -r var val 00:07:10.862 11:30:39 -- accel/accel.sh@21 -- # val= 00:07:10.862 11:30:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.862 11:30:39 -- accel/accel.sh@20 -- # IFS=: 00:07:10.862 11:30:39 -- accel/accel.sh@20 -- # read -r var val 00:07:10.862 11:30:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:10.862 11:30:39 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:10.862 11:30:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.862 00:07:10.862 real 0m2.632s 00:07:10.862 user 0m2.362s 00:07:10.862 sys 0m0.279s 00:07:10.862 11:30:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.862 11:30:39 -- common/autotest_common.sh@10 -- # set +x 00:07:10.862 ************************************ 00:07:10.862 END TEST accel_dif_generate_copy 00:07:10.862 ************************************ 00:07:10.862 11:30:39 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:10.862 11:30:39 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:10.862 11:30:39 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:10.862 11:30:39 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:07:10.862 11:30:39 -- common/autotest_common.sh@10 -- # set +x 00:07:10.862 ************************************ 00:07:10.862 START TEST accel_comp 00:07:10.862 ************************************ 00:07:10.862 11:30:40 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:10.862 11:30:40 -- accel/accel.sh@16 -- # local accel_opc 00:07:10.862 11:30:40 -- accel/accel.sh@17 -- # local accel_module 00:07:10.862 11:30:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:10.862 11:30:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:10.862 11:30:40 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.862 11:30:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.862 11:30:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.862 11:30:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.862 11:30:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.862 11:30:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.862 11:30:40 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.862 11:30:40 -- accel/accel.sh@42 -- # jq -r . 00:07:10.862 [2024-07-21 11:30:40.020166] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:10.862 [2024-07-21 11:30:40.020224] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2191161 ] 00:07:10.862 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.862 [2024-07-21 11:30:40.102808] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.862 [2024-07-21 11:30:40.139542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.305 11:30:41 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:12.305 00:07:12.305 SPDK Configuration: 00:07:12.305 Core mask: 0x1 00:07:12.305 00:07:12.305 Accel Perf Configuration: 00:07:12.305 Workload Type: compress 00:07:12.305 Transfer size: 4096 bytes 00:07:12.305 Vector count 1 00:07:12.305 Module: software 00:07:12.305 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:12.305 Queue depth: 32 00:07:12.305 Allocate depth: 32 00:07:12.305 # threads/core: 1 00:07:12.305 Run time: 1 seconds 00:07:12.305 Verify: No 00:07:12.305 00:07:12.305 Running for 1 seconds... 
00:07:12.305 00:07:12.305 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:12.305 ------------------------------------------------------------------------------------ 00:07:12.305 0,0 63520/s 264 MiB/s 0 0 00:07:12.305 ==================================================================================== 00:07:12.305 Total 63520/s 248 MiB/s 0 0' 00:07:12.305 11:30:41 -- accel/accel.sh@20 -- # IFS=: 00:07:12.305 11:30:41 -- accel/accel.sh@20 -- # read -r var val 00:07:12.305 11:30:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:12.305 11:30:41 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.305 11:30:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.305 11:30:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:12.305 11:30:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.305 11:30:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.305 11:30:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.305 11:30:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.305 11:30:41 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.305 11:30:41 -- accel/accel.sh@42 -- # jq -r . 00:07:12.305 [2024-07-21 11:30:41.338935] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:12.305 [2024-07-21 11:30:41.339005] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2191313 ] 00:07:12.305 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.305 [2024-07-21 11:30:41.423011] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.305 [2024-07-21 11:30:41.457741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.305 11:30:41 -- accel/accel.sh@21 -- # val= 00:07:12.305 11:30:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.305 11:30:41 -- accel/accel.sh@20 -- # IFS=: 00:07:12.305 11:30:41 -- accel/accel.sh@20 -- # read -r var val 00:07:12.305 11:30:41 -- accel/accel.sh@21 -- # val= 00:07:12.305 11:30:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.305 11:30:41 -- accel/accel.sh@20 -- # IFS=: 00:07:12.305 11:30:41 -- accel/accel.sh@20 -- # read -r var val 00:07:12.305 11:30:41 -- accel/accel.sh@21 -- # val= 00:07:12.305 11:30:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.305 11:30:41 -- accel/accel.sh@20 -- # IFS=: 00:07:12.305 11:30:41 -- accel/accel.sh@20 -- # read -r var val 00:07:12.305 11:30:41 -- accel/accel.sh@21 -- # val=0x1 00:07:12.305 11:30:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.305 11:30:41 -- accel/accel.sh@20 -- # IFS=: 00:07:12.305 11:30:41 -- accel/accel.sh@20 -- # read -r var val 00:07:12.305 11:30:41 -- accel/accel.sh@21 -- # val= 00:07:12.305 11:30:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.305 11:30:41 -- accel/accel.sh@20 -- # IFS=: 00:07:12.305 11:30:41 -- accel/accel.sh@20 -- # read -r var val 00:07:12.305 11:30:41 -- accel/accel.sh@21 -- # val= 00:07:12.305 11:30:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.305 11:30:41 -- accel/accel.sh@20 -- # IFS=: 00:07:12.305 11:30:41 -- accel/accel.sh@20 -- # read -r var val 00:07:12.305 11:30:41 -- accel/accel.sh@21 -- # val=compress 00:07:12.305 11:30:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.305 11:30:41 -- 
accel/accel.sh@24 -- # accel_opc=compress 00:07:12.305 11:30:41 -- accel/accel.sh@20 -- # IFS=: 00:07:12.305 11:30:41 -- accel/accel.sh@20 -- # read -r var val 00:07:12.305 11:30:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:12.305 11:30:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.305 11:30:41 -- accel/accel.sh@20 -- # IFS=: 00:07:12.305 11:30:41 -- accel/accel.sh@20 -- # read -r var val 00:07:12.305 11:30:41 -- accel/accel.sh@21 -- # val= 00:07:12.305 11:30:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.305 11:30:41 -- accel/accel.sh@20 -- # IFS=: 00:07:12.305 11:30:41 -- accel/accel.sh@20 -- # read -r var val 00:07:12.305 11:30:41 -- accel/accel.sh@21 -- # val=software 00:07:12.305 11:30:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.305 11:30:41 -- accel/accel.sh@23 -- # accel_module=software 00:07:12.305 11:30:41 -- accel/accel.sh@20 -- # IFS=: 00:07:12.305 11:30:41 -- accel/accel.sh@20 -- # read -r var val 00:07:12.305 11:30:41 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:12.305 11:30:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.305 11:30:41 -- accel/accel.sh@20 -- # IFS=: 00:07:12.305 11:30:41 -- accel/accel.sh@20 -- # read -r var val 00:07:12.305 11:30:41 -- accel/accel.sh@21 -- # val=32 00:07:12.305 11:30:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.305 11:30:41 -- accel/accel.sh@20 -- # IFS=: 00:07:12.305 11:30:41 -- accel/accel.sh@20 -- # read -r var val 00:07:12.305 11:30:41 -- accel/accel.sh@21 -- # val=32 00:07:12.305 11:30:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.305 11:30:41 -- accel/accel.sh@20 -- # IFS=: 00:07:12.305 11:30:41 -- accel/accel.sh@20 -- # read -r var val 00:07:12.305 11:30:41 -- accel/accel.sh@21 -- # val=1 00:07:12.305 11:30:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.305 11:30:41 -- accel/accel.sh@20 -- # IFS=: 00:07:12.305 11:30:41 -- accel/accel.sh@20 -- # read -r var val 00:07:12.305 11:30:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:12.305 11:30:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.305 11:30:41 -- accel/accel.sh@20 -- # IFS=: 00:07:12.305 11:30:41 -- accel/accel.sh@20 -- # read -r var val 00:07:12.305 11:30:41 -- accel/accel.sh@21 -- # val=No 00:07:12.305 11:30:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.305 11:30:41 -- accel/accel.sh@20 -- # IFS=: 00:07:12.305 11:30:41 -- accel/accel.sh@20 -- # read -r var val 00:07:12.305 11:30:41 -- accel/accel.sh@21 -- # val= 00:07:12.305 11:30:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.305 11:30:41 -- accel/accel.sh@20 -- # IFS=: 00:07:12.305 11:30:41 -- accel/accel.sh@20 -- # read -r var val 00:07:12.305 11:30:41 -- accel/accel.sh@21 -- # val= 00:07:12.305 11:30:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.305 11:30:41 -- accel/accel.sh@20 -- # IFS=: 00:07:12.305 11:30:41 -- accel/accel.sh@20 -- # read -r var val 00:07:13.235 11:30:42 -- accel/accel.sh@21 -- # val= 00:07:13.235 11:30:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.235 11:30:42 -- accel/accel.sh@20 -- # IFS=: 00:07:13.235 11:30:42 -- accel/accel.sh@20 -- # read -r var val 00:07:13.235 11:30:42 -- accel/accel.sh@21 -- # val= 00:07:13.235 11:30:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.235 11:30:42 -- accel/accel.sh@20 -- # IFS=: 00:07:13.235 11:30:42 -- accel/accel.sh@20 -- # read -r var val 00:07:13.235 11:30:42 -- accel/accel.sh@21 -- # val= 00:07:13.235 11:30:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.235 11:30:42 -- accel/accel.sh@20 -- # IFS=: 00:07:13.235 
11:30:42 -- accel/accel.sh@20 -- # read -r var val 00:07:13.235 11:30:42 -- accel/accel.sh@21 -- # val= 00:07:13.235 11:30:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.235 11:30:42 -- accel/accel.sh@20 -- # IFS=: 00:07:13.235 11:30:42 -- accel/accel.sh@20 -- # read -r var val 00:07:13.235 11:30:42 -- accel/accel.sh@21 -- # val= 00:07:13.236 11:30:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.236 11:30:42 -- accel/accel.sh@20 -- # IFS=: 00:07:13.236 11:30:42 -- accel/accel.sh@20 -- # read -r var val 00:07:13.236 11:30:42 -- accel/accel.sh@21 -- # val= 00:07:13.236 11:30:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.236 11:30:42 -- accel/accel.sh@20 -- # IFS=: 00:07:13.236 11:30:42 -- accel/accel.sh@20 -- # read -r var val 00:07:13.236 11:30:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:13.236 11:30:42 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:13.236 11:30:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.236 00:07:13.236 real 0m2.628s 00:07:13.236 user 0m2.363s 00:07:13.236 sys 0m0.276s 00:07:13.236 11:30:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.236 11:30:42 -- common/autotest_common.sh@10 -- # set +x 00:07:13.236 ************************************ 00:07:13.236 END TEST accel_comp 00:07:13.236 ************************************ 00:07:13.492 11:30:42 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:13.492 11:30:42 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:13.493 11:30:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:13.493 11:30:42 -- common/autotest_common.sh@10 -- # set +x 00:07:13.493 ************************************ 00:07:13.493 START TEST accel_decomp 00:07:13.493 ************************************ 00:07:13.493 11:30:42 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:13.493 11:30:42 -- accel/accel.sh@16 -- # local accel_opc 00:07:13.493 11:30:42 -- accel/accel.sh@17 -- # local accel_module 00:07:13.493 11:30:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:13.493 11:30:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:13.493 11:30:42 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.493 11:30:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.493 11:30:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.493 11:30:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.493 11:30:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.493 11:30:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.493 11:30:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.493 11:30:42 -- accel/accel.sh@42 -- # jq -r . 00:07:13.493 [2024-07-21 11:30:42.705003] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
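Unlike the buffer-only workloads earlier, the compress and decompress runs stream a real input file through the engine, which is why the traced command lines add -l .../spdk/test/accel/bib. A standalone reproduction would look roughly like this (a sketch run from the spdk checkout this job uses; the software module and default config are assumed, so the harness's -c /dev/fd/62 plumbing is omitted):

    # compress the sample input for 1 second (Verify: No)
    ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib
    # decompress it with the verify pass enabled, as the test above does
    ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y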
00:07:13.493 [2024-07-21 11:30:42.705073] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2191546 ] 00:07:13.493 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.493 [2024-07-21 11:30:42.792860] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.493 [2024-07-21 11:30:42.826801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.859 11:30:43 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:14.859 00:07:14.859 SPDK Configuration: 00:07:14.859 Core mask: 0x1 00:07:14.859 00:07:14.859 Accel Perf Configuration: 00:07:14.859 Workload Type: decompress 00:07:14.859 Transfer size: 4096 bytes 00:07:14.859 Vector count 1 00:07:14.859 Module: software 00:07:14.859 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:14.859 Queue depth: 32 00:07:14.859 Allocate depth: 32 00:07:14.859 # threads/core: 1 00:07:14.859 Run time: 1 seconds 00:07:14.859 Verify: Yes 00:07:14.859 00:07:14.859 Running for 1 seconds... 00:07:14.859 00:07:14.859 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:14.859 ------------------------------------------------------------------------------------ 00:07:14.859 0,0 87168/s 160 MiB/s 0 0 00:07:14.859 ==================================================================================== 00:07:14.860 Total 87168/s 340 MiB/s 0 0' 00:07:14.860 11:30:43 -- accel/accel.sh@20 -- # IFS=: 00:07:14.860 11:30:43 -- accel/accel.sh@20 -- # read -r var val 00:07:14.860 11:30:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:14.860 11:30:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:14.860 11:30:43 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.860 11:30:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.860 11:30:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.860 11:30:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.860 11:30:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.860 11:30:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.860 11:30:43 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.860 11:30:44 -- accel/accel.sh@42 -- # jq -r . 00:07:14.860 [2024-07-21 11:30:44.021336] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
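The decompress Total row obeys the same arithmetic as the rest, since this run also reports 4096-byte transfers, and the -y flag accounts for the "Verify: Yes" line in its configuration:

    echo $(( 87168 * 4096 / 1024 / 1024 ))   # decompress: 340 MiB/s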
00:07:14.860 [2024-07-21 11:30:44.021406] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2191820 ] 00:07:14.860 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.860 [2024-07-21 11:30:44.105024] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.860 [2024-07-21 11:30:44.139344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.860 11:30:44 -- accel/accel.sh@21 -- # val= 00:07:14.860 11:30:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.860 11:30:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.860 11:30:44 -- accel/accel.sh@20 -- # read -r var val 00:07:14.860 11:30:44 -- accel/accel.sh@21 -- # val= 00:07:14.860 11:30:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.860 11:30:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.860 11:30:44 -- accel/accel.sh@20 -- # read -r var val 00:07:14.860 11:30:44 -- accel/accel.sh@21 -- # val= 00:07:14.860 11:30:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.860 11:30:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.860 11:30:44 -- accel/accel.sh@20 -- # read -r var val 00:07:14.860 11:30:44 -- accel/accel.sh@21 -- # val=0x1 00:07:14.860 11:30:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.860 11:30:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.860 11:30:44 -- accel/accel.sh@20 -- # read -r var val 00:07:14.860 11:30:44 -- accel/accel.sh@21 -- # val= 00:07:14.860 11:30:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.860 11:30:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.860 11:30:44 -- accel/accel.sh@20 -- # read -r var val 00:07:14.860 11:30:44 -- accel/accel.sh@21 -- # val= 00:07:14.860 11:30:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.860 11:30:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.860 11:30:44 -- accel/accel.sh@20 -- # read -r var val 00:07:14.860 11:30:44 -- accel/accel.sh@21 -- # val=decompress 00:07:14.860 11:30:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.860 11:30:44 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:14.860 11:30:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.860 11:30:44 -- accel/accel.sh@20 -- # read -r var val 00:07:14.860 11:30:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:14.860 11:30:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.860 11:30:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.860 11:30:44 -- accel/accel.sh@20 -- # read -r var val 00:07:14.860 11:30:44 -- accel/accel.sh@21 -- # val= 00:07:14.860 11:30:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.860 11:30:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.860 11:30:44 -- accel/accel.sh@20 -- # read -r var val 00:07:14.860 11:30:44 -- accel/accel.sh@21 -- # val=software 00:07:14.860 11:30:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.860 11:30:44 -- accel/accel.sh@23 -- # accel_module=software 00:07:14.860 11:30:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.860 11:30:44 -- accel/accel.sh@20 -- # read -r var val 00:07:14.860 11:30:44 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:14.860 11:30:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.860 11:30:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.860 11:30:44 -- accel/accel.sh@20 -- # read -r var val 00:07:14.860 11:30:44 -- accel/accel.sh@21 -- # val=32 00:07:14.860 11:30:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.860 11:30:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.860 11:30:44 -- 
accel/accel.sh@20 -- # read -r var val 00:07:14.860 11:30:44 -- accel/accel.sh@21 -- # val=32 00:07:14.860 11:30:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.860 11:30:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.860 11:30:44 -- accel/accel.sh@20 -- # read -r var val 00:07:14.860 11:30:44 -- accel/accel.sh@21 -- # val=1 00:07:14.860 11:30:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.860 11:30:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.860 11:30:44 -- accel/accel.sh@20 -- # read -r var val 00:07:14.860 11:30:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:14.860 11:30:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.860 11:30:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.860 11:30:44 -- accel/accel.sh@20 -- # read -r var val 00:07:14.860 11:30:44 -- accel/accel.sh@21 -- # val=Yes 00:07:14.860 11:30:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.860 11:30:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.860 11:30:44 -- accel/accel.sh@20 -- # read -r var val 00:07:14.860 11:30:44 -- accel/accel.sh@21 -- # val= 00:07:14.860 11:30:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.860 11:30:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.860 11:30:44 -- accel/accel.sh@20 -- # read -r var val 00:07:14.860 11:30:44 -- accel/accel.sh@21 -- # val= 00:07:14.860 11:30:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.860 11:30:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.860 11:30:44 -- accel/accel.sh@20 -- # read -r var val 00:07:16.230 11:30:45 -- accel/accel.sh@21 -- # val= 00:07:16.230 11:30:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.230 11:30:45 -- accel/accel.sh@20 -- # IFS=: 00:07:16.230 11:30:45 -- accel/accel.sh@20 -- # read -r var val 00:07:16.230 11:30:45 -- accel/accel.sh@21 -- # val= 00:07:16.230 11:30:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.230 11:30:45 -- accel/accel.sh@20 -- # IFS=: 00:07:16.230 11:30:45 -- accel/accel.sh@20 -- # read -r var val 00:07:16.230 11:30:45 -- accel/accel.sh@21 -- # val= 00:07:16.230 11:30:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.230 11:30:45 -- accel/accel.sh@20 -- # IFS=: 00:07:16.230 11:30:45 -- accel/accel.sh@20 -- # read -r var val 00:07:16.230 11:30:45 -- accel/accel.sh@21 -- # val= 00:07:16.230 11:30:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.230 11:30:45 -- accel/accel.sh@20 -- # IFS=: 00:07:16.230 11:30:45 -- accel/accel.sh@20 -- # read -r var val 00:07:16.230 11:30:45 -- accel/accel.sh@21 -- # val= 00:07:16.230 11:30:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.230 11:30:45 -- accel/accel.sh@20 -- # IFS=: 00:07:16.230 11:30:45 -- accel/accel.sh@20 -- # read -r var val 00:07:16.230 11:30:45 -- accel/accel.sh@21 -- # val= 00:07:16.230 11:30:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.230 11:30:45 -- accel/accel.sh@20 -- # IFS=: 00:07:16.230 11:30:45 -- accel/accel.sh@20 -- # read -r var val 00:07:16.230 11:30:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:16.230 11:30:45 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:16.230 11:30:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.230 00:07:16.230 real 0m2.638s 00:07:16.230 user 0m2.365s 00:07:16.230 sys 0m0.283s 00:07:16.230 11:30:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.230 11:30:45 -- common/autotest_common.sh@10 -- # set +x 00:07:16.230 ************************************ 00:07:16.230 END TEST accel_decomp 00:07:16.230 ************************************ 00:07:16.230 11:30:45 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:16.230 11:30:45 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:16.230 11:30:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:16.230 11:30:45 -- common/autotest_common.sh@10 -- # set +x 00:07:16.230 ************************************ 00:07:16.230 START TEST accel_decmop_full 00:07:16.230 ************************************ 00:07:16.230 11:30:45 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:16.230 11:30:45 -- accel/accel.sh@16 -- # local accel_opc 00:07:16.230 11:30:45 -- accel/accel.sh@17 -- # local accel_module 00:07:16.230 11:30:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:16.230 11:30:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:16.230 11:30:45 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.230 11:30:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.230 11:30:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.230 11:30:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.230 11:30:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.230 11:30:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.230 11:30:45 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.230 11:30:45 -- accel/accel.sh@42 -- # jq -r . 00:07:16.230 [2024-07-21 11:30:45.381337] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:16.230 [2024-07-21 11:30:45.381422] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2192101 ] 00:07:16.230 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.230 [2024-07-21 11:30:45.463908] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.230 [2024-07-21 11:30:45.498851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.604 11:30:46 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:17.604 00:07:17.604 SPDK Configuration: 00:07:17.604 Core mask: 0x1 00:07:17.604 00:07:17.604 Accel Perf Configuration: 00:07:17.604 Workload Type: decompress 00:07:17.604 Transfer size: 111250 bytes 00:07:17.604 Vector count 1 00:07:17.604 Module: software 00:07:17.604 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:17.604 Queue depth: 32 00:07:17.604 Allocate depth: 32 00:07:17.604 # threads/core: 1 00:07:17.604 Run time: 1 seconds 00:07:17.604 Verify: Yes 00:07:17.604 00:07:17.604 Running for 1 seconds... 
00:07:17.604 00:07:17.604 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:17.604 ------------------------------------------------------------------------------------ 00:07:17.604 0,0 5600/s 231 MiB/s 0 0 00:07:17.604 ==================================================================================== 00:07:17.604 Total 5600/s 594 MiB/s 0 0' 00:07:17.604 11:30:46 -- accel/accel.sh@20 -- # IFS=: 00:07:17.604 11:30:46 -- accel/accel.sh@20 -- # read -r var val 00:07:17.604 11:30:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:17.604 11:30:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.604 11:30:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.604 11:30:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:17.604 11:30:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.604 11:30:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.604 11:30:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.604 11:30:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.604 11:30:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.604 11:30:46 -- accel/accel.sh@42 -- # jq -r . 00:07:17.604 [2024-07-21 11:30:46.701402] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:17.604 [2024-07-21 11:30:46.701471] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2192383 ] 00:07:17.604 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.604 [2024-07-21 11:30:46.783959] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.604 [2024-07-21 11:30:46.818265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.604 11:30:46 -- accel/accel.sh@21 -- # val= 00:07:17.604 11:30:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.604 11:30:46 -- accel/accel.sh@20 -- # IFS=: 00:07:17.604 11:30:46 -- accel/accel.sh@20 -- # read -r var val 00:07:17.604 11:30:46 -- accel/accel.sh@21 -- # val= 00:07:17.604 11:30:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.604 11:30:46 -- accel/accel.sh@20 -- # IFS=: 00:07:17.604 11:30:46 -- accel/accel.sh@20 -- # read -r var val 00:07:17.604 11:30:46 -- accel/accel.sh@21 -- # val= 00:07:17.604 11:30:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.604 11:30:46 -- accel/accel.sh@20 -- # IFS=: 00:07:17.604 11:30:46 -- accel/accel.sh@20 -- # read -r var val 00:07:17.604 11:30:46 -- accel/accel.sh@21 -- # val=0x1 00:07:17.604 11:30:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.604 11:30:46 -- accel/accel.sh@20 -- # IFS=: 00:07:17.604 11:30:46 -- accel/accel.sh@20 -- # read -r var val 00:07:17.604 11:30:46 -- accel/accel.sh@21 -- # val= 00:07:17.604 11:30:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.604 11:30:46 -- accel/accel.sh@20 -- # IFS=: 00:07:17.604 11:30:46 -- accel/accel.sh@20 -- # read -r var val 00:07:17.604 11:30:46 -- accel/accel.sh@21 -- # val= 00:07:17.604 11:30:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.604 11:30:46 -- accel/accel.sh@20 -- # IFS=: 00:07:17.604 11:30:46 -- accel/accel.sh@20 -- # read -r var val 00:07:17.604 11:30:46 -- accel/accel.sh@21 -- # val=decompress 00:07:17.604 11:30:46 -- accel/accel.sh@22 -- # case "$var" in 
00:07:17.604 11:30:46 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:17.604 11:30:46 -- accel/accel.sh@20 -- # IFS=: 00:07:17.604 11:30:46 -- accel/accel.sh@20 -- # read -r var val 00:07:17.604 11:30:46 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:17.604 11:30:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.604 11:30:46 -- accel/accel.sh@20 -- # IFS=: 00:07:17.604 11:30:46 -- accel/accel.sh@20 -- # read -r var val 00:07:17.604 11:30:46 -- accel/accel.sh@21 -- # val= 00:07:17.604 11:30:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.604 11:30:46 -- accel/accel.sh@20 -- # IFS=: 00:07:17.604 11:30:46 -- accel/accel.sh@20 -- # read -r var val 00:07:17.604 11:30:46 -- accel/accel.sh@21 -- # val=software 00:07:17.604 11:30:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.604 11:30:46 -- accel/accel.sh@23 -- # accel_module=software 00:07:17.604 11:30:46 -- accel/accel.sh@20 -- # IFS=: 00:07:17.604 11:30:46 -- accel/accel.sh@20 -- # read -r var val 00:07:17.604 11:30:46 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:17.604 11:30:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.604 11:30:46 -- accel/accel.sh@20 -- # IFS=: 00:07:17.604 11:30:46 -- accel/accel.sh@20 -- # read -r var val 00:07:17.604 11:30:46 -- accel/accel.sh@21 -- # val=32 00:07:17.604 11:30:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.604 11:30:46 -- accel/accel.sh@20 -- # IFS=: 00:07:17.604 11:30:46 -- accel/accel.sh@20 -- # read -r var val 00:07:17.604 11:30:46 -- accel/accel.sh@21 -- # val=32 00:07:17.604 11:30:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.604 11:30:46 -- accel/accel.sh@20 -- # IFS=: 00:07:17.604 11:30:46 -- accel/accel.sh@20 -- # read -r var val 00:07:17.604 11:30:46 -- accel/accel.sh@21 -- # val=1 00:07:17.604 11:30:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.604 11:30:46 -- accel/accel.sh@20 -- # IFS=: 00:07:17.604 11:30:46 -- accel/accel.sh@20 -- # read -r var val 00:07:17.604 11:30:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:17.604 11:30:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.604 11:30:46 -- accel/accel.sh@20 -- # IFS=: 00:07:17.604 11:30:46 -- accel/accel.sh@20 -- # read -r var val 00:07:17.604 11:30:46 -- accel/accel.sh@21 -- # val=Yes 00:07:17.604 11:30:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.604 11:30:46 -- accel/accel.sh@20 -- # IFS=: 00:07:17.604 11:30:46 -- accel/accel.sh@20 -- # read -r var val 00:07:17.604 11:30:46 -- accel/accel.sh@21 -- # val= 00:07:17.604 11:30:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.604 11:30:46 -- accel/accel.sh@20 -- # IFS=: 00:07:17.604 11:30:46 -- accel/accel.sh@20 -- # read -r var val 00:07:17.604 11:30:46 -- accel/accel.sh@21 -- # val= 00:07:17.604 11:30:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.604 11:30:46 -- accel/accel.sh@20 -- # IFS=: 00:07:17.604 11:30:46 -- accel/accel.sh@20 -- # read -r var val 00:07:18.980 11:30:47 -- accel/accel.sh@21 -- # val= 00:07:18.980 11:30:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.980 11:30:47 -- accel/accel.sh@20 -- # IFS=: 00:07:18.980 11:30:47 -- accel/accel.sh@20 -- # read -r var val 00:07:18.980 11:30:47 -- accel/accel.sh@21 -- # val= 00:07:18.980 11:30:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.980 11:30:47 -- accel/accel.sh@20 -- # IFS=: 00:07:18.980 11:30:47 -- accel/accel.sh@20 -- # read -r var val 00:07:18.980 11:30:47 -- accel/accel.sh@21 -- # val= 00:07:18.980 11:30:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.980 11:30:47 -- 
accel/accel.sh@20 -- # IFS=: 00:07:18.980 11:30:47 -- accel/accel.sh@20 -- # read -r var val 00:07:18.980 11:30:47 -- accel/accel.sh@21 -- # val= 00:07:18.980 11:30:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.980 11:30:47 -- accel/accel.sh@20 -- # IFS=: 00:07:18.980 11:30:47 -- accel/accel.sh@20 -- # read -r var val 00:07:18.980 11:30:47 -- accel/accel.sh@21 -- # val= 00:07:18.980 11:30:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.980 11:30:47 -- accel/accel.sh@20 -- # IFS=: 00:07:18.980 11:30:47 -- accel/accel.sh@20 -- # read -r var val 00:07:18.980 11:30:47 -- accel/accel.sh@21 -- # val= 00:07:18.980 11:30:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.980 11:30:48 -- accel/accel.sh@20 -- # IFS=: 00:07:18.980 11:30:48 -- accel/accel.sh@20 -- # read -r var val 00:07:18.980 11:30:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:18.980 11:30:48 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:18.980 11:30:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.980 00:07:18.980 real 0m2.650s 00:07:18.980 user 0m2.387s 00:07:18.980 sys 0m0.271s 00:07:18.980 11:30:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.980 11:30:48 -- common/autotest_common.sh@10 -- # set +x 00:07:18.980 ************************************ 00:07:18.980 END TEST accel_decmop_full 00:07:18.980 ************************************ 00:07:18.980 11:30:48 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:18.980 11:30:48 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:18.980 11:30:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:18.980 11:30:48 -- common/autotest_common.sh@10 -- # set +x 00:07:18.980 ************************************ 00:07:18.980 START TEST accel_decomp_mcore 00:07:18.980 ************************************ 00:07:18.980 11:30:48 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:18.980 11:30:48 -- accel/accel.sh@16 -- # local accel_opc 00:07:18.980 11:30:48 -- accel/accel.sh@17 -- # local accel_module 00:07:18.980 11:30:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:18.980 11:30:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:18.980 11:30:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.980 11:30:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.980 11:30:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.980 11:30:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.980 11:30:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.980 11:30:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.980 11:30:48 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.980 11:30:48 -- accel/accel.sh@42 -- # jq -r . 00:07:18.980 [2024-07-21 11:30:48.073848] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
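The START TEST / END TEST banners and the real/user/sys triple around each case come from the run_test helper in test/common/autotest_common.sh. A simplified sketch of that pattern, with the real helper's xtrace toggling and failure bookkeeping left out:

    run_test() {
      local name=$1; shift
      echo '************************************'
      echo "START TEST $name"
      echo '************************************'
      time "$@"
      local rc=$?
      echo '************************************'
      echo "END TEST $name"
      echo '************************************'
      return "$rc"
    }
    run_test demo_sleep sleep 1   # prints the banners and a real/user/sys block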
00:07:18.980 [2024-07-21 11:30:48.073938] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2192672 ] 00:07:18.980 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.980 [2024-07-21 11:30:48.158317] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:18.980 [2024-07-21 11:30:48.195852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.980 [2024-07-21 11:30:48.195949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.980 [2024-07-21 11:30:48.196020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.980 [2024-07-21 11:30:48.196022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.355 11:30:49 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:20.355 00:07:20.355 SPDK Configuration: 00:07:20.355 Core mask: 0xf 00:07:20.355 00:07:20.355 Accel Perf Configuration: 00:07:20.355 Workload Type: decompress 00:07:20.355 Transfer size: 4096 bytes 00:07:20.355 Vector count 1 00:07:20.355 Module: software 00:07:20.355 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:20.355 Queue depth: 32 00:07:20.355 Allocate depth: 32 00:07:20.355 # threads/core: 1 00:07:20.355 Run time: 1 seconds 00:07:20.355 Verify: Yes 00:07:20.355 00:07:20.355 Running for 1 seconds... 00:07:20.355 00:07:20.355 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:20.355 ------------------------------------------------------------------------------------ 00:07:20.355 0,0 70464/s 129 MiB/s 0 0 00:07:20.355 3,0 74432/s 137 MiB/s 0 0 00:07:20.355 2,0 73728/s 135 MiB/s 0 0 00:07:20.355 1,0 73760/s 135 MiB/s 0 0 00:07:20.355 ==================================================================================== 00:07:20.355 Total 292384/s 1142 MiB/s 0 0' 00:07:20.355 11:30:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:20.355 11:30:49 -- accel/accel.sh@20 -- # IFS=: 00:07:20.355 11:30:49 -- accel/accel.sh@20 -- # read -r var val 00:07:20.355 11:30:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:20.355 11:30:49 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.355 11:30:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.355 11:30:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.355 11:30:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.355 11:30:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.355 11:30:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.355 11:30:49 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.355 11:30:49 -- accel/accel.sh@42 -- # jq -r . 00:07:20.355 [2024-07-21 11:30:49.383879] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
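In the 0xf report above, the Total row is simply the per-core rows summed: 70464 + 74432 + 73728 + 73760 = 292384 transfers/s. A small awk sketch that recomputes the sum from a saved copy of the report (the accel_report.txt filename is an assumption):

    awk '/^[0-9]+,[0-9]+ / { sub(/\/s/, "", $2); total += $2 }
         END { printf "recomputed total: %d transfers/s\n", total }' accel_report.txt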
00:07:20.355 [2024-07-21 11:30:49.383934] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2192913 ] 00:07:20.355 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.355 [2024-07-21 11:30:49.460775] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:20.355 [2024-07-21 11:30:49.498442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.355 [2024-07-21 11:30:49.498536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.355 [2024-07-21 11:30:49.498634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:20.355 [2024-07-21 11:30:49.498641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.355 11:30:49 -- accel/accel.sh@21 -- # val= 00:07:20.355 11:30:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.355 11:30:49 -- accel/accel.sh@20 -- # IFS=: 00:07:20.355 11:30:49 -- accel/accel.sh@20 -- # read -r var val 00:07:20.355 11:30:49 -- accel/accel.sh@21 -- # val= 00:07:20.355 11:30:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.355 11:30:49 -- accel/accel.sh@20 -- # IFS=: 00:07:20.355 11:30:49 -- accel/accel.sh@20 -- # read -r var val 00:07:20.355 11:30:49 -- accel/accel.sh@21 -- # val= 00:07:20.355 11:30:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.355 11:30:49 -- accel/accel.sh@20 -- # IFS=: 00:07:20.355 11:30:49 -- accel/accel.sh@20 -- # read -r var val 00:07:20.355 11:30:49 -- accel/accel.sh@21 -- # val=0xf 00:07:20.355 11:30:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.355 11:30:49 -- accel/accel.sh@20 -- # IFS=: 00:07:20.355 11:30:49 -- accel/accel.sh@20 -- # read -r var val 00:07:20.355 11:30:49 -- accel/accel.sh@21 -- # val= 00:07:20.355 11:30:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.355 11:30:49 -- accel/accel.sh@20 -- # IFS=: 00:07:20.355 11:30:49 -- accel/accel.sh@20 -- # read -r var val 00:07:20.355 11:30:49 -- accel/accel.sh@21 -- # val= 00:07:20.355 11:30:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.355 11:30:49 -- accel/accel.sh@20 -- # IFS=: 00:07:20.355 11:30:49 -- accel/accel.sh@20 -- # read -r var val 00:07:20.355 11:30:49 -- accel/accel.sh@21 -- # val=decompress 00:07:20.355 11:30:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.355 11:30:49 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:20.355 11:30:49 -- accel/accel.sh@20 -- # IFS=: 00:07:20.355 11:30:49 -- accel/accel.sh@20 -- # read -r var val 00:07:20.355 11:30:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:20.355 11:30:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.355 11:30:49 -- accel/accel.sh@20 -- # IFS=: 00:07:20.355 11:30:49 -- accel/accel.sh@20 -- # read -r var val 00:07:20.355 11:30:49 -- accel/accel.sh@21 -- # val= 00:07:20.355 11:30:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.355 11:30:49 -- accel/accel.sh@20 -- # IFS=: 00:07:20.355 11:30:49 -- accel/accel.sh@20 -- # read -r var val 00:07:20.355 11:30:49 -- accel/accel.sh@21 -- # val=software 00:07:20.355 11:30:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.355 11:30:49 -- accel/accel.sh@23 -- # accel_module=software 00:07:20.355 11:30:49 -- accel/accel.sh@20 -- # IFS=: 00:07:20.355 11:30:49 -- accel/accel.sh@20 -- # read -r var val 00:07:20.355 11:30:49 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:20.355 11:30:49 -- accel/accel.sh@22 -- # case "$var" 
in 00:07:20.355 11:30:49 -- accel/accel.sh@20 -- # IFS=: 00:07:20.355 11:30:49 -- accel/accel.sh@20 -- # read -r var val 00:07:20.355 11:30:49 -- accel/accel.sh@21 -- # val=32 00:07:20.355 11:30:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.355 11:30:49 -- accel/accel.sh@20 -- # IFS=: 00:07:20.355 11:30:49 -- accel/accel.sh@20 -- # read -r var val 00:07:20.355 11:30:49 -- accel/accel.sh@21 -- # val=32 00:07:20.355 11:30:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.355 11:30:49 -- accel/accel.sh@20 -- # IFS=: 00:07:20.355 11:30:49 -- accel/accel.sh@20 -- # read -r var val 00:07:20.355 11:30:49 -- accel/accel.sh@21 -- # val=1 00:07:20.355 11:30:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.355 11:30:49 -- accel/accel.sh@20 -- # IFS=: 00:07:20.355 11:30:49 -- accel/accel.sh@20 -- # read -r var val 00:07:20.355 11:30:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:20.355 11:30:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.355 11:30:49 -- accel/accel.sh@20 -- # IFS=: 00:07:20.355 11:30:49 -- accel/accel.sh@20 -- # read -r var val 00:07:20.355 11:30:49 -- accel/accel.sh@21 -- # val=Yes 00:07:20.355 11:30:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.355 11:30:49 -- accel/accel.sh@20 -- # IFS=: 00:07:20.355 11:30:49 -- accel/accel.sh@20 -- # read -r var val 00:07:20.355 11:30:49 -- accel/accel.sh@21 -- # val= 00:07:20.355 11:30:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.355 11:30:49 -- accel/accel.sh@20 -- # IFS=: 00:07:20.355 11:30:49 -- accel/accel.sh@20 -- # read -r var val 00:07:20.355 11:30:49 -- accel/accel.sh@21 -- # val= 00:07:20.355 11:30:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.355 11:30:49 -- accel/accel.sh@20 -- # IFS=: 00:07:20.355 11:30:49 -- accel/accel.sh@20 -- # read -r var val 00:07:21.288 11:30:50 -- accel/accel.sh@21 -- # val= 00:07:21.288 11:30:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.288 11:30:50 -- accel/accel.sh@20 -- # IFS=: 00:07:21.288 11:30:50 -- accel/accel.sh@20 -- # read -r var val 00:07:21.288 11:30:50 -- accel/accel.sh@21 -- # val= 00:07:21.288 11:30:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.288 11:30:50 -- accel/accel.sh@20 -- # IFS=: 00:07:21.288 11:30:50 -- accel/accel.sh@20 -- # read -r var val 00:07:21.288 11:30:50 -- accel/accel.sh@21 -- # val= 00:07:21.288 11:30:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.288 11:30:50 -- accel/accel.sh@20 -- # IFS=: 00:07:21.288 11:30:50 -- accel/accel.sh@20 -- # read -r var val 00:07:21.288 11:30:50 -- accel/accel.sh@21 -- # val= 00:07:21.288 11:30:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.288 11:30:50 -- accel/accel.sh@20 -- # IFS=: 00:07:21.288 11:30:50 -- accel/accel.sh@20 -- # read -r var val 00:07:21.288 11:30:50 -- accel/accel.sh@21 -- # val= 00:07:21.288 11:30:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.288 11:30:50 -- accel/accel.sh@20 -- # IFS=: 00:07:21.288 11:30:50 -- accel/accel.sh@20 -- # read -r var val 00:07:21.288 11:30:50 -- accel/accel.sh@21 -- # val= 00:07:21.288 11:30:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.288 11:30:50 -- accel/accel.sh@20 -- # IFS=: 00:07:21.288 11:30:50 -- accel/accel.sh@20 -- # read -r var val 00:07:21.288 11:30:50 -- accel/accel.sh@21 -- # val= 00:07:21.288 11:30:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.288 11:30:50 -- accel/accel.sh@20 -- # IFS=: 00:07:21.288 11:30:50 -- accel/accel.sh@20 -- # read -r var val 00:07:21.288 11:30:50 -- accel/accel.sh@21 -- # val= 00:07:21.288 11:30:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.288 11:30:50 
-- accel/accel.sh@20 -- # IFS=: 00:07:21.288 11:30:50 -- accel/accel.sh@20 -- # read -r var val 00:07:21.288 11:30:50 -- accel/accel.sh@21 -- # val= 00:07:21.288 11:30:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.288 11:30:50 -- accel/accel.sh@20 -- # IFS=: 00:07:21.288 11:30:50 -- accel/accel.sh@20 -- # read -r var val 00:07:21.288 11:30:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:21.288 11:30:50 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:21.288 11:30:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.288 00:07:21.288 real 0m2.638s 00:07:21.288 user 0m9.024s 00:07:21.288 sys 0m0.283s 00:07:21.288 11:30:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.288 11:30:50 -- common/autotest_common.sh@10 -- # set +x 00:07:21.288 ************************************ 00:07:21.288 END TEST accel_decomp_mcore 00:07:21.288 ************************************ 00:07:21.546 11:30:50 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:21.546 11:30:50 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:21.546 11:30:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:21.546 11:30:50 -- common/autotest_common.sh@10 -- # set +x 00:07:21.546 ************************************ 00:07:21.546 START TEST accel_decomp_full_mcore 00:07:21.546 ************************************ 00:07:21.546 11:30:50 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:21.546 11:30:50 -- accel/accel.sh@16 -- # local accel_opc 00:07:21.546 11:30:50 -- accel/accel.sh@17 -- # local accel_module 00:07:21.546 11:30:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:21.546 11:30:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:21.546 11:30:50 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.546 11:30:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:21.546 11:30:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.546 11:30:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.546 11:30:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:21.546 11:30:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:21.546 11:30:50 -- accel/accel.sh@41 -- # local IFS=, 00:07:21.546 11:30:50 -- accel/accel.sh@42 -- # jq -r . 00:07:21.546 [2024-07-21 11:30:50.738542] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
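Both _mcore cases pass -m 0xf, selecting cores 0 through 3, which is why four "Reactor started" notices appear and the report carries one result row per core. A quick bash sketch for expanding such a mask (the loop bound of 8 is arbitrary):

    mask=0xf
    for ((i = 0; i < 8; i++)); do
      (( (mask >> i) & 1 )) && echo "core $i selected"
    done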
00:07:21.546 [2024-07-21 11:30:50.738593] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2193102 ] 00:07:21.546 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.546 [2024-07-21 11:30:50.819863] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:21.546 [2024-07-21 11:30:50.858260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.546 [2024-07-21 11:30:50.858357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.546 [2024-07-21 11:30:50.858418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:21.546 [2024-07-21 11:30:50.858420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.926 11:30:52 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:22.926 00:07:22.926 SPDK Configuration: 00:07:22.926 Core mask: 0xf 00:07:22.926 00:07:22.926 Accel Perf Configuration: 00:07:22.926 Workload Type: decompress 00:07:22.926 Transfer size: 111250 bytes 00:07:22.926 Vector count 1 00:07:22.926 Module: software 00:07:22.926 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:22.926 Queue depth: 32 00:07:22.926 Allocate depth: 32 00:07:22.926 # threads/core: 1 00:07:22.926 Run time: 1 seconds 00:07:22.926 Verify: Yes 00:07:22.926 00:07:22.926 Running for 1 seconds... 00:07:22.926 00:07:22.926 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:22.926 ------------------------------------------------------------------------------------ 00:07:22.926 0,0 5728/s 236 MiB/s 0 0 00:07:22.926 3,0 5728/s 236 MiB/s 0 0 00:07:22.926 2,0 5728/s 236 MiB/s 0 0 00:07:22.926 1,0 5728/s 236 MiB/s 0 0 00:07:22.926 ==================================================================================== 00:07:22.926 Total 22912/s 2430 MiB/s 0 0' 00:07:22.926 11:30:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.926 11:30:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.926 11:30:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:22.926 11:30:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:22.926 11:30:52 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.926 11:30:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:22.926 11:30:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.926 11:30:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.926 11:30:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:22.926 11:30:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:22.926 11:30:52 -- accel/accel.sh@41 -- # local IFS=, 00:07:22.926 11:30:52 -- accel/accel.sh@42 -- # jq -r . 00:07:22.926 [2024-07-21 11:30:52.071502] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
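The Total bandwidth figures track transfers/s multiplied by the reported transfer size; the per-core MiB/s values appear to be computed over the compressed input instead, which is why they come out lower. Checking the full-buffer multi-core total above with shell arithmetic:

    # 22912 transfers/s x 111250 bytes, expressed in MiB/s:
    echo $(( 22912 * 111250 / 1024 / 1024 ))   # prints 2430, matching the Total row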
00:07:22.926 [2024-07-21 11:30:52.071570] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2193273 ] 00:07:22.926 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.926 [2024-07-21 11:30:52.156320] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:22.926 [2024-07-21 11:30:52.193592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.926 [2024-07-21 11:30:52.193692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.926 [2024-07-21 11:30:52.193715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.926 [2024-07-21 11:30:52.193717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.926 11:30:52 -- accel/accel.sh@21 -- # val= 00:07:22.926 11:30:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.926 11:30:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.926 11:30:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.926 11:30:52 -- accel/accel.sh@21 -- # val= 00:07:22.926 11:30:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.926 11:30:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.926 11:30:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.926 11:30:52 -- accel/accel.sh@21 -- # val= 00:07:22.926 11:30:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.926 11:30:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.926 11:30:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.926 11:30:52 -- accel/accel.sh@21 -- # val=0xf 00:07:22.926 11:30:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.926 11:30:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.926 11:30:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.926 11:30:52 -- accel/accel.sh@21 -- # val= 00:07:22.926 11:30:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.926 11:30:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.926 11:30:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.926 11:30:52 -- accel/accel.sh@21 -- # val= 00:07:22.926 11:30:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.926 11:30:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.926 11:30:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.926 11:30:52 -- accel/accel.sh@21 -- # val=decompress 00:07:22.926 11:30:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.926 11:30:52 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:22.926 11:30:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.926 11:30:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.926 11:30:52 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:22.926 11:30:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.926 11:30:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.926 11:30:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.926 11:30:52 -- accel/accel.sh@21 -- # val= 00:07:22.926 11:30:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.926 11:30:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.926 11:30:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.926 11:30:52 -- accel/accel.sh@21 -- # val=software 00:07:22.926 11:30:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.926 11:30:52 -- accel/accel.sh@23 -- # accel_module=software 00:07:22.926 11:30:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.926 11:30:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.926 11:30:52 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:22.926 11:30:52 -- accel/accel.sh@22 -- # case "$var" 
in 00:07:22.926 11:30:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.926 11:30:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.926 11:30:52 -- accel/accel.sh@21 -- # val=32 00:07:22.926 11:30:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.926 11:30:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.926 11:30:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.926 11:30:52 -- accel/accel.sh@21 -- # val=32 00:07:22.926 11:30:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.926 11:30:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.926 11:30:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.926 11:30:52 -- accel/accel.sh@21 -- # val=1 00:07:22.926 11:30:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.926 11:30:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.926 11:30:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.926 11:30:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:22.926 11:30:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.926 11:30:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.926 11:30:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.926 11:30:52 -- accel/accel.sh@21 -- # val=Yes 00:07:22.926 11:30:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.926 11:30:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.926 11:30:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.926 11:30:52 -- accel/accel.sh@21 -- # val= 00:07:22.926 11:30:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.926 11:30:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.926 11:30:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.926 11:30:52 -- accel/accel.sh@21 -- # val= 00:07:22.926 11:30:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.926 11:30:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.926 11:30:52 -- accel/accel.sh@20 -- # read -r var val 00:07:24.300 11:30:53 -- accel/accel.sh@21 -- # val= 00:07:24.300 11:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.300 11:30:53 -- accel/accel.sh@20 -- # IFS=: 00:07:24.300 11:30:53 -- accel/accel.sh@20 -- # read -r var val 00:07:24.300 11:30:53 -- accel/accel.sh@21 -- # val= 00:07:24.300 11:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.300 11:30:53 -- accel/accel.sh@20 -- # IFS=: 00:07:24.300 11:30:53 -- accel/accel.sh@20 -- # read -r var val 00:07:24.300 11:30:53 -- accel/accel.sh@21 -- # val= 00:07:24.300 11:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.300 11:30:53 -- accel/accel.sh@20 -- # IFS=: 00:07:24.300 11:30:53 -- accel/accel.sh@20 -- # read -r var val 00:07:24.300 11:30:53 -- accel/accel.sh@21 -- # val= 00:07:24.300 11:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.300 11:30:53 -- accel/accel.sh@20 -- # IFS=: 00:07:24.300 11:30:53 -- accel/accel.sh@20 -- # read -r var val 00:07:24.300 11:30:53 -- accel/accel.sh@21 -- # val= 00:07:24.300 11:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.300 11:30:53 -- accel/accel.sh@20 -- # IFS=: 00:07:24.300 11:30:53 -- accel/accel.sh@20 -- # read -r var val 00:07:24.300 11:30:53 -- accel/accel.sh@21 -- # val= 00:07:24.300 11:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.300 11:30:53 -- accel/accel.sh@20 -- # IFS=: 00:07:24.300 11:30:53 -- accel/accel.sh@20 -- # read -r var val 00:07:24.300 11:30:53 -- accel/accel.sh@21 -- # val= 00:07:24.300 11:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.300 11:30:53 -- accel/accel.sh@20 -- # IFS=: 00:07:24.300 11:30:53 -- accel/accel.sh@20 -- # read -r var val 00:07:24.300 11:30:53 -- accel/accel.sh@21 -- # val= 00:07:24.300 11:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.300 11:30:53 
-- accel/accel.sh@20 -- # IFS=: 00:07:24.300 11:30:53 -- accel/accel.sh@20 -- # read -r var val 00:07:24.300 11:30:53 -- accel/accel.sh@21 -- # val= 00:07:24.300 11:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.300 11:30:53 -- accel/accel.sh@20 -- # IFS=: 00:07:24.300 11:30:53 -- accel/accel.sh@20 -- # read -r var val 00:07:24.300 11:30:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:24.300 11:30:53 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:24.300 11:30:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.300 00:07:24.300 real 0m2.659s 00:07:24.300 user 0m9.088s 00:07:24.300 sys 0m0.283s 00:07:24.300 11:30:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.300 11:30:53 -- common/autotest_common.sh@10 -- # set +x 00:07:24.300 ************************************ 00:07:24.300 END TEST accel_decomp_full_mcore 00:07:24.300 ************************************ 00:07:24.300 11:30:53 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:24.300 11:30:53 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:24.301 11:30:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:24.301 11:30:53 -- common/autotest_common.sh@10 -- # set +x 00:07:24.301 ************************************ 00:07:24.301 START TEST accel_decomp_mthread 00:07:24.301 ************************************ 00:07:24.301 11:30:53 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:24.301 11:30:53 -- accel/accel.sh@16 -- # local accel_opc 00:07:24.301 11:30:53 -- accel/accel.sh@17 -- # local accel_module 00:07:24.301 11:30:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:24.301 11:30:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:24.301 11:30:53 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.301 11:30:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.301 11:30:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.301 11:30:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.301 11:30:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.301 11:30:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.301 11:30:53 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.301 11:30:53 -- accel/accel.sh@42 -- # jq -r . 00:07:24.301 [2024-07-21 11:30:53.451244] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:24.301 [2024-07-21 11:30:53.451318] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2193544 ] 00:07:24.301 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.301 [2024-07-21 11:30:53.533344] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.301 [2024-07-21 11:30:53.568614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.676 11:30:54 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:25.676 00:07:25.676 SPDK Configuration: 00:07:25.676 Core mask: 0x1 00:07:25.676 00:07:25.676 Accel Perf Configuration: 00:07:25.676 Workload Type: decompress 00:07:25.676 Transfer size: 4096 bytes 00:07:25.676 Vector count 1 00:07:25.676 Module: software 00:07:25.676 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:25.676 Queue depth: 32 00:07:25.676 Allocate depth: 32 00:07:25.676 # threads/core: 2 00:07:25.676 Run time: 1 seconds 00:07:25.676 Verify: Yes 00:07:25.676 00:07:25.676 Running for 1 seconds... 00:07:25.676 00:07:25.676 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:25.676 ------------------------------------------------------------------------------------ 00:07:25.676 0,1 43552/s 80 MiB/s 0 0 00:07:25.676 0,0 43392/s 79 MiB/s 0 0 00:07:25.676 ==================================================================================== 00:07:25.676 Total 86944/s 339 MiB/s 0 0' 00:07:25.676 11:30:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:25.676 11:30:54 -- accel/accel.sh@20 -- # IFS=: 00:07:25.676 11:30:54 -- accel/accel.sh@20 -- # read -r var val 00:07:25.676 11:30:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:25.676 11:30:54 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.676 11:30:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.676 11:30:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.676 11:30:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.676 11:30:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.676 11:30:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.676 11:30:54 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.676 11:30:54 -- accel/accel.sh@42 -- # jq -r . 00:07:25.676 [2024-07-21 11:30:54.752319] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
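With -T 2 the report gains one row per worker thread on core 0 (rows 0,0 and 0,1), and the Total is again their sum: 43552 + 43392 = 86944 transfers/s. The same arithmetic check as before:

    echo $(( (43552 + 43392) * 4096 / 1024 / 1024 ))   # prints 339, matching the Total row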
00:07:25.676 [2024-07-21 11:30:54.752373] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2193812 ] 00:07:25.676 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.676 [2024-07-21 11:30:54.831333] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.676 [2024-07-21 11:30:54.866161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.676 11:30:54 -- accel/accel.sh@21 -- # val= 00:07:25.676 11:30:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.676 11:30:54 -- accel/accel.sh@20 -- # IFS=: 00:07:25.676 11:30:54 -- accel/accel.sh@20 -- # read -r var val 00:07:25.676 11:30:54 -- accel/accel.sh@21 -- # val= 00:07:25.676 11:30:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.676 11:30:54 -- accel/accel.sh@20 -- # IFS=: 00:07:25.676 11:30:54 -- accel/accel.sh@20 -- # read -r var val 00:07:25.676 11:30:54 -- accel/accel.sh@21 -- # val= 00:07:25.676 11:30:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.676 11:30:54 -- accel/accel.sh@20 -- # IFS=: 00:07:25.676 11:30:54 -- accel/accel.sh@20 -- # read -r var val 00:07:25.676 11:30:54 -- accel/accel.sh@21 -- # val=0x1 00:07:25.676 11:30:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.676 11:30:54 -- accel/accel.sh@20 -- # IFS=: 00:07:25.676 11:30:54 -- accel/accel.sh@20 -- # read -r var val 00:07:25.676 11:30:54 -- accel/accel.sh@21 -- # val= 00:07:25.676 11:30:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.676 11:30:54 -- accel/accel.sh@20 -- # IFS=: 00:07:25.676 11:30:54 -- accel/accel.sh@20 -- # read -r var val 00:07:25.676 11:30:54 -- accel/accel.sh@21 -- # val= 00:07:25.676 11:30:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.676 11:30:54 -- accel/accel.sh@20 -- # IFS=: 00:07:25.676 11:30:54 -- accel/accel.sh@20 -- # read -r var val 00:07:25.676 11:30:54 -- accel/accel.sh@21 -- # val=decompress 00:07:25.676 11:30:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.676 11:30:54 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:25.676 11:30:54 -- accel/accel.sh@20 -- # IFS=: 00:07:25.676 11:30:54 -- accel/accel.sh@20 -- # read -r var val 00:07:25.676 11:30:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:25.676 11:30:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.676 11:30:54 -- accel/accel.sh@20 -- # IFS=: 00:07:25.676 11:30:54 -- accel/accel.sh@20 -- # read -r var val 00:07:25.676 11:30:54 -- accel/accel.sh@21 -- # val= 00:07:25.676 11:30:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.676 11:30:54 -- accel/accel.sh@20 -- # IFS=: 00:07:25.676 11:30:54 -- accel/accel.sh@20 -- # read -r var val 00:07:25.676 11:30:54 -- accel/accel.sh@21 -- # val=software 00:07:25.676 11:30:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.676 11:30:54 -- accel/accel.sh@23 -- # accel_module=software 00:07:25.676 11:30:54 -- accel/accel.sh@20 -- # IFS=: 00:07:25.676 11:30:54 -- accel/accel.sh@20 -- # read -r var val 00:07:25.676 11:30:54 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:25.676 11:30:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.676 11:30:54 -- accel/accel.sh@20 -- # IFS=: 00:07:25.676 11:30:54 -- accel/accel.sh@20 -- # read -r var val 00:07:25.676 11:30:54 -- accel/accel.sh@21 -- # val=32 00:07:25.676 11:30:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.676 11:30:54 -- accel/accel.sh@20 -- # IFS=: 00:07:25.676 11:30:54 -- 
accel/accel.sh@20 -- # read -r var val 00:07:25.676 11:30:54 -- accel/accel.sh@21 -- # val=32 00:07:25.676 11:30:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.676 11:30:54 -- accel/accel.sh@20 -- # IFS=: 00:07:25.676 11:30:54 -- accel/accel.sh@20 -- # read -r var val 00:07:25.676 11:30:54 -- accel/accel.sh@21 -- # val=2 00:07:25.676 11:30:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.676 11:30:54 -- accel/accel.sh@20 -- # IFS=: 00:07:25.676 11:30:54 -- accel/accel.sh@20 -- # read -r var val 00:07:25.676 11:30:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:25.676 11:30:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.676 11:30:54 -- accel/accel.sh@20 -- # IFS=: 00:07:25.676 11:30:54 -- accel/accel.sh@20 -- # read -r var val 00:07:25.676 11:30:54 -- accel/accel.sh@21 -- # val=Yes 00:07:25.676 11:30:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.676 11:30:54 -- accel/accel.sh@20 -- # IFS=: 00:07:25.676 11:30:54 -- accel/accel.sh@20 -- # read -r var val 00:07:25.676 11:30:54 -- accel/accel.sh@21 -- # val= 00:07:25.676 11:30:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.676 11:30:54 -- accel/accel.sh@20 -- # IFS=: 00:07:25.676 11:30:54 -- accel/accel.sh@20 -- # read -r var val 00:07:25.676 11:30:54 -- accel/accel.sh@21 -- # val= 00:07:25.676 11:30:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.676 11:30:54 -- accel/accel.sh@20 -- # IFS=: 00:07:25.676 11:30:54 -- accel/accel.sh@20 -- # read -r var val 00:07:27.053 11:30:56 -- accel/accel.sh@21 -- # val= 00:07:27.053 11:30:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.053 11:30:56 -- accel/accel.sh@20 -- # IFS=: 00:07:27.053 11:30:56 -- accel/accel.sh@20 -- # read -r var val 00:07:27.053 11:30:56 -- accel/accel.sh@21 -- # val= 00:07:27.053 11:30:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.053 11:30:56 -- accel/accel.sh@20 -- # IFS=: 00:07:27.053 11:30:56 -- accel/accel.sh@20 -- # read -r var val 00:07:27.053 11:30:56 -- accel/accel.sh@21 -- # val= 00:07:27.053 11:30:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.053 11:30:56 -- accel/accel.sh@20 -- # IFS=: 00:07:27.053 11:30:56 -- accel/accel.sh@20 -- # read -r var val 00:07:27.053 11:30:56 -- accel/accel.sh@21 -- # val= 00:07:27.053 11:30:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.053 11:30:56 -- accel/accel.sh@20 -- # IFS=: 00:07:27.053 11:30:56 -- accel/accel.sh@20 -- # read -r var val 00:07:27.053 11:30:56 -- accel/accel.sh@21 -- # val= 00:07:27.053 11:30:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.053 11:30:56 -- accel/accel.sh@20 -- # IFS=: 00:07:27.053 11:30:56 -- accel/accel.sh@20 -- # read -r var val 00:07:27.053 11:30:56 -- accel/accel.sh@21 -- # val= 00:07:27.053 11:30:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.053 11:30:56 -- accel/accel.sh@20 -- # IFS=: 00:07:27.053 11:30:56 -- accel/accel.sh@20 -- # read -r var val 00:07:27.053 11:30:56 -- accel/accel.sh@21 -- # val= 00:07:27.053 11:30:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.053 11:30:56 -- accel/accel.sh@20 -- # IFS=: 00:07:27.053 11:30:56 -- accel/accel.sh@20 -- # read -r var val 00:07:27.053 11:30:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:27.053 11:30:56 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:27.053 11:30:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.053 00:07:27.053 real 0m2.620s 00:07:27.053 user 0m2.353s 00:07:27.053 sys 0m0.276s 00:07:27.053 11:30:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.053 11:30:56 -- common/autotest_common.sh@10 -- # set +x 
00:07:27.053 ************************************ 00:07:27.053 END TEST accel_decomp_mthread 00:07:27.053 ************************************ 00:07:27.053 11:30:56 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:27.053 11:30:56 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:27.053 11:30:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:27.053 11:30:56 -- common/autotest_common.sh@10 -- # set +x 00:07:27.053 ************************************ 00:07:27.053 START TEST accel_deomp_full_mthread 00:07:27.053 ************************************ 00:07:27.053 11:30:56 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:27.053 11:30:56 -- accel/accel.sh@16 -- # local accel_opc 00:07:27.053 11:30:56 -- accel/accel.sh@17 -- # local accel_module 00:07:27.053 11:30:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:27.053 11:30:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:27.053 11:30:56 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.053 11:30:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.053 11:30:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.053 11:30:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.053 11:30:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.053 11:30:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.053 11:30:56 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.053 11:30:56 -- accel/accel.sh@42 -- # jq -r . 00:07:27.053 [2024-07-21 11:30:56.105382] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:27.053 [2024-07-21 11:30:56.105445] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2194101 ] 00:07:27.053 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.053 [2024-07-21 11:30:56.190290] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.053 [2024-07-21 11:30:56.225388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.428 11:30:57 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:28.428 00:07:28.428 SPDK Configuration: 00:07:28.428 Core mask: 0x1 00:07:28.428 00:07:28.428 Accel Perf Configuration: 00:07:28.428 Workload Type: decompress 00:07:28.428 Transfer size: 111250 bytes 00:07:28.428 Vector count 1 00:07:28.428 Module: software 00:07:28.428 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:28.428 Queue depth: 32 00:07:28.428 Allocate depth: 32 00:07:28.428 # threads/core: 2 00:07:28.428 Run time: 1 seconds 00:07:28.428 Verify: Yes 00:07:28.428 00:07:28.428 Running for 1 seconds... 
00:07:28.428 00:07:28.428 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:28.428 ------------------------------------------------------------------------------------ 00:07:28.428 0,1 2912/s 120 MiB/s 0 0 00:07:28.428 0,0 2880/s 118 MiB/s 0 0 00:07:28.428 ==================================================================================== 00:07:28.428 Total 5792/s 614 MiB/s 0 0' 00:07:28.428 11:30:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:28.428 11:30:57 -- accel/accel.sh@20 -- # IFS=: 00:07:28.428 11:30:57 -- accel/accel.sh@20 -- # read -r var val 00:07:28.429 11:30:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:28.429 11:30:57 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.429 11:30:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.429 11:30:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.429 11:30:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.429 11:30:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.429 11:30:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.429 11:30:57 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.429 11:30:57 -- accel/accel.sh@42 -- # jq -r . 00:07:28.429 [2024-07-21 11:30:57.425161] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:28.429 [2024-07-21 11:30:57.425214] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2194371 ] 00:07:28.429 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.429 [2024-07-21 11:30:57.502098] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.429 [2024-07-21 11:30:57.537193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.429 11:30:57 -- accel/accel.sh@21 -- # val= 00:07:28.429 11:30:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.429 11:30:57 -- accel/accel.sh@20 -- # IFS=: 00:07:28.429 11:30:57 -- accel/accel.sh@20 -- # read -r var val 00:07:28.429 11:30:57 -- accel/accel.sh@21 -- # val= 00:07:28.429 11:30:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.429 11:30:57 -- accel/accel.sh@20 -- # IFS=: 00:07:28.429 11:30:57 -- accel/accel.sh@20 -- # read -r var val 00:07:28.429 11:30:57 -- accel/accel.sh@21 -- # val= 00:07:28.429 11:30:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.429 11:30:57 -- accel/accel.sh@20 -- # IFS=: 00:07:28.429 11:30:57 -- accel/accel.sh@20 -- # read -r var val 00:07:28.429 11:30:57 -- accel/accel.sh@21 -- # val=0x1 00:07:28.429 11:30:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.429 11:30:57 -- accel/accel.sh@20 -- # IFS=: 00:07:28.429 11:30:57 -- accel/accel.sh@20 -- # read -r var val 00:07:28.429 11:30:57 -- accel/accel.sh@21 -- # val= 00:07:28.429 11:30:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.429 11:30:57 -- accel/accel.sh@20 -- # IFS=: 00:07:28.429 11:30:57 -- accel/accel.sh@20 -- # read -r var val 00:07:28.429 11:30:57 -- accel/accel.sh@21 -- # val= 00:07:28.429 11:30:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.429 11:30:57 -- accel/accel.sh@20 -- # IFS=: 00:07:28.429 11:30:57 -- accel/accel.sh@20 -- # read -r var val 00:07:28.429 11:30:57 -- accel/accel.sh@21 -- # val=decompress 00:07:28.429 11:30:57 -- 
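Every run above starts with build_accel_config (the accel_json_cfg=(), local IFS=, and jq -r . lines in the trace): it collects JSON fragments for hardware accel modules, joins them with commas, and feeds the result to accel_perf's -c option over /dev/fd/62. A simplified sketch of that plumbing; the exact subsystem layout shown is an approximation, and with no hardware test flags set the array stays empty, so the software module handles decompress:

    build_accel_config() {
      local cfg=()
      # cfg+=('{"method": "dsa_scan_accel_module"}')   # only when SPDK_TEST_ACCEL_DSA=1
      local IFS=,
      jq -r . <<< "{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [${cfg[*]}]}]}"
    }
    spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$spdk/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w decompress \
        -l "$spdk/test/accel/bib" -y 62< <(build_accel_config)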
accel/accel.sh@22 -- # case "$var" in 00:07:28.429 11:30:57 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:28.429 11:30:57 -- accel/accel.sh@20 -- # IFS=: 00:07:28.429 11:30:57 -- accel/accel.sh@20 -- # read -r var val 00:07:28.429 11:30:57 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:28.429 11:30:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.429 11:30:57 -- accel/accel.sh@20 -- # IFS=: 00:07:28.429 11:30:57 -- accel/accel.sh@20 -- # read -r var val 00:07:28.429 11:30:57 -- accel/accel.sh@21 -- # val= 00:07:28.429 11:30:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.429 11:30:57 -- accel/accel.sh@20 -- # IFS=: 00:07:28.429 11:30:57 -- accel/accel.sh@20 -- # read -r var val 00:07:28.429 11:30:57 -- accel/accel.sh@21 -- # val=software 00:07:28.429 11:30:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.429 11:30:57 -- accel/accel.sh@23 -- # accel_module=software 00:07:28.429 11:30:57 -- accel/accel.sh@20 -- # IFS=: 00:07:28.429 11:30:57 -- accel/accel.sh@20 -- # read -r var val 00:07:28.429 11:30:57 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:28.429 11:30:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.429 11:30:57 -- accel/accel.sh@20 -- # IFS=: 00:07:28.429 11:30:57 -- accel/accel.sh@20 -- # read -r var val 00:07:28.429 11:30:57 -- accel/accel.sh@21 -- # val=32 00:07:28.429 11:30:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.429 11:30:57 -- accel/accel.sh@20 -- # IFS=: 00:07:28.429 11:30:57 -- accel/accel.sh@20 -- # read -r var val 00:07:28.429 11:30:57 -- accel/accel.sh@21 -- # val=32 00:07:28.429 11:30:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.429 11:30:57 -- accel/accel.sh@20 -- # IFS=: 00:07:28.429 11:30:57 -- accel/accel.sh@20 -- # read -r var val 00:07:28.429 11:30:57 -- accel/accel.sh@21 -- # val=2 00:07:28.429 11:30:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.429 11:30:57 -- accel/accel.sh@20 -- # IFS=: 00:07:28.429 11:30:57 -- accel/accel.sh@20 -- # read -r var val 00:07:28.429 11:30:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:28.429 11:30:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.429 11:30:57 -- accel/accel.sh@20 -- # IFS=: 00:07:28.429 11:30:57 -- accel/accel.sh@20 -- # read -r var val 00:07:28.429 11:30:57 -- accel/accel.sh@21 -- # val=Yes 00:07:28.429 11:30:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.429 11:30:57 -- accel/accel.sh@20 -- # IFS=: 00:07:28.429 11:30:57 -- accel/accel.sh@20 -- # read -r var val 00:07:28.429 11:30:57 -- accel/accel.sh@21 -- # val= 00:07:28.429 11:30:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.429 11:30:57 -- accel/accel.sh@20 -- # IFS=: 00:07:28.429 11:30:57 -- accel/accel.sh@20 -- # read -r var val 00:07:28.429 11:30:57 -- accel/accel.sh@21 -- # val= 00:07:28.429 11:30:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.429 11:30:57 -- accel/accel.sh@20 -- # IFS=: 00:07:28.429 11:30:57 -- accel/accel.sh@20 -- # read -r var val 00:07:29.364 11:30:58 -- accel/accel.sh@21 -- # val= 00:07:29.364 11:30:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.364 11:30:58 -- accel/accel.sh@20 -- # IFS=: 00:07:29.364 11:30:58 -- accel/accel.sh@20 -- # read -r var val 00:07:29.364 11:30:58 -- accel/accel.sh@21 -- # val= 00:07:29.364 11:30:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.364 11:30:58 -- accel/accel.sh@20 -- # IFS=: 00:07:29.364 11:30:58 -- accel/accel.sh@20 -- # read -r var val 00:07:29.364 11:30:58 -- accel/accel.sh@21 -- # val= 00:07:29.364 11:30:58 -- accel/accel.sh@22 -- # case "$var" in 
00:07:29.364 11:30:58 -- accel/accel.sh@20 -- # IFS=: 00:07:29.364 11:30:58 -- accel/accel.sh@20 -- # read -r var val 00:07:29.364 11:30:58 -- accel/accel.sh@21 -- # val= 00:07:29.364 11:30:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.364 11:30:58 -- accel/accel.sh@20 -- # IFS=: 00:07:29.364 11:30:58 -- accel/accel.sh@20 -- # read -r var val 00:07:29.364 11:30:58 -- accel/accel.sh@21 -- # val= 00:07:29.364 11:30:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.364 11:30:58 -- accel/accel.sh@20 -- # IFS=: 00:07:29.364 11:30:58 -- accel/accel.sh@20 -- # read -r var val 00:07:29.364 11:30:58 -- accel/accel.sh@21 -- # val= 00:07:29.364 11:30:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.364 11:30:58 -- accel/accel.sh@20 -- # IFS=: 00:07:29.364 11:30:58 -- accel/accel.sh@20 -- # read -r var val 00:07:29.364 11:30:58 -- accel/accel.sh@21 -- # val= 00:07:29.364 11:30:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.364 11:30:58 -- accel/accel.sh@20 -- # IFS=: 00:07:29.364 11:30:58 -- accel/accel.sh@20 -- # read -r var val 00:07:29.364 11:30:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:29.364 11:30:58 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:29.364 11:30:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.364 00:07:29.364 real 0m2.645s 00:07:29.364 user 0m2.383s 00:07:29.364 sys 0m0.270s 00:07:29.364 11:30:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.364 11:30:58 -- common/autotest_common.sh@10 -- # set +x 00:07:29.364 ************************************ 00:07:29.364 END TEST accel_deomp_full_mthread 00:07:29.364 ************************************ 00:07:29.364 11:30:58 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:29.364 11:30:58 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:29.364 11:30:58 -- accel/accel.sh@129 -- # build_accel_config 00:07:29.364 11:30:58 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:29.364 11:30:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:29.365 11:30:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:29.365 11:30:58 -- common/autotest_common.sh@10 -- # set +x 00:07:29.365 11:30:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.365 11:30:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.365 11:30:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:29.365 11:30:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:29.365 11:30:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:29.365 11:30:58 -- accel/accel.sh@42 -- # jq -r . 00:07:29.365 ************************************ 00:07:29.365 START TEST accel_dif_functional_tests 00:07:29.365 ************************************ 00:07:29.365 11:30:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:29.690 [2024-07-21 11:30:58.820691] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
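The pass that just ended is the multithreaded full-file decompress benchmark: accel.sh launches the accel_perf example with a JSON accel config piped in on /dev/fd/62, and the results table reports the two worker threads on core 0 as separate rows. A minimal sketch of rerunning it by hand, assuming an SPDK tree built with its examples and with all flags copied from the trace:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    # -t 1: run for 1 second   -w decompress: decompress workload
    # -l: compressed input file   -y: verify the output   -T 2: two worker threads
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK/test/accel/bib" -y -o 0 -T 2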
00:07:29.690 [2024-07-21 11:30:58.820753] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2194655 ] 00:07:29.690 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.690 [2024-07-21 11:30:58.902489] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:29.690 [2024-07-21 11:30:58.938436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.690 [2024-07-21 11:30:58.938527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.690 [2024-07-21 11:30:58.938529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.690 00:07:29.690 00:07:29.690 CUnit - A unit testing framework for C - Version 2.1-3 00:07:29.690 http://cunit.sourceforge.net/ 00:07:29.690 00:07:29.690 00:07:29.690 Suite: accel_dif 00:07:29.690 Test: verify: DIF generated, GUARD check ...passed 00:07:29.690 Test: verify: DIF generated, APPTAG check ...passed 00:07:29.690 Test: verify: DIF generated, REFTAG check ...passed 00:07:29.690 Test: verify: DIF not generated, GUARD check ...[2024-07-21 11:30:59.002047] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:29.690 [2024-07-21 11:30:59.002097] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:29.690 passed 00:07:29.690 Test: verify: DIF not generated, APPTAG check ...[2024-07-21 11:30:59.002128] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:29.690 [2024-07-21 11:30:59.002150] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:29.690 passed 00:07:29.691 Test: verify: DIF not generated, REFTAG check ...[2024-07-21 11:30:59.002168] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:29.691 [2024-07-21 11:30:59.002189] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:29.691 passed 00:07:29.691 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:29.691 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-21 11:30:59.002233] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:29.691 passed 00:07:29.691 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:29.691 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:29.691 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:29.691 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-21 11:30:59.002347] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:29.691 passed 00:07:29.691 Test: generate copy: DIF generated, GUARD check ...passed 00:07:29.691 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:29.691 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:29.691 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:29.691 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:29.691 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:29.691 Test: generate copy: iovecs-len validate ...[2024-07-21 11:30:59.002512] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:29.691 passed 00:07:29.691 Test: generate copy: buffer alignment validate ...passed 00:07:29.691 00:07:29.691 Run Summary: Type Total Ran Passed Failed Inactive 00:07:29.691 suites 1 1 n/a 0 0 00:07:29.691 tests 20 20 20 0 0 00:07:29.691 asserts 204 204 204 0 n/a 00:07:29.691 00:07:29.691 Elapsed time = 0.000 seconds 00:07:29.949 00:07:29.949 real 0m0.381s 00:07:29.949 user 0m0.555s 00:07:29.949 sys 0m0.167s 00:07:29.949 11:30:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.949 11:30:59 -- common/autotest_common.sh@10 -- # set +x 00:07:29.949 ************************************ 00:07:29.949 END TEST accel_dif_functional_tests 00:07:29.949 ************************************ 00:07:29.949 00:07:29.949 real 0m56.260s 00:07:29.949 user 1m3.474s 00:07:29.949 sys 0m7.467s 00:07:29.949 11:30:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.949 11:30:59 -- common/autotest_common.sh@10 -- # set +x 00:07:29.949 ************************************ 00:07:29.949 END TEST accel 00:07:29.949 ************************************ 00:07:29.949 11:30:59 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:29.949 11:30:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:29.949 11:30:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:29.949 11:30:59 -- common/autotest_common.sh@10 -- # set +x 00:07:29.949 ************************************ 00:07:29.949 START TEST accel_rpc 00:07:29.949 ************************************ 00:07:29.949 11:30:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:29.949 * Looking for test storage... 00:07:29.949 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:07:29.949 11:30:59 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:29.949 11:30:59 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2194721 00:07:29.949 11:30:59 -- accel/accel_rpc.sh@15 -- # waitforlisten 2194721 00:07:29.949 11:30:59 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:29.949 11:30:59 -- common/autotest_common.sh@819 -- # '[' -z 2194721 ']' 00:07:29.949 11:30:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.949 11:30:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:29.949 11:30:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.950 11:30:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:29.950 11:30:59 -- common/autotest_common.sh@10 -- # set +x 00:07:30.208 [2024-07-21 11:30:59.399433] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
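The accel_dif suite above exercises T10 protection-information handling in the software accel module: each negative "verify" case corrupts one field of the 8-byte DIF trailer (a 16-bit guard CRC, a 16-bit application tag, and a 32-bit reference tag) and expects the corresponding compare to fail, while the final generate-copy cases check that mis-sized bounce iovecs are rejected. The harness simply wraps the standalone CUnit binary seen in the trace; as a sketch:

    # run the DIF CUnit suite with an accel JSON config passed on fd 62
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62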
00:07:30.208 [2024-07-21 11:30:59.399489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2194721 ] 00:07:30.208 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.208 [2024-07-21 11:30:59.485375] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.208 [2024-07-21 11:30:59.522908] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:30.208 [2024-07-21 11:30:59.523023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.141 11:31:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:31.141 11:31:00 -- common/autotest_common.sh@852 -- # return 0 00:07:31.141 11:31:00 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:31.141 11:31:00 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:31.141 11:31:00 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:31.141 11:31:00 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:31.141 11:31:00 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:31.141 11:31:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:31.141 11:31:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:31.141 11:31:00 -- common/autotest_common.sh@10 -- # set +x 00:07:31.141 ************************************ 00:07:31.141 START TEST accel_assign_opcode 00:07:31.141 ************************************ 00:07:31.141 11:31:00 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:07:31.141 11:31:00 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:31.141 11:31:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:31.141 11:31:00 -- common/autotest_common.sh@10 -- # set +x 00:07:31.141 [2024-07-21 11:31:00.217073] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:31.141 11:31:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:31.141 11:31:00 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:31.141 11:31:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:31.141 11:31:00 -- common/autotest_common.sh@10 -- # set +x 00:07:31.141 [2024-07-21 11:31:00.225089] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:31.141 11:31:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:31.141 11:31:00 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:31.141 11:31:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:31.141 11:31:00 -- common/autotest_common.sh@10 -- # set +x 00:07:31.141 11:31:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:31.141 11:31:00 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:31.141 11:31:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:31.141 11:31:00 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:31.141 11:31:00 -- common/autotest_common.sh@10 -- # set +x 00:07:31.141 11:31:00 -- accel/accel_rpc.sh@42 -- # grep software 00:07:31.141 11:31:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:31.141 software 00:07:31.141 00:07:31.141 real 0m0.224s 00:07:31.141 user 0m0.044s 00:07:31.141 sys 0m0.016s 00:07:31.141 11:31:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.141 11:31:00 -- common/autotest_common.sh@10 -- # set +x 
00:07:31.141 ************************************ 00:07:31.141 END TEST accel_assign_opcode 00:07:31.141 ************************************ 00:07:31.141 11:31:00 -- accel/accel_rpc.sh@55 -- # killprocess 2194721 00:07:31.141 11:31:00 -- common/autotest_common.sh@926 -- # '[' -z 2194721 ']' 00:07:31.141 11:31:00 -- common/autotest_common.sh@930 -- # kill -0 2194721 00:07:31.141 11:31:00 -- common/autotest_common.sh@931 -- # uname 00:07:31.141 11:31:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:31.141 11:31:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2194721 00:07:31.141 11:31:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:31.141 11:31:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:31.141 11:31:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2194721' 00:07:31.141 killing process with pid 2194721 00:07:31.141 11:31:00 -- common/autotest_common.sh@945 -- # kill 2194721 00:07:31.141 11:31:00 -- common/autotest_common.sh@950 -- # wait 2194721 00:07:31.708 00:07:31.708 real 0m1.579s 00:07:31.708 user 0m1.600s 00:07:31.708 sys 0m0.488s 00:07:31.708 11:31:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.708 11:31:00 -- common/autotest_common.sh@10 -- # set +x 00:07:31.708 ************************************ 00:07:31.708 END TEST accel_rpc 00:07:31.708 ************************************ 00:07:31.708 11:31:00 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:31.708 11:31:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:31.708 11:31:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:31.708 11:31:00 -- common/autotest_common.sh@10 -- # set +x 00:07:31.708 ************************************ 00:07:31.708 START TEST app_cmdline 00:07:31.708 ************************************ 00:07:31.708 11:31:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:31.708 * Looking for test storage... 00:07:31.708 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:31.708 11:31:00 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:31.708 11:31:00 -- app/cmdline.sh@17 -- # spdk_tgt_pid=2195065 00:07:31.708 11:31:00 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:31.708 11:31:00 -- app/cmdline.sh@18 -- # waitforlisten 2195065 00:07:31.708 11:31:00 -- common/autotest_common.sh@819 -- # '[' -z 2195065 ']' 00:07:31.708 11:31:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.708 11:31:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:31.708 11:31:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.708 11:31:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:31.708 11:31:00 -- common/autotest_common.sh@10 -- # set +x 00:07:31.708 [2024-07-21 11:31:01.029130] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
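accel_rpc drives the same accel layer through the JSON-RPC surface instead: spdk_tgt is started with --wait-for-rpc, the copy opcode is first assigned to a deliberately bogus module name and then to software, and only after framework_start_init does the test confirm where the opcode actually landed. The equivalent flow against a waiting target, as a sketch (script path from the log):

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $RPC accel_assign_opc -o copy -m software      # pin the copy opcode to a module
    $RPC framework_start_init                      # finish the deferred subsystem init
    $RPC accel_get_opc_assignments | jq -r .copy   # expect: software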
00:07:31.708 [2024-07-21 11:31:01.029188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2195065 ] 00:07:31.708 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.708 [2024-07-21 11:31:01.111573] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.966 [2024-07-21 11:31:01.149389] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:31.966 [2024-07-21 11:31:01.149505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.533 11:31:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:32.533 11:31:01 -- common/autotest_common.sh@852 -- # return 0 00:07:32.533 11:31:01 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:32.791 { 00:07:32.791 "version": "SPDK v24.01.1-pre git sha1 4b94202c6", 00:07:32.791 "fields": { 00:07:32.791 "major": 24, 00:07:32.791 "minor": 1, 00:07:32.791 "patch": 1, 00:07:32.791 "suffix": "-pre", 00:07:32.791 "commit": "4b94202c6" 00:07:32.791 } 00:07:32.791 } 00:07:32.791 11:31:01 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:32.791 11:31:01 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:32.791 11:31:01 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:32.791 11:31:01 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:32.791 11:31:01 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:32.791 11:31:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:32.791 11:31:01 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:32.791 11:31:01 -- common/autotest_common.sh@10 -- # set +x 00:07:32.791 11:31:01 -- app/cmdline.sh@26 -- # sort 00:07:32.791 11:31:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:32.791 11:31:02 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:32.791 11:31:02 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:32.791 11:31:02 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:32.791 11:31:02 -- common/autotest_common.sh@640 -- # local es=0 00:07:32.791 11:31:02 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:32.791 11:31:02 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:32.791 11:31:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:32.791 11:31:02 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:32.791 11:31:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:32.791 11:31:02 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:32.791 11:31:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:32.792 11:31:02 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:32.792 11:31:02 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:07:32.792 11:31:02 -- common/autotest_common.sh@643 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:32.792 request: 00:07:32.792 { 00:07:32.792 "method": "env_dpdk_get_mem_stats", 00:07:32.792 "req_id": 1 00:07:32.792 } 00:07:32.792 Got JSON-RPC error response 00:07:32.792 response: 00:07:32.792 { 00:07:32.792 "code": -32601, 00:07:32.792 "message": "Method not found" 00:07:32.792 } 00:07:32.792 11:31:02 -- common/autotest_common.sh@643 -- # es=1 00:07:32.792 11:31:02 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:32.792 11:31:02 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:32.792 11:31:02 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:32.792 11:31:02 -- app/cmdline.sh@1 -- # killprocess 2195065 00:07:32.792 11:31:02 -- common/autotest_common.sh@926 -- # '[' -z 2195065 ']' 00:07:32.792 11:31:02 -- common/autotest_common.sh@930 -- # kill -0 2195065 00:07:32.792 11:31:02 -- common/autotest_common.sh@931 -- # uname 00:07:32.792 11:31:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:32.792 11:31:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2195065 00:07:33.049 11:31:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:33.049 11:31:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:33.049 11:31:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2195065' 00:07:33.049 killing process with pid 2195065 00:07:33.049 11:31:02 -- common/autotest_common.sh@945 -- # kill 2195065 00:07:33.049 11:31:02 -- common/autotest_common.sh@950 -- # wait 2195065 00:07:33.308 00:07:33.308 real 0m1.667s 00:07:33.308 user 0m1.904s 00:07:33.308 sys 0m0.508s 00:07:33.308 11:31:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.308 11:31:02 -- common/autotest_common.sh@10 -- # set +x 00:07:33.308 ************************************ 00:07:33.308 END TEST app_cmdline 00:07:33.308 ************************************ 00:07:33.308 11:31:02 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:33.308 11:31:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:33.308 11:31:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:33.308 11:31:02 -- common/autotest_common.sh@10 -- # set +x 00:07:33.308 ************************************ 00:07:33.308 START TEST version 00:07:33.308 ************************************ 00:07:33.308 11:31:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:33.308 * Looking for test storage... 
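app_cmdline, which just completed, is really a test of RPC allow-listing: the target is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so those two methods answer normally while anything else, such as the env_dpdk_get_mem_stats call above, is refused with JSON-RPC error -32601 (Method not found). As a sketch against a target started the same way:

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $RPC rpc_get_methods          # returns exactly the two allow-listed methods
    $RPC spdk_get_version         # allowed: prints the version object shown above
    $RPC env_dpdk_get_mem_stats   # refused: code -32601, "Method not found"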
00:07:33.308 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:33.308 11:31:02 -- app/version.sh@17 -- # get_header_version major 00:07:33.308 11:31:02 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:33.308 11:31:02 -- app/version.sh@14 -- # cut -f2 00:07:33.308 11:31:02 -- app/version.sh@14 -- # tr -d '"' 00:07:33.308 11:31:02 -- app/version.sh@17 -- # major=24 00:07:33.308 11:31:02 -- app/version.sh@18 -- # get_header_version minor 00:07:33.308 11:31:02 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:33.308 11:31:02 -- app/version.sh@14 -- # cut -f2 00:07:33.308 11:31:02 -- app/version.sh@14 -- # tr -d '"' 00:07:33.308 11:31:02 -- app/version.sh@18 -- # minor=1 00:07:33.308 11:31:02 -- app/version.sh@19 -- # get_header_version patch 00:07:33.308 11:31:02 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:33.308 11:31:02 -- app/version.sh@14 -- # cut -f2 00:07:33.308 11:31:02 -- app/version.sh@14 -- # tr -d '"' 00:07:33.308 11:31:02 -- app/version.sh@19 -- # patch=1 00:07:33.308 11:31:02 -- app/version.sh@20 -- # get_header_version suffix 00:07:33.308 11:31:02 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:33.308 11:31:02 -- app/version.sh@14 -- # cut -f2 00:07:33.308 11:31:02 -- app/version.sh@14 -- # tr -d '"' 00:07:33.308 11:31:02 -- app/version.sh@20 -- # suffix=-pre 00:07:33.308 11:31:02 -- app/version.sh@22 -- # version=24.1 00:07:33.308 11:31:02 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:33.308 11:31:02 -- app/version.sh@25 -- # version=24.1.1 00:07:33.308 11:31:02 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:33.308 11:31:02 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:33.566 11:31:02 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:33.566 11:31:02 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:33.566 11:31:02 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:33.566 00:07:33.567 real 0m0.176s 00:07:33.567 user 0m0.101s 00:07:33.567 sys 0m0.123s 00:07:33.567 11:31:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.567 11:31:02 -- common/autotest_common.sh@10 -- # set +x 00:07:33.567 ************************************ 00:07:33.567 END TEST version 00:07:33.567 ************************************ 00:07:33.567 11:31:02 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:07:33.567 11:31:02 -- spdk/autotest.sh@204 -- # uname -s 00:07:33.567 11:31:02 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:07:33.567 11:31:02 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:33.567 11:31:02 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:33.567 11:31:02 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:07:33.567 11:31:02 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:07:33.567 11:31:02 -- spdk/autotest.sh@268 -- # timing_exit lib 00:07:33.567 11:31:02 -- common/autotest_common.sh@718 -- # 
xtrace_disable 00:07:33.567 11:31:02 -- common/autotest_common.sh@10 -- # set +x 00:07:33.567 11:31:02 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:33.567 11:31:02 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:07:33.567 11:31:02 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:07:33.567 11:31:02 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:07:33.567 11:31:02 -- spdk/autotest.sh@291 -- # '[' rdma = rdma ']' 00:07:33.567 11:31:02 -- spdk/autotest.sh@292 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:33.567 11:31:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:33.567 11:31:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:33.567 11:31:02 -- common/autotest_common.sh@10 -- # set +x 00:07:33.567 ************************************ 00:07:33.567 START TEST nvmf_rdma 00:07:33.567 ************************************ 00:07:33.567 11:31:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:33.567 * Looking for test storage... 00:07:33.567 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:07:33.567 11:31:02 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:33.567 11:31:02 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:33.567 11:31:02 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:33.567 11:31:02 -- nvmf/common.sh@7 -- # uname -s 00:07:33.824 11:31:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:33.824 11:31:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:33.824 11:31:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:33.824 11:31:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:33.824 11:31:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:33.824 11:31:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:33.824 11:31:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:33.824 11:31:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:33.824 11:31:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:33.824 11:31:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:33.824 11:31:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:33.824 11:31:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:33.824 11:31:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:33.824 11:31:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:33.824 11:31:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:33.824 11:31:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:33.824 11:31:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.824 11:31:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.824 11:31:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.824 11:31:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.824 11:31:03 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.824 11:31:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.824 11:31:03 -- paths/export.sh@5 -- # export PATH 00:07:33.824 11:31:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.824 11:31:03 -- nvmf/common.sh@46 -- # : 0 00:07:33.824 11:31:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:33.824 11:31:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:33.824 11:31:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:33.824 11:31:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:33.824 11:31:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:33.824 11:31:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:33.824 11:31:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:33.824 11:31:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:33.824 11:31:03 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:33.824 11:31:03 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:33.824 11:31:03 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:33.824 11:31:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:33.824 11:31:03 -- common/autotest_common.sh@10 -- # set +x 00:07:33.824 11:31:03 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:33.824 11:31:03 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:07:33.824 11:31:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:33.824 11:31:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:33.824 11:31:03 -- common/autotest_common.sh@10 -- # set +x 00:07:33.824 ************************************ 00:07:33.824 START TEST nvmf_example 00:07:33.824 ************************************ 00:07:33.824 11:31:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:07:33.824 * Looking for test storage... 
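Before any nvmf test runs, nvmf/common.sh (sourced above) pins the fabric parameters the whole suite shares: three listener ports starting at 4420, the 192.168.100.0/24 prefix laid over the RDMA interfaces, and a host NQN generated with nvme-cli; NET_TYPE=phy selects the physical-NIC discovery path used below. The key assignments, restated as a sketch:

    NVMF_PORT=4420                     # first NVMe-oF listener port (4421/4422 follow)
    NVMF_IP_PREFIX=192.168.100         # subnet assigned to the RDMA interfaces
    NVME_HOSTNQN=$(nvme gen-hostnqn)   # fresh host NQN for this run (nvme-cli)
    NET_TYPE=phy                       # probe physical ports rather than virtual ones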
00:07:33.824 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:33.824 11:31:03 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:33.824 11:31:03 -- nvmf/common.sh@7 -- # uname -s 00:07:33.824 11:31:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:33.824 11:31:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:33.824 11:31:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:33.824 11:31:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:33.824 11:31:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:33.824 11:31:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:33.824 11:31:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:33.824 11:31:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:33.824 11:31:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:33.824 11:31:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:33.824 11:31:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:33.824 11:31:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:33.824 11:31:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:33.824 11:31:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:33.824 11:31:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:33.824 11:31:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:33.824 11:31:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.824 11:31:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.824 11:31:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.824 11:31:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.824 11:31:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.824 11:31:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.824 11:31:03 -- paths/export.sh@5 -- # export PATH 00:07:33.824 11:31:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.824 11:31:03 -- nvmf/common.sh@46 -- # : 0 00:07:33.824 11:31:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:33.824 11:31:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:33.824 11:31:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:33.824 11:31:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:33.824 11:31:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:33.824 11:31:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:33.824 11:31:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:33.824 11:31:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:33.824 11:31:03 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:33.824 11:31:03 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:33.824 11:31:03 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:33.824 11:31:03 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:33.824 11:31:03 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:33.824 11:31:03 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:33.824 11:31:03 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:33.824 11:31:03 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:33.824 11:31:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:33.824 11:31:03 -- common/autotest_common.sh@10 -- # set +x 00:07:33.824 11:31:03 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:33.824 11:31:03 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:07:33.824 11:31:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:33.824 11:31:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:33.824 11:31:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:33.824 11:31:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:33.824 11:31:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.824 11:31:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:33.824 11:31:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.824 11:31:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:33.824 11:31:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:33.824 11:31:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:33.824 11:31:03 -- 
common/autotest_common.sh@10 -- # set +x 00:07:41.946 11:31:11 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:41.946 11:31:11 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:41.946 11:31:11 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:41.946 11:31:11 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:41.946 11:31:11 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:41.946 11:31:11 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:41.946 11:31:11 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:41.946 11:31:11 -- nvmf/common.sh@294 -- # net_devs=() 00:07:41.946 11:31:11 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:41.946 11:31:11 -- nvmf/common.sh@295 -- # e810=() 00:07:41.946 11:31:11 -- nvmf/common.sh@295 -- # local -ga e810 00:07:41.946 11:31:11 -- nvmf/common.sh@296 -- # x722=() 00:07:41.946 11:31:11 -- nvmf/common.sh@296 -- # local -ga x722 00:07:41.946 11:31:11 -- nvmf/common.sh@297 -- # mlx=() 00:07:41.946 11:31:11 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:41.946 11:31:11 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:41.946 11:31:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:41.946 11:31:11 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:41.946 11:31:11 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:41.946 11:31:11 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:41.946 11:31:11 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:41.946 11:31:11 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:41.946 11:31:11 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:41.946 11:31:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:41.946 11:31:11 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:41.946 11:31:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:41.946 11:31:11 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:41.946 11:31:11 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:07:41.946 11:31:11 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:07:41.946 11:31:11 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:07:41.946 11:31:11 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:07:41.946 11:31:11 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:07:41.946 11:31:11 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:41.946 11:31:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:41.946 11:31:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:07:41.946 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:07:41.946 11:31:11 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:07:41.946 11:31:11 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:07:41.946 11:31:11 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:41.946 11:31:11 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:41.946 11:31:11 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:07:41.946 11:31:11 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:07:41.946 11:31:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:41.946 11:31:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:07:41.946 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:07:41.946 11:31:11 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:07:41.946 11:31:11 -- nvmf/common.sh@345 -- # [[ 
mlx5_core == unbound ]] 00:07:41.946 11:31:11 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:41.946 11:31:11 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:41.946 11:31:11 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:07:41.946 11:31:11 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:07:41.946 11:31:11 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:41.946 11:31:11 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:07:41.946 11:31:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:41.946 11:31:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.946 11:31:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:41.946 11:31:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.946 11:31:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:07:41.946 Found net devices under 0000:d9:00.0: mlx_0_0 00:07:41.946 11:31:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.946 11:31:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:41.946 11:31:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.946 11:31:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:41.946 11:31:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.946 11:31:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:07:41.946 Found net devices under 0000:d9:00.1: mlx_0_1 00:07:41.946 11:31:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.946 11:31:11 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:41.946 11:31:11 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:41.946 11:31:11 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:41.946 11:31:11 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:07:41.946 11:31:11 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:07:41.946 11:31:11 -- nvmf/common.sh@408 -- # rdma_device_init 00:07:41.946 11:31:11 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:07:41.946 11:31:11 -- nvmf/common.sh@57 -- # uname 00:07:41.946 11:31:11 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:07:41.946 11:31:11 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:07:41.946 11:31:11 -- nvmf/common.sh@62 -- # modprobe ib_core 00:07:41.946 11:31:11 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:07:41.946 11:31:11 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:07:41.946 11:31:11 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:07:41.946 11:31:11 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:07:41.946 11:31:11 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:07:41.946 11:31:11 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:07:41.946 11:31:11 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:41.946 11:31:11 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:07:41.946 11:31:11 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:41.946 11:31:11 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:07:41.946 11:31:11 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:07:41.946 11:31:11 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:41.946 11:31:11 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:07:41.946 11:31:11 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:41.946 11:31:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:41.946 11:31:11 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:41.946 11:31:11 -- nvmf/common.sh@103 
-- # echo mlx_0_0 00:07:41.946 11:31:11 -- nvmf/common.sh@104 -- # continue 2 00:07:41.946 11:31:11 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:41.946 11:31:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:41.946 11:31:11 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:41.946 11:31:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:41.946 11:31:11 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:41.946 11:31:11 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:07:41.946 11:31:11 -- nvmf/common.sh@104 -- # continue 2 00:07:41.946 11:31:11 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:07:41.946 11:31:11 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:07:41.946 11:31:11 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:07:41.946 11:31:11 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:07:41.946 11:31:11 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:41.946 11:31:11 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:41.946 11:31:11 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:07:41.946 11:31:11 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:07:41.947 11:31:11 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:07:42.205 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:42.205 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:07:42.205 altname enp217s0f0np0 00:07:42.205 altname ens818f0np0 00:07:42.205 inet 192.168.100.8/24 scope global mlx_0_0 00:07:42.205 valid_lft forever preferred_lft forever 00:07:42.205 11:31:11 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:07:42.205 11:31:11 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:07:42.205 11:31:11 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:07:42.205 11:31:11 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:07:42.205 11:31:11 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:42.205 11:31:11 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:42.205 11:31:11 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:07:42.205 11:31:11 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:07:42.205 11:31:11 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:07:42.205 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:42.205 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:07:42.205 altname enp217s0f1np1 00:07:42.205 altname ens818f1np1 00:07:42.205 inet 192.168.100.9/24 scope global mlx_0_1 00:07:42.205 valid_lft forever preferred_lft forever 00:07:42.205 11:31:11 -- nvmf/common.sh@410 -- # return 0 00:07:42.205 11:31:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:42.205 11:31:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:42.205 11:31:11 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:07:42.205 11:31:11 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:07:42.205 11:31:11 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:07:42.205 11:31:11 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:42.205 11:31:11 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:07:42.205 11:31:11 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:07:42.205 11:31:11 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:42.205 11:31:11 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:07:42.205 11:31:11 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:42.205 11:31:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:42.205 11:31:11 -- 
nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:42.205 11:31:11 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:07:42.205 11:31:11 -- nvmf/common.sh@104 -- # continue 2 00:07:42.205 11:31:11 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:42.205 11:31:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:42.205 11:31:11 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:42.205 11:31:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:42.205 11:31:11 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:42.205 11:31:11 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:07:42.205 11:31:11 -- nvmf/common.sh@104 -- # continue 2 00:07:42.205 11:31:11 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:07:42.205 11:31:11 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:07:42.205 11:31:11 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:07:42.205 11:31:11 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:07:42.205 11:31:11 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:42.205 11:31:11 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:42.205 11:31:11 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:07:42.205 11:31:11 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:07:42.205 11:31:11 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:07:42.205 11:31:11 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:07:42.205 11:31:11 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:42.205 11:31:11 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:42.205 11:31:11 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:07:42.205 192.168.100.9' 00:07:42.205 11:31:11 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:07:42.205 192.168.100.9' 00:07:42.205 11:31:11 -- nvmf/common.sh@445 -- # head -n 1 00:07:42.205 11:31:11 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:42.205 11:31:11 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:07:42.205 192.168.100.9' 00:07:42.205 11:31:11 -- nvmf/common.sh@446 -- # head -n 1 00:07:42.205 11:31:11 -- nvmf/common.sh@446 -- # tail -n +2 00:07:42.206 11:31:11 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:42.206 11:31:11 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:07:42.206 11:31:11 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:42.206 11:31:11 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:07:42.206 11:31:11 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:07:42.206 11:31:11 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:07:42.206 11:31:11 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:42.206 11:31:11 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:42.206 11:31:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:42.206 11:31:11 -- common/autotest_common.sh@10 -- # set +x 00:07:42.206 11:31:11 -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:07:42.206 11:31:11 -- target/nvmf_example.sh@34 -- # nvmfpid=2199635 00:07:42.206 11:31:11 -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:42.206 11:31:11 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:42.206 11:31:11 -- target/nvmf_example.sh@36 -- # waitforlisten 2199635 00:07:42.206 11:31:11 -- common/autotest_common.sh@819 -- # '[' -z 2199635 ']' 00:07:42.206 11:31:11 -- 
common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.206 11:31:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:42.206 11:31:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.206 11:31:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:42.206 11:31:11 -- common/autotest_common.sh@10 -- # set +x 00:07:42.206 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.143 11:31:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:43.143 11:31:12 -- common/autotest_common.sh@852 -- # return 0 00:07:43.143 11:31:12 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:43.143 11:31:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:43.143 11:31:12 -- common/autotest_common.sh@10 -- # set +x 00:07:43.143 11:31:12 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:43.143 11:31:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:43.143 11:31:12 -- common/autotest_common.sh@10 -- # set +x 00:07:43.402 11:31:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:43.402 11:31:12 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:43.402 11:31:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:43.402 11:31:12 -- common/autotest_common.sh@10 -- # set +x 00:07:43.402 11:31:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:43.402 11:31:12 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:43.402 11:31:12 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:43.402 11:31:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:43.402 11:31:12 -- common/autotest_common.sh@10 -- # set +x 00:07:43.402 11:31:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:43.402 11:31:12 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:43.402 11:31:12 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:43.402 11:31:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:43.402 11:31:12 -- common/autotest_common.sh@10 -- # set +x 00:07:43.402 11:31:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:43.402 11:31:12 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:43.402 11:31:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:43.402 11:31:12 -- common/autotest_common.sh@10 -- # set +x 00:07:43.402 11:31:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:43.402 11:31:12 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:43.402 11:31:12 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:43.402 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.603 Initializing NVMe Controllers 00:07:55.603 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:07:55.603 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 
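For reference, the nvmf_example bring-up traced above reduces to five RPCs against the running target followed by the perf run. A minimal sketch of the same sequence issued by hand (assuming SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock; the NQN, the 192.168.100.8:4420 listener, and every flag value are the ones visible in this log):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # RDMA transport with 1024 shared buffers and an 8 KiB IO unit, as in the trace
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  # 64 MiB RAM-backed bdev with 512-byte blocks; the call prints the bdev name (Malloc0)
  $rpc bdev_malloc_create 64 512
  # subsystem allowing any host (-a) with the serial used above
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  # expose Malloc0 as namespace 1 and listen on the first RDMA IP
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  # the workload running below: queue depth 64, 4 KiB random mixed I/O for 10 seconds
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'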
00:07:55.603 Initialization complete. Launching workers.
00:07:55.603 ========================================================
00:07:55.603                                                            Latency(us)
00:07:55.603 Device Information                                                      :       IOPS      MiB/s    Average        min        max
00:07:55.603 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   26737.10     104.44    2394.75     592.10   12082.84
00:07:55.603 ========================================================
00:07:55.603 Total                                                                   :   26737.10     104.44    2394.75     592.10   12082.84
00:07:55.603
00:07:55.603 11:31:23 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:07:55.603 11:31:23 -- target/nvmf_example.sh@66 -- # nvmftestfini
00:07:55.603 11:31:23 -- nvmf/common.sh@476 -- # nvmfcleanup
00:07:55.603 11:31:23 -- nvmf/common.sh@116 -- # sync
00:07:55.603 11:31:23 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']'
00:07:55.603 11:31:23 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']'
00:07:55.603 11:31:23 -- nvmf/common.sh@119 -- # set +e
00:07:55.603 11:31:23 -- nvmf/common.sh@120 -- # for i in {1..20}
00:07:55.603 11:31:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma
00:07:55.603 rmmod nvme_rdma
00:07:55.603 rmmod nvme_fabrics
00:07:55.603 11:31:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:07:55.603 11:31:23 -- nvmf/common.sh@123 -- # set -e
00:07:55.603 11:31:23 -- nvmf/common.sh@124 -- # return 0
00:07:55.603 11:31:23 -- nvmf/common.sh@477 -- # '[' -n 2199635 ']'
00:07:55.603 11:31:23 -- nvmf/common.sh@478 -- # killprocess 2199635
00:07:55.603 11:31:23 -- common/autotest_common.sh@926 -- # '[' -z 2199635 ']'
00:07:55.603 11:31:23 -- common/autotest_common.sh@930 -- # kill -0 2199635
00:07:55.603 11:31:23 -- common/autotest_common.sh@931 -- # uname
00:07:55.603 11:31:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:07:55.603 11:31:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2199635
00:07:55.603 11:31:23 -- common/autotest_common.sh@932 -- # process_name=nvmf
00:07:55.603 11:31:23 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']'
00:07:55.603 11:31:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2199635'
killing process with pid 2199635
00:07:55.603 11:31:23 -- common/autotest_common.sh@945 -- # kill 2199635
00:07:55.603 11:31:23 -- common/autotest_common.sh@950 -- # wait 2199635
00:07:55.603 nvmf threads initialize successfully
00:07:55.603 bdev subsystem init successfully
00:07:55.603 created a nvmf target service
00:07:55.603 create targets's poll groups done
00:07:55.603 all subsystems of target started
00:07:55.603 nvmf target is running
00:07:55.603 all subsystems of target stopped
00:07:55.603 destroy targets's poll groups done
00:07:55.603 destroyed the nvmf target service
00:07:55.603 bdev subsystem finish successfully
00:07:55.603 nvmf threads destroy successfully
00:07:55.603 11:31:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:07:55.603 11:31:24 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]]
00:07:55.603 11:31:24 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:07:55.603 11:31:24 -- common/autotest_common.sh@718 -- # xtrace_disable
00:07:55.603 11:31:24 -- common/autotest_common.sh@10 -- # set +x
00:07:55.603
00:07:55.603 real 0m21.231s
00:07:55.603 user 0m52.612s
00:07:55.603 sys 0m6.759s
00:07:55.603 11:31:24 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:55.603 11:31:24 -- common/autotest_common.sh@10 -- # set +x
00:07:55.603 ************************************
00:07:55.603 END TEST nvmf_example
************************************ 00:07:55.603 11:31:24 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:07:55.603 11:31:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:55.603 11:31:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:55.603 11:31:24 -- common/autotest_common.sh@10 -- # set +x 00:07:55.603 ************************************ 00:07:55.603 START TEST nvmf_filesystem 00:07:55.603 ************************************ 00:07:55.603 11:31:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:07:55.603 * Looking for test storage... 00:07:55.603 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:55.603 11:31:24 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:07:55.603 11:31:24 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:55.603 11:31:24 -- common/autotest_common.sh@34 -- # set -e 00:07:55.603 11:31:24 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:55.603 11:31:24 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:55.603 11:31:24 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:55.603 11:31:24 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:07:55.603 11:31:24 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:55.603 11:31:24 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:55.603 11:31:24 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:55.603 11:31:24 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:55.603 11:31:24 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:55.603 11:31:24 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:55.603 11:31:24 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:55.603 11:31:24 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:55.603 11:31:24 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:55.603 11:31:24 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:55.603 11:31:24 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:55.603 11:31:24 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:55.603 11:31:24 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:55.603 11:31:24 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:55.603 11:31:24 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:55.603 11:31:24 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:55.603 11:31:24 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:55.603 11:31:24 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:55.603 11:31:24 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:07:55.603 11:31:24 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:55.603 11:31:24 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:55.603 11:31:24 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:55.603 11:31:24 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:55.603 11:31:24 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:55.603 11:31:24 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:55.603 11:31:24 -- common/build_config.sh@26 -- # 
CONFIG_HAVE_ARC4RANDOM=y 00:07:55.603 11:31:24 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:55.603 11:31:24 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:55.603 11:31:24 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:55.603 11:31:24 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:55.603 11:31:24 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:55.603 11:31:24 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:55.603 11:31:24 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:55.603 11:31:24 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:55.603 11:31:24 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:55.603 11:31:24 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:07:55.603 11:31:24 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:55.603 11:31:24 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:55.603 11:31:24 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:55.603 11:31:24 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:55.603 11:31:24 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:55.603 11:31:24 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:55.603 11:31:24 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:55.603 11:31:24 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:55.603 11:31:24 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:55.603 11:31:24 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:55.603 11:31:24 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:55.603 11:31:24 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:55.603 11:31:24 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:55.603 11:31:24 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:55.603 11:31:24 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:07:55.603 11:31:24 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:55.603 11:31:24 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:07:55.604 11:31:24 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:55.604 11:31:24 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:55.604 11:31:24 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:55.604 11:31:24 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:55.604 11:31:24 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:07:55.604 11:31:24 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:55.604 11:31:24 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:07:55.604 11:31:24 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:55.604 11:31:24 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:55.604 11:31:24 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:55.604 11:31:24 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:07:55.604 11:31:24 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:55.604 11:31:24 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:55.604 11:31:24 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:55.604 11:31:24 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:07:55.604 11:31:24 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:55.604 11:31:24 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:55.604 11:31:24 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:55.604 11:31:24 
-- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:55.604 11:31:24 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:55.604 11:31:24 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:55.604 11:31:24 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:55.604 11:31:24 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:55.604 11:31:24 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:55.604 11:31:24 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:55.604 11:31:24 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:07:55.604 11:31:24 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:07:55.604 11:31:24 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:07:55.604 11:31:24 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:07:55.604 11:31:24 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:07:55.604 11:31:24 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:55.604 11:31:24 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:07:55.604 11:31:24 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:55.604 11:31:24 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:07:55.604 11:31:24 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:55.604 11:31:24 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:55.604 11:31:24 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:55.604 11:31:24 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:55.604 11:31:24 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:55.604 11:31:24 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:55.604 11:31:24 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:07:55.604 11:31:24 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:55.604 #define SPDK_CONFIG_H 00:07:55.604 #define SPDK_CONFIG_APPS 1 00:07:55.604 #define SPDK_CONFIG_ARCH native 00:07:55.604 #undef SPDK_CONFIG_ASAN 00:07:55.604 #undef SPDK_CONFIG_AVAHI 00:07:55.604 #undef SPDK_CONFIG_CET 00:07:55.604 #define SPDK_CONFIG_COVERAGE 1 00:07:55.604 #define SPDK_CONFIG_CROSS_PREFIX 00:07:55.604 #undef SPDK_CONFIG_CRYPTO 00:07:55.604 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:55.604 #undef SPDK_CONFIG_CUSTOMOCF 00:07:55.604 #undef SPDK_CONFIG_DAOS 00:07:55.604 #define SPDK_CONFIG_DAOS_DIR 00:07:55.604 #define SPDK_CONFIG_DEBUG 1 00:07:55.604 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:55.604 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:07:55.604 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:55.604 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:55.604 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:55.604 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:07:55.604 #define SPDK_CONFIG_EXAMPLES 1 00:07:55.604 #undef SPDK_CONFIG_FC 00:07:55.604 #define SPDK_CONFIG_FC_PATH 00:07:55.604 #define 
SPDK_CONFIG_FIO_PLUGIN 1 00:07:55.604 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:55.604 #undef SPDK_CONFIG_FUSE 00:07:55.604 #undef SPDK_CONFIG_FUZZER 00:07:55.604 #define SPDK_CONFIG_FUZZER_LIB 00:07:55.604 #undef SPDK_CONFIG_GOLANG 00:07:55.604 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:55.604 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:55.604 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:55.604 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:55.604 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:55.604 #define SPDK_CONFIG_IDXD 1 00:07:55.604 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:55.604 #undef SPDK_CONFIG_IPSEC_MB 00:07:55.604 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:55.604 #define SPDK_CONFIG_ISAL 1 00:07:55.604 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:55.604 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:55.604 #define SPDK_CONFIG_LIBDIR 00:07:55.604 #undef SPDK_CONFIG_LTO 00:07:55.604 #define SPDK_CONFIG_MAX_LCORES 00:07:55.604 #define SPDK_CONFIG_NVME_CUSE 1 00:07:55.604 #undef SPDK_CONFIG_OCF 00:07:55.604 #define SPDK_CONFIG_OCF_PATH 00:07:55.604 #define SPDK_CONFIG_OPENSSL_PATH 00:07:55.604 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:55.604 #undef SPDK_CONFIG_PGO_USE 00:07:55.604 #define SPDK_CONFIG_PREFIX /usr/local 00:07:55.604 #undef SPDK_CONFIG_RAID5F 00:07:55.604 #undef SPDK_CONFIG_RBD 00:07:55.604 #define SPDK_CONFIG_RDMA 1 00:07:55.604 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:55.604 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:55.604 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:55.604 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:55.604 #define SPDK_CONFIG_SHARED 1 00:07:55.604 #undef SPDK_CONFIG_SMA 00:07:55.604 #define SPDK_CONFIG_TESTS 1 00:07:55.604 #undef SPDK_CONFIG_TSAN 00:07:55.604 #define SPDK_CONFIG_UBLK 1 00:07:55.604 #define SPDK_CONFIG_UBSAN 1 00:07:55.604 #undef SPDK_CONFIG_UNIT_TESTS 00:07:55.604 #undef SPDK_CONFIG_URING 00:07:55.604 #define SPDK_CONFIG_URING_PATH 00:07:55.604 #undef SPDK_CONFIG_URING_ZNS 00:07:55.604 #undef SPDK_CONFIG_USDT 00:07:55.604 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:55.604 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:55.604 #undef SPDK_CONFIG_VFIO_USER 00:07:55.604 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:55.604 #define SPDK_CONFIG_VHOST 1 00:07:55.604 #define SPDK_CONFIG_VIRTIO 1 00:07:55.604 #undef SPDK_CONFIG_VTUNE 00:07:55.604 #define SPDK_CONFIG_VTUNE_DIR 00:07:55.604 #define SPDK_CONFIG_WERROR 1 00:07:55.604 #define SPDK_CONFIG_WPDK_DIR 00:07:55.604 #undef SPDK_CONFIG_XNVME 00:07:55.604 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:55.604 11:31:24 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:55.604 11:31:24 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:55.604 11:31:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:55.604 11:31:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:55.604 11:31:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:55.604 11:31:24 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.604 11:31:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.604 11:31:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.604 11:31:24 -- paths/export.sh@5 -- # export PATH 00:07:55.604 11:31:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.604 11:31:24 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:07:55.604 11:31:24 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:07:55.604 11:31:24 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:07:55.604 11:31:24 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:07:55.604 11:31:24 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:55.604 11:31:24 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:55.604 11:31:24 -- pm/common@16 -- # TEST_TAG=N/A 00:07:55.604 11:31:24 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:07:55.604 11:31:24 -- common/autotest_common.sh@52 -- # : 1 00:07:55.604 11:31:24 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:07:55.604 11:31:24 -- common/autotest_common.sh@56 -- # : 0 00:07:55.604 11:31:24 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:55.604 11:31:24 -- common/autotest_common.sh@58 -- # : 0 
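The paired ': <value>' / 'export <FLAG>' records above and below are the xtrace signature of bash default assignment in autotest_common.sh: each SPDK_TEST_*/SPDK_RUN_* flag keeps whatever value the CI job exported and falls back to a default otherwise. A sketch of the idiom (the flag name and its default here are the ones visible at @116-@117 of this trace):

  : "${SPDK_RUN_UBSAN:=1}"   # assign 1 only if unset or empty; xtrace shows '# : 1'
  export SPDK_RUN_UBSAN      # make it visible to child scripts; xtrace shows '# export SPDK_RUN_UBSAN'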
00:07:55.604 11:31:24 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:07:55.604 11:31:24 -- common/autotest_common.sh@60 -- # : 1 00:07:55.604 11:31:24 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:55.604 11:31:24 -- common/autotest_common.sh@62 -- # : 0 00:07:55.604 11:31:24 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:07:55.604 11:31:24 -- common/autotest_common.sh@64 -- # : 00:07:55.604 11:31:24 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:07:55.604 11:31:24 -- common/autotest_common.sh@66 -- # : 0 00:07:55.604 11:31:24 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:07:55.604 11:31:24 -- common/autotest_common.sh@68 -- # : 0 00:07:55.604 11:31:24 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:07:55.604 11:31:24 -- common/autotest_common.sh@70 -- # : 0 00:07:55.604 11:31:24 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:07:55.604 11:31:24 -- common/autotest_common.sh@72 -- # : 0 00:07:55.604 11:31:24 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:55.604 11:31:24 -- common/autotest_common.sh@74 -- # : 0 00:07:55.604 11:31:24 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:07:55.604 11:31:24 -- common/autotest_common.sh@76 -- # : 0 00:07:55.604 11:31:24 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:07:55.604 11:31:24 -- common/autotest_common.sh@78 -- # : 0 00:07:55.604 11:31:24 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:07:55.604 11:31:24 -- common/autotest_common.sh@80 -- # : 1 00:07:55.604 11:31:24 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:07:55.604 11:31:24 -- common/autotest_common.sh@82 -- # : 0 00:07:55.604 11:31:24 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:07:55.604 11:31:24 -- common/autotest_common.sh@84 -- # : 0 00:07:55.604 11:31:24 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:07:55.604 11:31:24 -- common/autotest_common.sh@86 -- # : 1 00:07:55.604 11:31:24 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:07:55.604 11:31:24 -- common/autotest_common.sh@88 -- # : 0 00:07:55.604 11:31:24 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:07:55.604 11:31:24 -- common/autotest_common.sh@90 -- # : 0 00:07:55.604 11:31:24 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:55.604 11:31:24 -- common/autotest_common.sh@92 -- # : 0 00:07:55.604 11:31:24 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:07:55.604 11:31:24 -- common/autotest_common.sh@94 -- # : 0 00:07:55.604 11:31:24 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:07:55.604 11:31:24 -- common/autotest_common.sh@96 -- # : rdma 00:07:55.605 11:31:24 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:55.605 11:31:24 -- common/autotest_common.sh@98 -- # : 0 00:07:55.605 11:31:24 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:07:55.605 11:31:24 -- common/autotest_common.sh@100 -- # : 0 00:07:55.605 11:31:24 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:07:55.605 11:31:24 -- common/autotest_common.sh@102 -- # : 0 00:07:55.605 11:31:24 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:07:55.605 11:31:24 -- common/autotest_common.sh@104 -- # : 0 00:07:55.605 11:31:24 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:07:55.605 11:31:24 -- common/autotest_common.sh@106 
-- # : 0 00:07:55.605 11:31:24 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:07:55.605 11:31:24 -- common/autotest_common.sh@108 -- # : 0 00:07:55.605 11:31:24 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:07:55.605 11:31:24 -- common/autotest_common.sh@110 -- # : 0 00:07:55.605 11:31:24 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:07:55.605 11:31:24 -- common/autotest_common.sh@112 -- # : 0 00:07:55.605 11:31:24 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:55.605 11:31:24 -- common/autotest_common.sh@114 -- # : 0 00:07:55.605 11:31:24 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:07:55.605 11:31:24 -- common/autotest_common.sh@116 -- # : 1 00:07:55.605 11:31:24 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:07:55.605 11:31:24 -- common/autotest_common.sh@118 -- # : /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:07:55.605 11:31:24 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:55.605 11:31:24 -- common/autotest_common.sh@120 -- # : 0 00:07:55.605 11:31:24 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:07:55.605 11:31:24 -- common/autotest_common.sh@122 -- # : 0 00:07:55.605 11:31:24 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:07:55.605 11:31:24 -- common/autotest_common.sh@124 -- # : 0 00:07:55.605 11:31:24 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:07:55.605 11:31:24 -- common/autotest_common.sh@126 -- # : 0 00:07:55.605 11:31:24 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:07:55.605 11:31:24 -- common/autotest_common.sh@128 -- # : 0 00:07:55.605 11:31:24 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:07:55.605 11:31:24 -- common/autotest_common.sh@130 -- # : 0 00:07:55.605 11:31:24 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:07:55.605 11:31:24 -- common/autotest_common.sh@132 -- # : v23.11 00:07:55.605 11:31:24 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:07:55.605 11:31:24 -- common/autotest_common.sh@134 -- # : true 00:07:55.605 11:31:24 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:07:55.605 11:31:24 -- common/autotest_common.sh@136 -- # : 0 00:07:55.605 11:31:24 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:07:55.605 11:31:24 -- common/autotest_common.sh@138 -- # : 0 00:07:55.605 11:31:24 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:07:55.605 11:31:24 -- common/autotest_common.sh@140 -- # : 0 00:07:55.605 11:31:24 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:07:55.605 11:31:24 -- common/autotest_common.sh@142 -- # : 0 00:07:55.605 11:31:24 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:07:55.605 11:31:24 -- common/autotest_common.sh@144 -- # : 0 00:07:55.605 11:31:24 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:07:55.605 11:31:24 -- common/autotest_common.sh@146 -- # : 0 00:07:55.605 11:31:24 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:07:55.605 11:31:24 -- common/autotest_common.sh@148 -- # : mlx5 00:07:55.605 11:31:24 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:07:55.605 11:31:24 -- common/autotest_common.sh@150 -- # : 0 00:07:55.605 11:31:24 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:07:55.605 11:31:24 -- common/autotest_common.sh@152 -- # : 0 00:07:55.605 11:31:24 -- common/autotest_common.sh@153 -- # export 
SPDK_TEST_DAOS 00:07:55.605 11:31:24 -- common/autotest_common.sh@154 -- # : 0 00:07:55.605 11:31:24 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:07:55.605 11:31:24 -- common/autotest_common.sh@156 -- # : 0 00:07:55.605 11:31:24 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:07:55.605 11:31:24 -- common/autotest_common.sh@158 -- # : 0 00:07:55.605 11:31:24 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:07:55.605 11:31:24 -- common/autotest_common.sh@160 -- # : 0 00:07:55.605 11:31:24 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:07:55.605 11:31:24 -- common/autotest_common.sh@163 -- # : 00:07:55.605 11:31:24 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:07:55.605 11:31:24 -- common/autotest_common.sh@165 -- # : 0 00:07:55.605 11:31:24 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:07:55.605 11:31:24 -- common/autotest_common.sh@167 -- # : 0 00:07:55.605 11:31:24 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:55.605 11:31:24 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:07:55.605 11:31:24 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:07:55.605 11:31:24 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:55.605 11:31:24 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:55.605 11:31:24 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:55.605 11:31:24 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:55.605 11:31:24 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:55.605 11:31:24 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:55.605 11:31:24 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:55.605 11:31:24 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:55.605 11:31:24 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:55.605 11:31:24 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:55.605 11:31:24 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:55.605 11:31:24 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:07:55.605 11:31:24 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:55.605 11:31:24 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:55.605 11:31:24 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:55.605 11:31:24 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:55.605 11:31:24 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:55.605 11:31:24 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:07:55.605 11:31:24 -- common/autotest_common.sh@196 -- # cat 00:07:55.605 11:31:24 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:07:55.605 11:31:24 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:55.605 11:31:24 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:55.605 11:31:24 -- 
common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:55.605 11:31:24 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:55.605 11:31:24 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:07:55.605 11:31:24 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:07:55.605 11:31:24 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:07:55.605 11:31:24 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:07:55.605 11:31:24 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:07:55.605 11:31:24 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:07:55.605 11:31:24 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:55.605 11:31:24 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:55.605 11:31:24 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:55.605 11:31:24 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:55.605 11:31:24 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:55.605 11:31:24 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:55.605 11:31:24 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:55.605 11:31:24 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:55.605 11:31:24 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:07:55.605 11:31:24 -- common/autotest_common.sh@249 -- # export valgrind= 00:07:55.605 11:31:24 -- common/autotest_common.sh@249 -- # valgrind= 00:07:55.605 11:31:24 -- common/autotest_common.sh@255 -- # uname -s 00:07:55.605 11:31:24 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:07:55.605 11:31:24 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:07:55.605 11:31:24 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:07:55.605 11:31:24 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:07:55.605 11:31:24 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:07:55.605 11:31:24 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:07:55.605 11:31:24 -- common/autotest_common.sh@265 -- # MAKE=make 00:07:55.605 11:31:24 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j112 00:07:55.605 11:31:24 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:07:55.605 11:31:24 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:07:55.605 11:31:24 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:07:55.605 11:31:24 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:07:55.605 11:31:24 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:07:55.605 11:31:24 -- common/autotest_common.sh@291 -- # for i in "$@" 00:07:55.605 11:31:24 -- common/autotest_common.sh@292 -- # case "$i" in 00:07:55.605 11:31:24 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=rdma 00:07:55.605 11:31:24 -- common/autotest_common.sh@309 -- # [[ -z 2201880 ]] 00:07:55.605 11:31:24 -- 
common/autotest_common.sh@309 -- # kill -0 2201880 00:07:55.605 11:31:24 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:07:55.605 11:31:24 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:07:55.605 11:31:24 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:07:55.605 11:31:24 -- common/autotest_common.sh@322 -- # local mount target_dir 00:07:55.605 11:31:24 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:07:55.605 11:31:24 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:07:55.605 11:31:24 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:07:55.605 11:31:24 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:07:55.605 11:31:24 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.7XpJsD 00:07:55.605 11:31:24 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:55.605 11:31:24 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:07:55.605 11:31:24 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:07:55.605 11:31:24 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.7XpJsD/tests/target /tmp/spdk.7XpJsD 00:07:55.605 11:31:24 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:07:55.605 11:31:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:55.606 11:31:24 -- common/autotest_common.sh@318 -- # df -T 00:07:55.606 11:31:24 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:07:55.606 11:31:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:07:55.606 11:31:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:07:55.606 11:31:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:07:55.606 11:31:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:07:55.606 11:31:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:07:55.606 11:31:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:55.606 11:31:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:07:55.606 11:31:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:07:55.606 11:31:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=951066624 00:07:55.606 11:31:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:07:55.606 11:31:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=4333363200 00:07:55.606 11:31:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:55.606 11:31:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:07:55.606 11:31:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:07:55.606 11:31:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=49463070720 00:07:55.606 11:31:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=61742276608 00:07:55.606 11:31:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=12279205888 00:07:55.606 11:31:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:55.606 11:31:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:55.606 11:31:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:55.606 11:31:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=30817619968 00:07:55.606 11:31:24 -- 
common/autotest_common.sh@353 -- # sizes["$mount"]=30871138304 00:07:55.606 11:31:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=53518336 00:07:55.606 11:31:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:55.606 11:31:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:55.606 11:31:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:55.606 11:31:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=12338679808 00:07:55.606 11:31:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=12348456960 00:07:55.606 11:31:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=9777152 00:07:55.606 11:31:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:55.606 11:31:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:55.606 11:31:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:55.606 11:31:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=30867849216 00:07:55.606 11:31:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=30871138304 00:07:55.606 11:31:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=3289088 00:07:55.606 11:31:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:55.606 11:31:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:55.606 11:31:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:55.606 11:31:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=6174220288 00:07:55.606 11:31:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6174224384 00:07:55.606 11:31:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:07:55.606 11:31:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:55.606 11:31:24 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:07:55.606 * Looking for test storage... 
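With df -T parsed into the mounts/fss/sizes/avails/uses maps above, the storage check that follows is plain arithmetic on the mount backing the test directory. A worked sketch with this run's numbers for the spdk_root overlay at / (names shortened from the arrays in the trace):

  requested_size=2214592512             # the 2 GiB request plus a 64 MiB margin
  target_space=49463070720              # avail on the overlay at /
  uses=12279205888                      # space already used on that mount
  size=61742276608                      # total size of that mount
  (( target_space >= requested_size ))  # true: enough free space
  new_size=$(( uses + requested_size )) # 12279205888 + 2214592512 = 14493798400
  (( new_size * 100 / size > 95 )) ||   # 14493798400*100/61742276608 = 23, well under 95
      echo "test dir fits"              # so SPDK_TEST_STORAGE stays on .../test/nvmf/target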
00:07:55.606 11:31:24 -- common/autotest_common.sh@359 -- # local target_space new_size 00:07:55.606 11:31:24 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:07:55.606 11:31:24 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:55.606 11:31:24 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:55.606 11:31:24 -- common/autotest_common.sh@363 -- # mount=/ 00:07:55.606 11:31:24 -- common/autotest_common.sh@365 -- # target_space=49463070720 00:07:55.606 11:31:24 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:07:55.606 11:31:24 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:07:55.606 11:31:24 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:07:55.606 11:31:24 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:07:55.606 11:31:24 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:07:55.606 11:31:24 -- common/autotest_common.sh@372 -- # new_size=14493798400 00:07:55.606 11:31:24 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:55.606 11:31:24 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:55.606 11:31:24 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:55.606 11:31:24 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:55.606 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:55.606 11:31:24 -- common/autotest_common.sh@380 -- # return 0 00:07:55.606 11:31:24 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:07:55.606 11:31:24 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:07:55.606 11:31:24 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:55.606 11:31:24 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:55.606 11:31:24 -- common/autotest_common.sh@1672 -- # true 00:07:55.606 11:31:24 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:07:55.606 11:31:24 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:55.606 11:31:24 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:55.606 11:31:24 -- common/autotest_common.sh@27 -- # exec 00:07:55.606 11:31:24 -- common/autotest_common.sh@29 -- # exec 00:07:55.606 11:31:24 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:55.606 11:31:24 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:55.606 11:31:24 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:55.606 11:31:24 -- common/autotest_common.sh@18 -- # set -x 00:07:55.606 11:31:24 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:55.606 11:31:24 -- nvmf/common.sh@7 -- # uname -s 00:07:55.606 11:31:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:55.606 11:31:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:55.606 11:31:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:55.606 11:31:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:55.606 11:31:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:55.606 11:31:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:55.606 11:31:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:55.606 11:31:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:55.606 11:31:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:55.606 11:31:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:55.606 11:31:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:55.606 11:31:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:55.606 11:31:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:55.606 11:31:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:55.606 11:31:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:55.606 11:31:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:55.606 11:31:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:55.606 11:31:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:55.606 11:31:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:55.606 11:31:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.606 11:31:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.606 11:31:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.606 11:31:24 -- paths/export.sh@5 -- # export PATH 00:07:55.606 11:31:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.606 11:31:24 -- nvmf/common.sh@46 -- # : 0 00:07:55.606 11:31:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:55.606 11:31:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:55.606 11:31:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:55.606 11:31:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:55.606 11:31:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:55.606 11:31:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:55.606 11:31:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:55.606 11:31:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:55.606 11:31:24 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:55.606 11:31:24 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:55.606 11:31:24 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:55.606 11:31:24 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:07:55.606 11:31:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:55.606 11:31:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:55.606 11:31:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:55.606 11:31:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:55.606 11:31:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.606 11:31:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:55.606 11:31:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.606 11:31:24 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:55.606 11:31:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:55.606 11:31:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:55.606 11:31:24 -- common/autotest_common.sh@10 -- # set +x 00:08:03.776 11:31:32 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:03.776 11:31:32 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:03.776 11:31:32 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:03.776 11:31:32 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:03.776 11:31:32 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:03.777 11:31:32 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:03.777 11:31:32 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:03.777 11:31:32 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:03.777 11:31:32 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:03.777 11:31:32 -- nvmf/common.sh@295 -- # e810=() 00:08:03.777 11:31:32 -- nvmf/common.sh@295 -- # local -ga e810 00:08:03.777 11:31:32 -- nvmf/common.sh@296 -- # x722=() 00:08:03.777 11:31:32 -- nvmf/common.sh@296 -- # local -ga x722 00:08:03.777 11:31:32 -- nvmf/common.sh@297 -- # mlx=() 00:08:03.777 11:31:32 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:03.777 11:31:32 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:03.777 11:31:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:03.777 11:31:32 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:03.777 11:31:32 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:03.777 11:31:32 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:03.777 11:31:32 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:03.777 11:31:32 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:03.777 11:31:32 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:03.777 11:31:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:03.777 11:31:32 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:03.777 11:31:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:03.777 11:31:32 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:03.777 11:31:32 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:08:03.777 11:31:32 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:08:03.777 11:31:32 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:08:03.777 11:31:32 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:08:03.777 11:31:32 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:08:03.777 11:31:32 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:03.777 11:31:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:03.777 11:31:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:03.777 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:03.777 11:31:32 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:03.777 11:31:32 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:03.777 11:31:32 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:03.777 11:31:32 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:03.777 11:31:32 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:03.777 11:31:32 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:03.777 11:31:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:03.777 11:31:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:03.777 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:03.777 11:31:32 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:03.777 11:31:32 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:03.777 11:31:32 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:03.777 11:31:32 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:03.777 11:31:32 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:03.777 11:31:32 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:03.777 11:31:32 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:03.777 11:31:32 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:08:03.777 11:31:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:03.777 
11:31:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.777 11:31:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:03.777 11:31:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.777 11:31:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:03.777 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:03.777 11:31:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.777 11:31:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:03.777 11:31:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.777 11:31:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:03.777 11:31:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.777 11:31:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:03.777 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:03.777 11:31:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.777 11:31:32 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:03.777 11:31:32 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:03.777 11:31:32 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:03.777 11:31:32 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:08:03.777 11:31:32 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:08:03.777 11:31:32 -- nvmf/common.sh@408 -- # rdma_device_init 00:08:03.777 11:31:32 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:08:03.777 11:31:32 -- nvmf/common.sh@57 -- # uname 00:08:03.777 11:31:32 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:08:03.777 11:31:32 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:08:03.777 11:31:32 -- nvmf/common.sh@62 -- # modprobe ib_core 00:08:03.777 11:31:32 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:08:03.777 11:31:32 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:08:03.777 11:31:32 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:08:03.777 11:31:32 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:08:03.777 11:31:32 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:08:03.777 11:31:32 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:08:03.777 11:31:32 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:03.777 11:31:32 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:08:03.777 11:31:32 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:03.777 11:31:32 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:03.777 11:31:32 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:03.777 11:31:32 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:03.777 11:31:32 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:03.777 11:31:32 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:03.777 11:31:32 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:03.777 11:31:32 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:03.777 11:31:32 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:03.777 11:31:32 -- nvmf/common.sh@104 -- # continue 2 00:08:03.777 11:31:32 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:03.777 11:31:32 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:03.777 11:31:32 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:03.777 11:31:32 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:03.777 11:31:32 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:03.777 11:31:32 -- nvmf/common.sh@103 -- # 
echo mlx_0_1 00:08:03.777 11:31:32 -- nvmf/common.sh@104 -- # continue 2 00:08:03.777 11:31:32 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:03.777 11:31:32 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:08:03.777 11:31:32 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:03.777 11:31:32 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:03.777 11:31:32 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:03.777 11:31:32 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:03.777 11:31:32 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:08:03.777 11:31:32 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:08:03.777 11:31:32 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:08:03.777 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:03.777 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:03.777 altname enp217s0f0np0 00:08:03.777 altname ens818f0np0 00:08:03.777 inet 192.168.100.8/24 scope global mlx_0_0 00:08:03.777 valid_lft forever preferred_lft forever 00:08:03.777 11:31:32 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:03.777 11:31:32 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:08:03.777 11:31:32 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:03.777 11:31:32 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:03.777 11:31:32 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:03.777 11:31:32 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:03.777 11:31:32 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:08:03.777 11:31:32 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:08:03.777 11:31:32 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:08:03.777 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:03.777 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:03.777 altname enp217s0f1np1 00:08:03.777 altname ens818f1np1 00:08:03.777 inet 192.168.100.9/24 scope global mlx_0_1 00:08:03.777 valid_lft forever preferred_lft forever 00:08:03.777 11:31:32 -- nvmf/common.sh@410 -- # return 0 00:08:03.777 11:31:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:03.777 11:31:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:03.777 11:31:32 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:08:03.777 11:31:32 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:08:03.777 11:31:32 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:08:03.777 11:31:32 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:03.777 11:31:32 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:03.777 11:31:32 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:03.777 11:31:32 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:03.777 11:31:32 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:03.777 11:31:32 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:03.777 11:31:32 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:03.777 11:31:32 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:03.777 11:31:32 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:03.777 11:31:32 -- nvmf/common.sh@104 -- # continue 2 00:08:03.777 11:31:32 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:03.777 11:31:32 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:03.777 11:31:32 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:03.777 11:31:32 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:03.777 11:31:32 -- nvmf/common.sh@102 -- 
# [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:03.777 11:31:32 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:03.777 11:31:32 -- nvmf/common.sh@104 -- # continue 2 00:08:03.777 11:31:32 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:03.777 11:31:32 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:08:03.777 11:31:32 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:03.777 11:31:32 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:03.777 11:31:32 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:03.777 11:31:32 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:03.777 11:31:32 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:03.777 11:31:32 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:08:03.777 11:31:32 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:03.777 11:31:32 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:03.777 11:31:32 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:03.777 11:31:32 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:03.777 11:31:33 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:08:03.777 192.168.100.9' 00:08:03.777 11:31:33 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:08:03.777 192.168.100.9' 00:08:03.777 11:31:33 -- nvmf/common.sh@445 -- # head -n 1 00:08:03.777 11:31:33 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:03.777 11:31:33 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:03.777 192.168.100.9' 00:08:03.777 11:31:33 -- nvmf/common.sh@446 -- # tail -n +2 00:08:03.777 11:31:33 -- nvmf/common.sh@446 -- # head -n 1 00:08:03.777 11:31:33 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:03.777 11:31:33 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:08:03.777 11:31:33 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:03.777 11:31:33 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:08:03.777 11:31:33 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:08:03.777 11:31:33 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:08:03.777 11:31:33 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:03.777 11:31:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:03.777 11:31:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:03.777 11:31:33 -- common/autotest_common.sh@10 -- # set +x 00:08:03.777 ************************************ 00:08:03.777 START TEST nvmf_filesystem_no_in_capsule 00:08:03.777 ************************************ 00:08:03.777 11:31:33 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:08:03.777 11:31:33 -- target/filesystem.sh@47 -- # in_capsule=0 00:08:03.777 11:31:33 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:03.777 11:31:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:03.777 11:31:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:03.777 11:31:33 -- common/autotest_common.sh@10 -- # set +x 00:08:03.777 11:31:33 -- nvmf/common.sh@469 -- # nvmfpid=2206054 00:08:03.777 11:31:33 -- nvmf/common.sh@470 -- # waitforlisten 2206054 00:08:03.777 11:31:33 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:03.777 11:31:33 -- common/autotest_common.sh@819 -- # '[' -z 2206054 ']' 00:08:03.777 11:31:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.777 11:31:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:03.777 11:31:33 -- common/autotest_common.sh@826 
-- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.777 11:31:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:03.777 11:31:33 -- common/autotest_common.sh@10 -- # set +x 00:08:03.777 [2024-07-21 11:31:33.115165] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:03.777 [2024-07-21 11:31:33.115223] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.777 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.034 [2024-07-21 11:31:33.203080] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:04.034 [2024-07-21 11:31:33.242511] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:04.034 [2024-07-21 11:31:33.242620] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:04.034 [2024-07-21 11:31:33.242635] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:04.034 [2024-07-21 11:31:33.242643] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:04.034 [2024-07-21 11:31:33.242696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.034 [2024-07-21 11:31:33.242784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:04.034 [2024-07-21 11:31:33.242869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:04.034 [2024-07-21 11:31:33.242871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.598 11:31:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:04.598 11:31:33 -- common/autotest_common.sh@852 -- # return 0 00:08:04.598 11:31:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:04.598 11:31:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:04.598 11:31:33 -- common/autotest_common.sh@10 -- # set +x 00:08:04.598 11:31:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:04.598 11:31:33 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:04.598 11:31:33 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:08:04.598 11:31:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:04.598 11:31:33 -- common/autotest_common.sh@10 -- # set +x 00:08:04.598 [2024-07-21 11:31:33.967054] rdma.c:2778:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:08:04.598 [2024-07-21 11:31:33.989356] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7ea4b0/0x7ee9a0) succeed. 00:08:04.598 [2024-07-21 11:31:33.999736] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x7ebaa0/0x830030) succeed. 
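The trace above covers target bring-up for the zero in-capsule case: nvmf_tgt is launched with -i 0 -e 0xFFFF -m 0xF, the RDMA transport is created, and two mlx5 IB devices come up; the RPCs traced just below then attach the malloc bdev and subsystem. Reproduced outside the harness, the same sequence is roughly the following (a minimal sketch run from an SPDK checkout, assuming the stock scripts/rpc.py client; the harness itself goes through its rpc_cmd wrapper):

    # start the target: shm id 0, all tracepoint groups, 4-core mask
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    # RDMA transport; -c 0 requests no in-capsule data (the target raises it
    # to the 256-byte minimum needed for msdbd=16, per the warning above)
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0

    # 512 MiB malloc bdev with 512-byte blocks, exported through cnode1
    ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420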
00:08:04.855 11:31:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.855 11:31:34 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:04.855 11:31:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:04.855 11:31:34 -- common/autotest_common.sh@10 -- # set +x 00:08:04.855 Malloc1 00:08:04.855 11:31:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.855 11:31:34 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:04.855 11:31:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:04.855 11:31:34 -- common/autotest_common.sh@10 -- # set +x 00:08:04.855 11:31:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.855 11:31:34 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:04.855 11:31:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:04.855 11:31:34 -- common/autotest_common.sh@10 -- # set +x 00:08:04.855 11:31:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.855 11:31:34 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:04.855 11:31:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:04.855 11:31:34 -- common/autotest_common.sh@10 -- # set +x 00:08:04.855 [2024-07-21 11:31:34.241815] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:04.855 11:31:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.855 11:31:34 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:04.855 11:31:34 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:04.855 11:31:34 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:04.855 11:31:34 -- common/autotest_common.sh@1359 -- # local bs 00:08:04.855 11:31:34 -- common/autotest_common.sh@1360 -- # local nb 00:08:04.855 11:31:34 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:04.855 11:31:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:04.855 11:31:34 -- common/autotest_common.sh@10 -- # set +x 00:08:04.855 11:31:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.855 11:31:34 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:04.855 { 00:08:04.855 "name": "Malloc1", 00:08:04.855 "aliases": [ 00:08:04.855 "1b54097c-3fe9-4086-9328-f1d4988912b4" 00:08:04.855 ], 00:08:04.855 "product_name": "Malloc disk", 00:08:04.855 "block_size": 512, 00:08:04.855 "num_blocks": 1048576, 00:08:04.855 "uuid": "1b54097c-3fe9-4086-9328-f1d4988912b4", 00:08:04.855 "assigned_rate_limits": { 00:08:04.855 "rw_ios_per_sec": 0, 00:08:04.855 "rw_mbytes_per_sec": 0, 00:08:04.855 "r_mbytes_per_sec": 0, 00:08:04.855 "w_mbytes_per_sec": 0 00:08:04.855 }, 00:08:04.855 "claimed": true, 00:08:04.855 "claim_type": "exclusive_write", 00:08:04.855 "zoned": false, 00:08:04.855 "supported_io_types": { 00:08:04.855 "read": true, 00:08:04.855 "write": true, 00:08:04.855 "unmap": true, 00:08:04.855 "write_zeroes": true, 00:08:04.855 "flush": true, 00:08:04.855 "reset": true, 00:08:04.855 "compare": false, 00:08:04.855 "compare_and_write": false, 00:08:04.855 "abort": true, 00:08:04.855 "nvme_admin": false, 00:08:04.855 "nvme_io": false 00:08:04.855 }, 00:08:04.855 "memory_domains": [ 00:08:04.855 { 00:08:04.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.855 "dma_device_type": 2 00:08:04.855 } 00:08:04.855 ], 00:08:04.855 
"driver_specific": {} 00:08:04.855 } 00:08:04.855 ]' 00:08:04.855 11:31:34 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:05.112 11:31:34 -- common/autotest_common.sh@1362 -- # bs=512 00:08:05.112 11:31:34 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:05.112 11:31:34 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:05.112 11:31:34 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:05.112 11:31:34 -- common/autotest_common.sh@1367 -- # echo 512 00:08:05.112 11:31:34 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:05.112 11:31:34 -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:06.044 11:31:35 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:06.044 11:31:35 -- common/autotest_common.sh@1177 -- # local i=0 00:08:06.044 11:31:35 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:08:06.044 11:31:35 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:06.044 11:31:35 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:07.937 11:31:37 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:08.194 11:31:37 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:08.194 11:31:37 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:08.194 11:31:37 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:08.194 11:31:37 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:08.194 11:31:37 -- common/autotest_common.sh@1187 -- # return 0 00:08:08.194 11:31:37 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:08.194 11:31:37 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:08.194 11:31:37 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:08.194 11:31:37 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:08.194 11:31:37 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:08.194 11:31:37 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:08.194 11:31:37 -- setup/common.sh@80 -- # echo 536870912 00:08:08.194 11:31:37 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:08.194 11:31:37 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:08.194 11:31:37 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:08.194 11:31:37 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:08.194 11:31:37 -- target/filesystem.sh@69 -- # partprobe 00:08:08.450 11:31:37 -- target/filesystem.sh@70 -- # sleep 1 00:08:09.381 11:31:38 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:09.381 11:31:38 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:09.381 11:31:38 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:09.381 11:31:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:09.381 11:31:38 -- common/autotest_common.sh@10 -- # set +x 00:08:09.381 ************************************ 00:08:09.381 START TEST filesystem_ext4 00:08:09.381 ************************************ 00:08:09.381 11:31:38 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:09.381 11:31:38 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:09.381 11:31:38 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:09.381 
11:31:38 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:09.381 11:31:38 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:09.381 11:31:38 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:09.381 11:31:38 -- common/autotest_common.sh@904 -- # local i=0 00:08:09.381 11:31:38 -- common/autotest_common.sh@905 -- # local force 00:08:09.381 11:31:38 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:09.381 11:31:38 -- common/autotest_common.sh@908 -- # force=-F 00:08:09.381 11:31:38 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:09.381 mke2fs 1.46.5 (30-Dec-2021) 00:08:09.381 Discarding device blocks: 0/522240 done 00:08:09.381 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:09.381 Filesystem UUID: 893118c8-bee1-46a4-8c38-51a04829eaba 00:08:09.381 Superblock backups stored on blocks: 00:08:09.381 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:09.381 00:08:09.381 Allocating group tables: 0/64 done 00:08:09.381 Writing inode tables: 0/64 done 00:08:09.638 Creating journal (8192 blocks): done 00:08:09.638 Writing superblocks and filesystem accounting information: 0/64 done 00:08:09.638 00:08:09.638 11:31:38 -- common/autotest_common.sh@921 -- # return 0 00:08:09.638 11:31:38 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:09.638 11:31:38 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:09.638 11:31:38 -- target/filesystem.sh@25 -- # sync 00:08:09.638 11:31:38 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:09.638 11:31:38 -- target/filesystem.sh@27 -- # sync 00:08:09.638 11:31:38 -- target/filesystem.sh@29 -- # i=0 00:08:09.638 11:31:38 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:09.638 11:31:38 -- target/filesystem.sh@37 -- # kill -0 2206054 00:08:09.638 11:31:38 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:09.638 11:31:38 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:09.638 11:31:38 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:09.638 11:31:38 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:09.638 00:08:09.638 real 0m0.191s 00:08:09.638 user 0m0.032s 00:08:09.638 sys 0m0.074s 00:08:09.638 11:31:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.638 11:31:38 -- common/autotest_common.sh@10 -- # set +x 00:08:09.638 ************************************ 00:08:09.638 END TEST filesystem_ext4 00:08:09.638 ************************************ 00:08:09.638 11:31:38 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:09.638 11:31:38 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:09.638 11:31:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:09.638 11:31:38 -- common/autotest_common.sh@10 -- # set +x 00:08:09.638 ************************************ 00:08:09.638 START TEST filesystem_btrfs 00:08:09.638 ************************************ 00:08:09.638 11:31:38 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:09.638 11:31:38 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:09.638 11:31:38 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:09.638 11:31:38 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:09.638 11:31:38 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:09.638 11:31:38 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:09.638 11:31:38 -- common/autotest_common.sh@904 -- # 
local i=0 00:08:09.638 11:31:38 -- common/autotest_common.sh@905 -- # local force 00:08:09.638 11:31:38 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:09.638 11:31:38 -- common/autotest_common.sh@910 -- # force=-f 00:08:09.638 11:31:38 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:09.896 btrfs-progs v6.6.2 00:08:09.896 See https://btrfs.readthedocs.io for more information. 00:08:09.896 00:08:09.896 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:09.896 NOTE: several default settings have changed in version 5.15, please make sure 00:08:09.896 this does not affect your deployments: 00:08:09.896 - DUP for metadata (-m dup) 00:08:09.896 - enabled no-holes (-O no-holes) 00:08:09.896 - enabled free-space-tree (-R free-space-tree) 00:08:09.896 00:08:09.896 Label: (null) 00:08:09.896 UUID: 5acd3e7d-6d77-4b48-a6a0-b4c4c63a7e86 00:08:09.896 Node size: 16384 00:08:09.896 Sector size: 4096 00:08:09.896 Filesystem size: 510.00MiB 00:08:09.896 Block group profiles: 00:08:09.896 Data: single 8.00MiB 00:08:09.896 Metadata: DUP 32.00MiB 00:08:09.896 System: DUP 8.00MiB 00:08:09.896 SSD detected: yes 00:08:09.896 Zoned device: no 00:08:09.896 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:09.896 Runtime features: free-space-tree 00:08:09.896 Checksum: crc32c 00:08:09.896 Number of devices: 1 00:08:09.896 Devices: 00:08:09.896 ID SIZE PATH 00:08:09.896 1 510.00MiB /dev/nvme0n1p1 00:08:09.896 00:08:09.896 11:31:39 -- common/autotest_common.sh@921 -- # return 0 00:08:09.896 11:31:39 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:09.896 11:31:39 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:09.896 11:31:39 -- target/filesystem.sh@25 -- # sync 00:08:09.896 11:31:39 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:09.896 11:31:39 -- target/filesystem.sh@27 -- # sync 00:08:09.896 11:31:39 -- target/filesystem.sh@29 -- # i=0 00:08:09.896 11:31:39 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:09.896 11:31:39 -- target/filesystem.sh@37 -- # kill -0 2206054 00:08:09.896 11:31:39 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:09.896 11:31:39 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:09.896 11:31:39 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:09.896 11:31:39 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:09.896 00:08:09.896 real 0m0.270s 00:08:09.896 user 0m0.048s 00:08:09.896 sys 0m0.127s 00:08:09.896 11:31:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.896 11:31:39 -- common/autotest_common.sh@10 -- # set +x 00:08:09.896 ************************************ 00:08:09.896 END TEST filesystem_btrfs 00:08:09.896 ************************************ 00:08:09.896 11:31:39 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:09.896 11:31:39 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:09.896 11:31:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:09.896 11:31:39 -- common/autotest_common.sh@10 -- # set +x 00:08:09.896 ************************************ 00:08:09.896 START TEST filesystem_xfs 00:08:09.896 ************************************ 00:08:09.896 11:31:39 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:09.896 11:31:39 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:09.896 11:31:39 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:09.896 11:31:39 -- target/filesystem.sh@21 -- # make_filesystem xfs 
/dev/nvme0n1p1 00:08:09.896 11:31:39 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:09.896 11:31:39 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:09.896 11:31:39 -- common/autotest_common.sh@904 -- # local i=0 00:08:09.896 11:31:39 -- common/autotest_common.sh@905 -- # local force 00:08:09.896 11:31:39 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:09.896 11:31:39 -- common/autotest_common.sh@910 -- # force=-f 00:08:09.896 11:31:39 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:10.155 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:10.155 = sectsz=512 attr=2, projid32bit=1 00:08:10.155 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:10.155 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:10.155 data = bsize=4096 blocks=130560, imaxpct=25 00:08:10.155 = sunit=0 swidth=0 blks 00:08:10.155 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:10.155 log =internal log bsize=4096 blocks=16384, version=2 00:08:10.155 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:10.155 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:10.155 Discarding blocks...Done. 00:08:10.155 11:31:39 -- common/autotest_common.sh@921 -- # return 0 00:08:10.155 11:31:39 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:10.155 11:31:39 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:10.155 11:31:39 -- target/filesystem.sh@25 -- # sync 00:08:10.155 11:31:39 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:10.155 11:31:39 -- target/filesystem.sh@27 -- # sync 00:08:10.155 11:31:39 -- target/filesystem.sh@29 -- # i=0 00:08:10.155 11:31:39 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:10.155 11:31:39 -- target/filesystem.sh@37 -- # kill -0 2206054 00:08:10.155 11:31:39 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:10.155 11:31:39 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:10.155 11:31:39 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:10.155 11:31:39 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:10.155 00:08:10.155 real 0m0.211s 00:08:10.155 user 0m0.041s 00:08:10.155 sys 0m0.073s 00:08:10.155 11:31:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.155 11:31:39 -- common/autotest_common.sh@10 -- # set +x 00:08:10.155 ************************************ 00:08:10.155 END TEST filesystem_xfs 00:08:10.155 ************************************ 00:08:10.155 11:31:39 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:10.155 11:31:39 -- target/filesystem.sh@93 -- # sync 00:08:10.155 11:31:39 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:11.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:11.523 11:31:40 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:11.523 11:31:40 -- common/autotest_common.sh@1198 -- # local i=0 00:08:11.523 11:31:40 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:11.523 11:31:40 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:11.523 11:31:40 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:11.523 11:31:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:11.523 11:31:40 -- common/autotest_common.sh@1210 -- # return 0 00:08:11.523 11:31:40 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:11.523 11:31:40 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:08:11.523 11:31:40 -- common/autotest_common.sh@10 -- # set +x 00:08:11.523 11:31:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:11.523 11:31:40 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:11.523 11:31:40 -- target/filesystem.sh@101 -- # killprocess 2206054 00:08:11.523 11:31:40 -- common/autotest_common.sh@926 -- # '[' -z 2206054 ']' 00:08:11.523 11:31:40 -- common/autotest_common.sh@930 -- # kill -0 2206054 00:08:11.523 11:31:40 -- common/autotest_common.sh@931 -- # uname 00:08:11.523 11:31:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:11.523 11:31:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2206054 00:08:11.523 11:31:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:11.523 11:31:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:11.523 11:31:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2206054' 00:08:11.523 killing process with pid 2206054 00:08:11.523 11:31:40 -- common/autotest_common.sh@945 -- # kill 2206054 00:08:11.523 11:31:40 -- common/autotest_common.sh@950 -- # wait 2206054 00:08:11.779 11:31:40 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:11.779 00:08:11.779 real 0m7.938s 00:08:11.779 user 0m31.010s 00:08:11.779 sys 0m1.189s 00:08:11.779 11:31:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.779 11:31:40 -- common/autotest_common.sh@10 -- # set +x 00:08:11.779 ************************************ 00:08:11.779 END TEST nvmf_filesystem_no_in_capsule 00:08:11.779 ************************************ 00:08:11.779 11:31:41 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:11.779 11:31:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:11.779 11:31:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:11.779 11:31:41 -- common/autotest_common.sh@10 -- # set +x 00:08:11.779 ************************************ 00:08:11.779 START TEST nvmf_filesystem_in_capsule 00:08:11.779 ************************************ 00:08:11.779 11:31:41 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:08:11.779 11:31:41 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:11.779 11:31:41 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:11.779 11:31:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:11.779 11:31:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:11.779 11:31:41 -- common/autotest_common.sh@10 -- # set +x 00:08:11.779 11:31:41 -- nvmf/common.sh@469 -- # nvmfpid=2207618 00:08:11.779 11:31:41 -- nvmf/common.sh@470 -- # waitforlisten 2207618 00:08:11.779 11:31:41 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:11.779 11:31:41 -- common/autotest_common.sh@819 -- # '[' -z 2207618 ']' 00:08:11.779 11:31:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.779 11:31:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:11.779 11:31:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
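The second half of the suite repeats the same filesystem tests with in_capsule=4096. The only target-side difference is the -c argument to nvmf_create_transport; compare the two invocations from this log (a side-by-side extract of commands already traced, not new ones):

    # nvmf_filesystem_no_in_capsule: no in-capsule data requested
    rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
    # nvmf_filesystem_in_capsule: 4096-byte in-capsule data
    rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096

With -c 0 the target warned that it raised the in-capsule size to the 256-byte minimum required for msdbd=16; no such adjustment is logged for the 4096-byte run below.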
00:08:11.779 11:31:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:11.779 11:31:41 -- common/autotest_common.sh@10 -- # set +x 00:08:11.779 [2024-07-21 11:31:41.105480] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:11.779 [2024-07-21 11:31:41.105541] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:11.779 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.779 [2024-07-21 11:31:41.188841] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:12.035 [2024-07-21 11:31:41.223925] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:12.035 [2024-07-21 11:31:41.224053] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:12.035 [2024-07-21 11:31:41.224063] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:12.035 [2024-07-21 11:31:41.224072] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:12.035 [2024-07-21 11:31:41.224126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.035 [2024-07-21 11:31:41.224210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:12.035 [2024-07-21 11:31:41.224279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:12.035 [2024-07-21 11:31:41.224281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.598 11:31:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:12.598 11:31:41 -- common/autotest_common.sh@852 -- # return 0 00:08:12.598 11:31:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:12.598 11:31:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:12.598 11:31:41 -- common/autotest_common.sh@10 -- # set +x 00:08:12.598 11:31:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:12.598 11:31:41 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:12.598 11:31:41 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:08:12.598 11:31:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.598 11:31:41 -- common/autotest_common.sh@10 -- # set +x 00:08:12.598 [2024-07-21 11:31:41.977023] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x143f4b0/0x14439a0) succeed. 00:08:12.598 [2024-07-21 11:31:41.987306] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1440aa0/0x1485030) succeed. 
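Just below, get_bdev_size pulls Malloc1's geometry out of bdev_get_bdevs with jq. As far as this trace shows it, the helper reduces to the following sketch (rpc_cmd is the harness wrapper around scripts/rpc.py; the final MiB conversion is inferred from bs=512, nb=1048576 and the echoed value 512):

    # geometry of the exported bdev, as JSON
    bdev_info=$(rpc_cmd bdev_get_bdevs -b Malloc1)

    bs=$(jq '.[] .block_size' <<< "$bdev_info")    # 512 in this run
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")    # 1048576 in this run

    # 512 * 1048576 bytes = 512 MiB
    echo $(( bs * nb / 1024 / 1024 ))

filesystem.sh then scales the result back to bytes (malloc_size=536870912) and compares it against the nvme_size the host computed from sysfs.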
00:08:12.854 11:31:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.854 11:31:42 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:12.854 11:31:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.854 11:31:42 -- common/autotest_common.sh@10 -- # set +x 00:08:12.854 Malloc1 00:08:12.854 11:31:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.854 11:31:42 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:12.854 11:31:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.854 11:31:42 -- common/autotest_common.sh@10 -- # set +x 00:08:12.854 11:31:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.854 11:31:42 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:12.854 11:31:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.854 11:31:42 -- common/autotest_common.sh@10 -- # set +x 00:08:12.854 11:31:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.854 11:31:42 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:12.854 11:31:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.854 11:31:42 -- common/autotest_common.sh@10 -- # set +x 00:08:12.854 [2024-07-21 11:31:42.248352] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:12.854 11:31:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.854 11:31:42 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:12.854 11:31:42 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:12.854 11:31:42 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:12.854 11:31:42 -- common/autotest_common.sh@1359 -- # local bs 00:08:12.854 11:31:42 -- common/autotest_common.sh@1360 -- # local nb 00:08:12.854 11:31:42 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:12.854 11:31:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.854 11:31:42 -- common/autotest_common.sh@10 -- # set +x 00:08:13.110 11:31:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:13.110 11:31:42 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:13.110 { 00:08:13.110 "name": "Malloc1", 00:08:13.110 "aliases": [ 00:08:13.110 "bf362095-7041-42c4-990a-442d902baf6d" 00:08:13.110 ], 00:08:13.110 "product_name": "Malloc disk", 00:08:13.110 "block_size": 512, 00:08:13.110 "num_blocks": 1048576, 00:08:13.110 "uuid": "bf362095-7041-42c4-990a-442d902baf6d", 00:08:13.110 "assigned_rate_limits": { 00:08:13.110 "rw_ios_per_sec": 0, 00:08:13.110 "rw_mbytes_per_sec": 0, 00:08:13.110 "r_mbytes_per_sec": 0, 00:08:13.110 "w_mbytes_per_sec": 0 00:08:13.110 }, 00:08:13.110 "claimed": true, 00:08:13.110 "claim_type": "exclusive_write", 00:08:13.110 "zoned": false, 00:08:13.110 "supported_io_types": { 00:08:13.110 "read": true, 00:08:13.110 "write": true, 00:08:13.110 "unmap": true, 00:08:13.110 "write_zeroes": true, 00:08:13.110 "flush": true, 00:08:13.110 "reset": true, 00:08:13.110 "compare": false, 00:08:13.110 "compare_and_write": false, 00:08:13.110 "abort": true, 00:08:13.110 "nvme_admin": false, 00:08:13.110 "nvme_io": false 00:08:13.110 }, 00:08:13.110 "memory_domains": [ 00:08:13.110 { 00:08:13.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.110 "dma_device_type": 2 00:08:13.110 } 00:08:13.110 ], 00:08:13.110 
"driver_specific": {} 00:08:13.110 } 00:08:13.110 ]' 00:08:13.110 11:31:42 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:13.110 11:31:42 -- common/autotest_common.sh@1362 -- # bs=512 00:08:13.110 11:31:42 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:13.110 11:31:42 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:13.110 11:31:42 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:13.110 11:31:42 -- common/autotest_common.sh@1367 -- # echo 512 00:08:13.110 11:31:42 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:13.110 11:31:42 -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:14.037 11:31:43 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:14.037 11:31:43 -- common/autotest_common.sh@1177 -- # local i=0 00:08:14.037 11:31:43 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:08:14.037 11:31:43 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:14.037 11:31:43 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:15.978 11:31:45 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:15.978 11:31:45 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:15.978 11:31:45 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:15.978 11:31:45 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:15.978 11:31:45 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:15.978 11:31:45 -- common/autotest_common.sh@1187 -- # return 0 00:08:15.978 11:31:45 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:15.978 11:31:45 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:15.978 11:31:45 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:15.978 11:31:45 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:15.978 11:31:45 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:15.978 11:31:45 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:15.978 11:31:45 -- setup/common.sh@80 -- # echo 536870912 00:08:15.978 11:31:45 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:15.978 11:31:45 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:15.978 11:31:45 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:15.978 11:31:45 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:16.235 11:31:45 -- target/filesystem.sh@69 -- # partprobe 00:08:16.491 11:31:45 -- target/filesystem.sh@70 -- # sleep 1 00:08:17.422 11:31:46 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:17.422 11:31:46 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:17.422 11:31:46 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:17.422 11:31:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:17.422 11:31:46 -- common/autotest_common.sh@10 -- # set +x 00:08:17.422 ************************************ 00:08:17.422 START TEST filesystem_in_capsule_ext4 00:08:17.422 ************************************ 00:08:17.422 11:31:46 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:17.422 11:31:46 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:17.422 11:31:46 -- target/filesystem.sh@19 -- # 
nvme_name=nvme0n1 00:08:17.422 11:31:46 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:17.422 11:31:46 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:17.422 11:31:46 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:17.422 11:31:46 -- common/autotest_common.sh@904 -- # local i=0 00:08:17.422 11:31:46 -- common/autotest_common.sh@905 -- # local force 00:08:17.422 11:31:46 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:17.422 11:31:46 -- common/autotest_common.sh@908 -- # force=-F 00:08:17.422 11:31:46 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:17.422 mke2fs 1.46.5 (30-Dec-2021) 00:08:17.422 Discarding device blocks: 0/522240 done 00:08:17.422 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:17.422 Filesystem UUID: 207e71af-23c7-4eb1-a6cd-e4ce5e984c2c 00:08:17.422 Superblock backups stored on blocks: 00:08:17.422 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:17.422 00:08:17.422 Allocating group tables: 0/64 done 00:08:17.422 Writing inode tables: 0/64 done 00:08:17.422 Creating journal (8192 blocks): done 00:08:17.422 Writing superblocks and filesystem accounting information: 0/64 done 00:08:17.422 00:08:17.422 11:31:46 -- common/autotest_common.sh@921 -- # return 0 00:08:17.422 11:31:46 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:17.422 11:31:46 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:17.422 11:31:46 -- target/filesystem.sh@25 -- # sync 00:08:17.422 11:31:46 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:17.422 11:31:46 -- target/filesystem.sh@27 -- # sync 00:08:17.422 11:31:46 -- target/filesystem.sh@29 -- # i=0 00:08:17.422 11:31:46 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:17.679 11:31:46 -- target/filesystem.sh@37 -- # kill -0 2207618 00:08:17.679 11:31:46 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:17.679 11:31:46 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:17.679 11:31:46 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:17.679 11:31:46 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:17.679 00:08:17.679 real 0m0.190s 00:08:17.679 user 0m0.023s 00:08:17.679 sys 0m0.084s 00:08:17.679 11:31:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.679 11:31:46 -- common/autotest_common.sh@10 -- # set +x 00:08:17.679 ************************************ 00:08:17.679 END TEST filesystem_in_capsule_ext4 00:08:17.679 ************************************ 00:08:17.679 11:31:46 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:17.679 11:31:46 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:17.679 11:31:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:17.679 11:31:46 -- common/autotest_common.sh@10 -- # set +x 00:08:17.679 ************************************ 00:08:17.679 START TEST filesystem_in_capsule_btrfs 00:08:17.679 ************************************ 00:08:17.679 11:31:46 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:17.679 11:31:46 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:17.679 11:31:46 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:17.679 11:31:46 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:17.679 11:31:46 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:17.679 11:31:46 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 
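The make_filesystem trace interrupted here (its remaining locals continue just below) follows the same pattern as every other call in this log, at autotest_common.sh lines 902-913: ext4 gets mkfs's -F force flag, everything else gets -f. Reconstructed from those xtrace lines it looks roughly like this; the retry machinery implied by the i counter is never exercised in this log and is omitted:

    # make_filesystem (reconstruction from the xtrace above, not the exact source)
    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local i=0
        local force
        if [ "$fstype" = ext4 ]; then
            force=-F          # mkfs.ext4 forces with -F
        else
            force=-f          # mkfs.btrfs and mkfs.xfs force with -f
        fi
        mkfs.$fstype $force "$dev_name" && return 0
    }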
00:08:17.679 11:31:46 -- common/autotest_common.sh@904 -- # local i=0 00:08:17.679 11:31:46 -- common/autotest_common.sh@905 -- # local force 00:08:17.679 11:31:46 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:17.679 11:31:46 -- common/autotest_common.sh@910 -- # force=-f 00:08:17.679 11:31:46 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:17.679 btrfs-progs v6.6.2 00:08:17.679 See https://btrfs.readthedocs.io for more information. 00:08:17.679 00:08:17.679 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:17.679 NOTE: several default settings have changed in version 5.15, please make sure 00:08:17.679 this does not affect your deployments: 00:08:17.679 - DUP for metadata (-m dup) 00:08:17.679 - enabled no-holes (-O no-holes) 00:08:17.679 - enabled free-space-tree (-R free-space-tree) 00:08:17.679 00:08:17.679 Label: (null) 00:08:17.679 UUID: c1cac01e-e209-4731-a21e-aae5e9d1db33 00:08:17.679 Node size: 16384 00:08:17.679 Sector size: 4096 00:08:17.679 Filesystem size: 510.00MiB 00:08:17.679 Block group profiles: 00:08:17.679 Data: single 8.00MiB 00:08:17.679 Metadata: DUP 32.00MiB 00:08:17.679 System: DUP 8.00MiB 00:08:17.679 SSD detected: yes 00:08:17.679 Zoned device: no 00:08:17.679 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:17.679 Runtime features: free-space-tree 00:08:17.679 Checksum: crc32c 00:08:17.679 Number of devices: 1 00:08:17.679 Devices: 00:08:17.679 ID SIZE PATH 00:08:17.679 1 510.00MiB /dev/nvme0n1p1 00:08:17.679 00:08:17.679 11:31:47 -- common/autotest_common.sh@921 -- # return 0 00:08:17.679 11:31:47 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:17.937 11:31:47 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:17.937 11:31:47 -- target/filesystem.sh@25 -- # sync 00:08:17.937 11:31:47 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:17.937 11:31:47 -- target/filesystem.sh@27 -- # sync 00:08:17.937 11:31:47 -- target/filesystem.sh@29 -- # i=0 00:08:17.937 11:31:47 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:17.937 11:31:47 -- target/filesystem.sh@37 -- # kill -0 2207618 00:08:17.937 11:31:47 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:17.937 11:31:47 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:17.937 11:31:47 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:17.937 11:31:47 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:17.937 00:08:17.937 real 0m0.267s 00:08:17.937 user 0m0.036s 00:08:17.937 sys 0m0.137s 00:08:17.937 11:31:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.937 11:31:47 -- common/autotest_common.sh@10 -- # set +x 00:08:17.937 ************************************ 00:08:17.937 END TEST filesystem_in_capsule_btrfs 00:08:17.937 ************************************ 00:08:17.937 11:31:47 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:17.937 11:31:47 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:17.937 11:31:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:17.937 11:31:47 -- common/autotest_common.sh@10 -- # set +x 00:08:17.937 ************************************ 00:08:17.937 START TEST filesystem_in_capsule_xfs 00:08:17.937 ************************************ 00:08:17.937 11:31:47 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:17.937 11:31:47 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:17.937 11:31:47 -- target/filesystem.sh@19 -- # 
nvme_name=nvme0n1 00:08:17.937 11:31:47 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:17.937 11:31:47 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:17.937 11:31:47 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:17.937 11:31:47 -- common/autotest_common.sh@904 -- # local i=0 00:08:17.937 11:31:47 -- common/autotest_common.sh@905 -- # local force 00:08:17.937 11:31:47 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:17.937 11:31:47 -- common/autotest_common.sh@910 -- # force=-f 00:08:17.937 11:31:47 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:17.937 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:17.937 = sectsz=512 attr=2, projid32bit=1 00:08:17.937 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:17.937 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:17.937 data = bsize=4096 blocks=130560, imaxpct=25 00:08:17.937 = sunit=0 swidth=0 blks 00:08:17.937 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:17.937 log =internal log bsize=4096 blocks=16384, version=2 00:08:17.937 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:17.937 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:17.937 Discarding blocks...Done. 00:08:17.937 11:31:47 -- common/autotest_common.sh@921 -- # return 0 00:08:17.937 11:31:47 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:18.194 11:31:47 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:18.194 11:31:47 -- target/filesystem.sh@25 -- # sync 00:08:18.194 11:31:47 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:18.194 11:31:47 -- target/filesystem.sh@27 -- # sync 00:08:18.194 11:31:47 -- target/filesystem.sh@29 -- # i=0 00:08:18.194 11:31:47 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:18.194 11:31:47 -- target/filesystem.sh@37 -- # kill -0 2207618 00:08:18.194 11:31:47 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:18.194 11:31:47 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:18.194 11:31:47 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:18.194 11:31:47 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:18.194 00:08:18.194 real 0m0.206s 00:08:18.194 user 0m0.026s 00:08:18.194 sys 0m0.083s 00:08:18.194 11:31:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.194 11:31:47 -- common/autotest_common.sh@10 -- # set +x 00:08:18.194 ************************************ 00:08:18.194 END TEST filesystem_in_capsule_xfs 00:08:18.194 ************************************ 00:08:18.194 11:31:47 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:18.194 11:31:47 -- target/filesystem.sh@93 -- # sync 00:08:18.194 11:31:47 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:19.124 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:19.124 11:31:48 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:19.124 11:31:48 -- common/autotest_common.sh@1198 -- # local i=0 00:08:19.124 11:31:48 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:19.124 11:31:48 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:19.124 11:31:48 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:19.124 11:31:48 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:19.124 11:31:48 -- common/autotest_common.sh@1210 -- # return 0 00:08:19.124 11:31:48 -- target/filesystem.sh@97 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:19.124 11:31:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:19.124 11:31:48 -- common/autotest_common.sh@10 -- # set +x 00:08:19.124 11:31:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:19.124 11:31:48 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:19.124 11:31:48 -- target/filesystem.sh@101 -- # killprocess 2207618 00:08:19.124 11:31:48 -- common/autotest_common.sh@926 -- # '[' -z 2207618 ']' 00:08:19.124 11:31:48 -- common/autotest_common.sh@930 -- # kill -0 2207618 00:08:19.124 11:31:48 -- common/autotest_common.sh@931 -- # uname 00:08:19.124 11:31:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:19.124 11:31:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2207618 00:08:19.383 11:31:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:19.383 11:31:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:19.383 11:31:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2207618' 00:08:19.383 killing process with pid 2207618 00:08:19.383 11:31:48 -- common/autotest_common.sh@945 -- # kill 2207618 00:08:19.383 11:31:48 -- common/autotest_common.sh@950 -- # wait 2207618 00:08:19.653 11:31:48 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:19.653 00:08:19.653 real 0m7.923s 00:08:19.653 user 0m30.889s 00:08:19.653 sys 0m1.229s 00:08:19.653 11:31:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.653 11:31:48 -- common/autotest_common.sh@10 -- # set +x 00:08:19.653 ************************************ 00:08:19.653 END TEST nvmf_filesystem_in_capsule 00:08:19.653 ************************************ 00:08:19.653 11:31:49 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:19.653 11:31:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:19.653 11:31:49 -- nvmf/common.sh@116 -- # sync 00:08:19.653 11:31:49 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:08:19.653 11:31:49 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:08:19.653 11:31:49 -- nvmf/common.sh@119 -- # set +e 00:08:19.653 11:31:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:19.653 11:31:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:08:19.653 rmmod nvme_rdma 00:08:19.653 rmmod nvme_fabrics 00:08:19.653 11:31:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:19.653 11:31:49 -- nvmf/common.sh@123 -- # set -e 00:08:19.653 11:31:49 -- nvmf/common.sh@124 -- # return 0 00:08:19.653 11:31:49 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:19.653 11:31:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:19.653 11:31:49 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:08:19.653 00:08:19.653 real 0m24.746s 00:08:19.653 user 1m4.438s 00:08:19.653 sys 0m9.044s 00:08:19.653 11:31:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.653 11:31:49 -- common/autotest_common.sh@10 -- # set +x 00:08:19.653 ************************************ 00:08:19.653 END TEST nvmf_filesystem 00:08:19.653 ************************************ 00:08:19.922 11:31:49 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:08:19.922 11:31:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:19.922 11:31:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:19.922 11:31:49 -- common/autotest_common.sh@10 -- # set +x 00:08:19.922 ************************************ 00:08:19.922 START TEST nvmf_discovery 00:08:19.922 
************************************ 00:08:19.922 11:31:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:08:19.922 * Looking for test storage... 00:08:19.922 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:19.922 11:31:49 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:19.922 11:31:49 -- nvmf/common.sh@7 -- # uname -s 00:08:19.922 11:31:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:19.922 11:31:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.922 11:31:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.922 11:31:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.922 11:31:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.922 11:31:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.922 11:31:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.922 11:31:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.922 11:31:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.922 11:31:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.922 11:31:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:19.922 11:31:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:19.922 11:31:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.922 11:31:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.922 11:31:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:19.922 11:31:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:19.922 11:31:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.922 11:31:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.922 11:31:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.922 11:31:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.922 11:31:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.922 11:31:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.922 11:31:49 -- paths/export.sh@5 -- # export PATH 00:08:19.922 11:31:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.922 11:31:49 -- nvmf/common.sh@46 -- # : 0 00:08:19.922 11:31:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:19.922 11:31:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:19.922 11:31:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:19.922 11:31:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.922 11:31:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.922 11:31:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:19.922 11:31:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:19.922 11:31:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:19.922 11:31:49 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:19.922 11:31:49 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:19.922 11:31:49 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:19.922 11:31:49 -- target/discovery.sh@15 -- # hash nvme 00:08:19.922 11:31:49 -- target/discovery.sh@20 -- # nvmftestinit 00:08:19.922 11:31:49 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:08:19.922 11:31:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:19.922 11:31:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:19.922 11:31:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:19.922 11:31:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:19.922 11:31:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.922 11:31:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:19.922 11:31:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.922 11:31:49 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:19.922 11:31:49 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:19.922 11:31:49 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:19.922 11:31:49 -- common/autotest_common.sh@10 -- # set +x 00:08:28.021 11:31:57 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:28.021 11:31:57 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:28.021 11:31:57 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:28.021 11:31:57 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:28.021 11:31:57 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:28.021 11:31:57 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:28.021 11:31:57 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:28.021 11:31:57 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:28.021 11:31:57 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:28.021 11:31:57 -- nvmf/common.sh@295 -- # e810=() 00:08:28.021 11:31:57 -- nvmf/common.sh@295 -- # local -ga e810 00:08:28.021 11:31:57 -- nvmf/common.sh@296 -- # x722=() 00:08:28.021 11:31:57 -- nvmf/common.sh@296 -- # local -ga x722 00:08:28.021 11:31:57 -- nvmf/common.sh@297 -- # mlx=() 00:08:28.021 11:31:57 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:28.021 11:31:57 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:28.021 11:31:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:28.021 11:31:57 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:28.021 11:31:57 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:28.021 11:31:57 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:28.021 11:31:57 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:28.021 11:31:57 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:28.021 11:31:57 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:28.021 11:31:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:28.021 11:31:57 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:28.021 11:31:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:28.021 11:31:57 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:28.021 11:31:57 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:08:28.021 11:31:57 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:08:28.021 11:31:57 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:08:28.021 11:31:57 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:08:28.021 11:31:57 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:08:28.021 11:31:57 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:28.021 11:31:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:28.021 11:31:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:28.021 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:28.021 11:31:57 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:28.021 11:31:57 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:28.021 11:31:57 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:28.021 11:31:57 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:28.021 11:31:57 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:28.021 11:31:57 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:28.021 11:31:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:28.021 11:31:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:28.021 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:28.021 11:31:57 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:28.021 11:31:57 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:28.021 11:31:57 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:28.021 11:31:57 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:28.021 11:31:57 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:28.021 11:31:57 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:28.021 11:31:57 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:28.021 11:31:57 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:08:28.021 11:31:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:28.021 
11:31:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.021 11:31:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:28.021 11:31:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.021 11:31:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:28.021 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:28.021 11:31:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.021 11:31:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:28.021 11:31:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.021 11:31:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:28.021 11:31:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.021 11:31:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:28.021 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:28.021 11:31:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.021 11:31:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:28.021 11:31:57 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:28.021 11:31:57 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:28.021 11:31:57 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:08:28.021 11:31:57 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:08:28.021 11:31:57 -- nvmf/common.sh@408 -- # rdma_device_init 00:08:28.021 11:31:57 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:08:28.021 11:31:57 -- nvmf/common.sh@57 -- # uname 00:08:28.021 11:31:57 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:08:28.021 11:31:57 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:08:28.021 11:31:57 -- nvmf/common.sh@62 -- # modprobe ib_core 00:08:28.021 11:31:57 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:08:28.021 11:31:57 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:08:28.021 11:31:57 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:08:28.021 11:31:57 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:08:28.022 11:31:57 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:08:28.022 11:31:57 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:08:28.022 11:31:57 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:28.022 11:31:57 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:08:28.022 11:31:57 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:28.022 11:31:57 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:28.022 11:31:57 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:28.022 11:31:57 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:28.022 11:31:57 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:28.022 11:31:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:28.022 11:31:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:28.022 11:31:57 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:28.022 11:31:57 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:28.022 11:31:57 -- nvmf/common.sh@104 -- # continue 2 00:08:28.022 11:31:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:28.022 11:31:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:28.022 11:31:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:28.022 11:31:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:28.022 11:31:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:28.022 11:31:57 -- nvmf/common.sh@103 -- # 
echo mlx_0_1 00:08:28.022 11:31:57 -- nvmf/common.sh@104 -- # continue 2 00:08:28.022 11:31:57 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:28.022 11:31:57 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:08:28.022 11:31:57 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:28.022 11:31:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:28.022 11:31:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:28.022 11:31:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:28.022 11:31:57 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:08:28.022 11:31:57 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:08:28.022 11:31:57 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:08:28.022 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:28.022 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:28.022 altname enp217s0f0np0 00:08:28.022 altname ens818f0np0 00:08:28.022 inet 192.168.100.8/24 scope global mlx_0_0 00:08:28.022 valid_lft forever preferred_lft forever 00:08:28.022 11:31:57 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:28.022 11:31:57 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:08:28.022 11:31:57 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:28.022 11:31:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:28.022 11:31:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:28.022 11:31:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:28.022 11:31:57 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:08:28.022 11:31:57 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:08:28.022 11:31:57 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:08:28.022 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:28.022 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:28.022 altname enp217s0f1np1 00:08:28.022 altname ens818f1np1 00:08:28.022 inet 192.168.100.9/24 scope global mlx_0_1 00:08:28.022 valid_lft forever preferred_lft forever 00:08:28.022 11:31:57 -- nvmf/common.sh@410 -- # return 0 00:08:28.022 11:31:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:28.022 11:31:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:28.022 11:31:57 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:08:28.022 11:31:57 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:08:28.022 11:31:57 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:08:28.022 11:31:57 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:28.022 11:31:57 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:28.022 11:31:57 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:28.022 11:31:57 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:28.022 11:31:57 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:28.022 11:31:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:28.022 11:31:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:28.022 11:31:57 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:28.022 11:31:57 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:28.022 11:31:57 -- nvmf/common.sh@104 -- # continue 2 00:08:28.022 11:31:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:28.022 11:31:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:28.022 11:31:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:28.022 11:31:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:28.022 11:31:57 -- nvmf/common.sh@102 -- 
# [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:28.022 11:31:57 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:28.022 11:31:57 -- nvmf/common.sh@104 -- # continue 2 00:08:28.022 11:31:57 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:28.022 11:31:57 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:08:28.022 11:31:57 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:28.022 11:31:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:28.022 11:31:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:28.022 11:31:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:28.022 11:31:57 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:28.022 11:31:57 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:08:28.022 11:31:57 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:28.022 11:31:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:28.022 11:31:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:28.022 11:31:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:28.022 11:31:57 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:08:28.022 192.168.100.9' 00:08:28.022 11:31:57 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:08:28.022 192.168.100.9' 00:08:28.022 11:31:57 -- nvmf/common.sh@445 -- # head -n 1 00:08:28.022 11:31:57 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:28.022 11:31:57 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:28.022 192.168.100.9' 00:08:28.022 11:31:57 -- nvmf/common.sh@446 -- # tail -n +2 00:08:28.022 11:31:57 -- nvmf/common.sh@446 -- # head -n 1 00:08:28.022 11:31:57 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:28.022 11:31:57 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:08:28.022 11:31:57 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:28.022 11:31:57 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:08:28.022 11:31:57 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:08:28.022 11:31:57 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:08:28.279 11:31:57 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:28.279 11:31:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:28.279 11:31:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:28.279 11:31:57 -- common/autotest_common.sh@10 -- # set +x 00:08:28.279 11:31:57 -- nvmf/common.sh@469 -- # nvmfpid=2213122 00:08:28.279 11:31:57 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:28.279 11:31:57 -- nvmf/common.sh@470 -- # waitforlisten 2213122 00:08:28.279 11:31:57 -- common/autotest_common.sh@819 -- # '[' -z 2213122 ']' 00:08:28.279 11:31:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.279 11:31:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:28.279 11:31:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.279 11:31:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:28.279 11:31:57 -- common/autotest_common.sh@10 -- # set +x 00:08:28.279 [2024-07-21 11:31:57.498555] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
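The nvmfappstart call above launches build/bin/nvmf_tgt with -i 0 -e 0xFFFF -m 0xF and then blocks in waitforlisten until the app answers on its RPC socket; the EAL parameter line and reactor notices that follow are that startup completing. Roughly the same thing can be done by hand; a minimal sketch, assuming the build-tree paths shown in this log and that polling rpc.py's rpc_get_methods is an adequate readiness probe (which is approximately what waitforlisten does):

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  # same flags as the trace above: shm id 0, tracepoint mask 0xFFFF, core mask 0xF
  $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll the default RPC socket (/var/tmp/spdk.sock) until the target is up
  until $SPDK/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done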
00:08:28.279 [2024-07-21 11:31:57.498607] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.279 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.279 [2024-07-21 11:31:57.584186] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:28.279 [2024-07-21 11:31:57.622335] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:28.279 [2024-07-21 11:31:57.622464] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:28.279 [2024-07-21 11:31:57.622474] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:28.279 [2024-07-21 11:31:57.622483] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:28.279 [2024-07-21 11:31:57.622534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.279 [2024-07-21 11:31:57.622636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:28.279 [2024-07-21 11:31:57.622689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:28.279 [2024-07-21 11:31:57.622691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.871 11:31:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:28.871 11:31:58 -- common/autotest_common.sh@852 -- # return 0 00:08:28.871 11:31:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:28.871 11:31:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:28.871 11:31:58 -- common/autotest_common.sh@10 -- # set +x 00:08:29.128 11:31:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:29.128 11:31:58 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:29.129 11:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.129 11:31:58 -- common/autotest_common.sh@10 -- # set +x 00:08:29.129 [2024-07-21 11:31:58.362881] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7384b0/0x73c9a0) succeed. 00:08:29.129 [2024-07-21 11:31:58.373329] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x739aa0/0x77e030) succeed. 
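With the rdma transport created and both mlx5 IB devices registered above, the test provisions four null-bdev subsystems plus a discovery referral. The rpc_cmd calls traced below map onto plain scripts/rpc.py invocations; a minimal sketch replaying the same sequence, assuming a target already listening on the default /var/tmp/spdk.sock (command names and arguments taken from the trace):

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"
  for i in $(seq 1 4); do
      # null bdev with the same size/block-size arguments as the test
      $RPC bdev_null_create Null$i 102400 512
      # subsystem allowing any host (-a), with a fixed serial number (-s)
      $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
      $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
      $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
  done
  # discovery listener, plus a referral record pointing at port 4430
  $RPC nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
  $RPC nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430

An nvme discover against port 4420 should then report six records, as it does further down in this log: the current discovery subsystem, cnode1 through cnode4, and the 4430 referral.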
00:08:29.129 11:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.129 11:31:58 -- target/discovery.sh@26 -- # seq 1 4 00:08:29.129 11:31:58 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:29.129 11:31:58 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:29.129 11:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.129 11:31:58 -- common/autotest_common.sh@10 -- # set +x 00:08:29.129 Null1 00:08:29.129 11:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.129 11:31:58 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:29.129 11:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.129 11:31:58 -- common/autotest_common.sh@10 -- # set +x 00:08:29.129 11:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.129 11:31:58 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:29.129 11:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.129 11:31:58 -- common/autotest_common.sh@10 -- # set +x 00:08:29.129 11:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.129 11:31:58 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:29.129 11:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.129 11:31:58 -- common/autotest_common.sh@10 -- # set +x 00:08:29.129 [2024-07-21 11:31:58.537995] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:29.129 11:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.129 11:31:58 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:29.129 11:31:58 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:29.129 11:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.129 11:31:58 -- common/autotest_common.sh@10 -- # set +x 00:08:29.129 Null2 00:08:29.129 11:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.385 11:31:58 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:29.385 11:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.385 11:31:58 -- common/autotest_common.sh@10 -- # set +x 00:08:29.385 11:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.385 11:31:58 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:29.385 11:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.385 11:31:58 -- common/autotest_common.sh@10 -- # set +x 00:08:29.385 11:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.385 11:31:58 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:08:29.385 11:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.385 11:31:58 -- common/autotest_common.sh@10 -- # set +x 00:08:29.385 11:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.385 11:31:58 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:29.385 11:31:58 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:29.385 11:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.385 11:31:58 -- common/autotest_common.sh@10 -- # set +x 00:08:29.385 Null3 00:08:29.385 11:31:58 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:08:29.385 11:31:58 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:29.385 11:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.385 11:31:58 -- common/autotest_common.sh@10 -- # set +x 00:08:29.385 11:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.385 11:31:58 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:29.385 11:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.385 11:31:58 -- common/autotest_common.sh@10 -- # set +x 00:08:29.385 11:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.385 11:31:58 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:08:29.385 11:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.385 11:31:58 -- common/autotest_common.sh@10 -- # set +x 00:08:29.385 11:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.385 11:31:58 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:29.385 11:31:58 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:29.385 11:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.385 11:31:58 -- common/autotest_common.sh@10 -- # set +x 00:08:29.385 Null4 00:08:29.385 11:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.385 11:31:58 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:29.385 11:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.385 11:31:58 -- common/autotest_common.sh@10 -- # set +x 00:08:29.385 11:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.385 11:31:58 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:29.385 11:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.385 11:31:58 -- common/autotest_common.sh@10 -- # set +x 00:08:29.385 11:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.385 11:31:58 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:08:29.385 11:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.385 11:31:58 -- common/autotest_common.sh@10 -- # set +x 00:08:29.385 11:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.385 11:31:58 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:29.385 11:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.385 11:31:58 -- common/autotest_common.sh@10 -- # set +x 00:08:29.385 11:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.385 11:31:58 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:08:29.385 11:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.385 11:31:58 -- common/autotest_common.sh@10 -- # set +x 00:08:29.385 11:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.385 11:31:58 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:08:29.385 00:08:29.385 Discovery Log Number of Records 6, Generation counter 6 00:08:29.385 =====Discovery Log Entry 0====== 00:08:29.385 trtype: 
rdma 00:08:29.385 adrfam: ipv4 00:08:29.385 subtype: current discovery subsystem 00:08:29.385 treq: not required 00:08:29.385 portid: 0 00:08:29.385 trsvcid: 4420 00:08:29.385 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:29.385 traddr: 192.168.100.8 00:08:29.385 eflags: explicit discovery connections, duplicate discovery information 00:08:29.385 rdma_prtype: not specified 00:08:29.385 rdma_qptype: connected 00:08:29.385 rdma_cms: rdma-cm 00:08:29.385 rdma_pkey: 0x0000 00:08:29.385 =====Discovery Log Entry 1====== 00:08:29.385 trtype: rdma 00:08:29.385 adrfam: ipv4 00:08:29.385 subtype: nvme subsystem 00:08:29.385 treq: not required 00:08:29.385 portid: 0 00:08:29.385 trsvcid: 4420 00:08:29.385 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:29.385 traddr: 192.168.100.8 00:08:29.385 eflags: none 00:08:29.385 rdma_prtype: not specified 00:08:29.385 rdma_qptype: connected 00:08:29.385 rdma_cms: rdma-cm 00:08:29.385 rdma_pkey: 0x0000 00:08:29.385 =====Discovery Log Entry 2====== 00:08:29.385 trtype: rdma 00:08:29.385 adrfam: ipv4 00:08:29.385 subtype: nvme subsystem 00:08:29.385 treq: not required 00:08:29.385 portid: 0 00:08:29.385 trsvcid: 4420 00:08:29.385 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:29.385 traddr: 192.168.100.8 00:08:29.385 eflags: none 00:08:29.385 rdma_prtype: not specified 00:08:29.385 rdma_qptype: connected 00:08:29.385 rdma_cms: rdma-cm 00:08:29.385 rdma_pkey: 0x0000 00:08:29.385 =====Discovery Log Entry 3====== 00:08:29.385 trtype: rdma 00:08:29.385 adrfam: ipv4 00:08:29.385 subtype: nvme subsystem 00:08:29.385 treq: not required 00:08:29.385 portid: 0 00:08:29.385 trsvcid: 4420 00:08:29.385 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:29.385 traddr: 192.168.100.8 00:08:29.385 eflags: none 00:08:29.385 rdma_prtype: not specified 00:08:29.385 rdma_qptype: connected 00:08:29.385 rdma_cms: rdma-cm 00:08:29.385 rdma_pkey: 0x0000 00:08:29.385 =====Discovery Log Entry 4====== 00:08:29.385 trtype: rdma 00:08:29.385 adrfam: ipv4 00:08:29.385 subtype: nvme subsystem 00:08:29.385 treq: not required 00:08:29.385 portid: 0 00:08:29.385 trsvcid: 4420 00:08:29.385 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:29.385 traddr: 192.168.100.8 00:08:29.385 eflags: none 00:08:29.385 rdma_prtype: not specified 00:08:29.385 rdma_qptype: connected 00:08:29.385 rdma_cms: rdma-cm 00:08:29.385 rdma_pkey: 0x0000 00:08:29.385 =====Discovery Log Entry 5====== 00:08:29.385 trtype: rdma 00:08:29.385 adrfam: ipv4 00:08:29.385 subtype: discovery subsystem referral 00:08:29.385 treq: not required 00:08:29.385 portid: 0 00:08:29.385 trsvcid: 4430 00:08:29.385 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:29.385 traddr: 192.168.100.8 00:08:29.385 eflags: none 00:08:29.385 rdma_prtype: unrecognized 00:08:29.385 rdma_qptype: unrecognized 00:08:29.385 rdma_cms: unrecognized 00:08:29.385 rdma_pkey: 0x0000 00:08:29.385 11:31:58 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:29.385 Perform nvmf subsystem discovery via RPC 00:08:29.385 11:31:58 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:29.385 11:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.385 11:31:58 -- common/autotest_common.sh@10 -- # set +x 00:08:29.385 [2024-07-21 11:31:58.770493] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:29.385 [ 00:08:29.385 { 00:08:29.385 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:29.385 "subtype": "Discovery", 
00:08:29.385 "listen_addresses": [ 00:08:29.385 { 00:08:29.385 "transport": "RDMA", 00:08:29.385 "trtype": "RDMA", 00:08:29.385 "adrfam": "IPv4", 00:08:29.385 "traddr": "192.168.100.8", 00:08:29.385 "trsvcid": "4420" 00:08:29.385 } 00:08:29.385 ], 00:08:29.385 "allow_any_host": true, 00:08:29.385 "hosts": [] 00:08:29.385 }, 00:08:29.385 { 00:08:29.385 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:29.385 "subtype": "NVMe", 00:08:29.385 "listen_addresses": [ 00:08:29.385 { 00:08:29.385 "transport": "RDMA", 00:08:29.385 "trtype": "RDMA", 00:08:29.385 "adrfam": "IPv4", 00:08:29.385 "traddr": "192.168.100.8", 00:08:29.385 "trsvcid": "4420" 00:08:29.385 } 00:08:29.385 ], 00:08:29.385 "allow_any_host": true, 00:08:29.385 "hosts": [], 00:08:29.385 "serial_number": "SPDK00000000000001", 00:08:29.385 "model_number": "SPDK bdev Controller", 00:08:29.385 "max_namespaces": 32, 00:08:29.385 "min_cntlid": 1, 00:08:29.385 "max_cntlid": 65519, 00:08:29.385 "namespaces": [ 00:08:29.385 { 00:08:29.385 "nsid": 1, 00:08:29.385 "bdev_name": "Null1", 00:08:29.385 "name": "Null1", 00:08:29.385 "nguid": "9A9E4818BE704C64AFA089214B0D4C68", 00:08:29.385 "uuid": "9a9e4818-be70-4c64-afa0-89214b0d4c68" 00:08:29.385 } 00:08:29.385 ] 00:08:29.385 }, 00:08:29.385 { 00:08:29.385 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:29.385 "subtype": "NVMe", 00:08:29.385 "listen_addresses": [ 00:08:29.385 { 00:08:29.385 "transport": "RDMA", 00:08:29.385 "trtype": "RDMA", 00:08:29.385 "adrfam": "IPv4", 00:08:29.385 "traddr": "192.168.100.8", 00:08:29.385 "trsvcid": "4420" 00:08:29.385 } 00:08:29.385 ], 00:08:29.385 "allow_any_host": true, 00:08:29.385 "hosts": [], 00:08:29.385 "serial_number": "SPDK00000000000002", 00:08:29.385 "model_number": "SPDK bdev Controller", 00:08:29.385 "max_namespaces": 32, 00:08:29.385 "min_cntlid": 1, 00:08:29.385 "max_cntlid": 65519, 00:08:29.385 "namespaces": [ 00:08:29.385 { 00:08:29.385 "nsid": 1, 00:08:29.385 "bdev_name": "Null2", 00:08:29.385 "name": "Null2", 00:08:29.385 "nguid": "AB96EE985C364E8BB63123CDFD03AB6D", 00:08:29.385 "uuid": "ab96ee98-5c36-4e8b-b631-23cdfd03ab6d" 00:08:29.385 } 00:08:29.385 ] 00:08:29.385 }, 00:08:29.385 { 00:08:29.385 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:29.385 "subtype": "NVMe", 00:08:29.385 "listen_addresses": [ 00:08:29.385 { 00:08:29.385 "transport": "RDMA", 00:08:29.385 "trtype": "RDMA", 00:08:29.385 "adrfam": "IPv4", 00:08:29.385 "traddr": "192.168.100.8", 00:08:29.385 "trsvcid": "4420" 00:08:29.385 } 00:08:29.385 ], 00:08:29.385 "allow_any_host": true, 00:08:29.385 "hosts": [], 00:08:29.385 "serial_number": "SPDK00000000000003", 00:08:29.385 "model_number": "SPDK bdev Controller", 00:08:29.385 "max_namespaces": 32, 00:08:29.385 "min_cntlid": 1, 00:08:29.385 "max_cntlid": 65519, 00:08:29.385 "namespaces": [ 00:08:29.385 { 00:08:29.385 "nsid": 1, 00:08:29.385 "bdev_name": "Null3", 00:08:29.385 "name": "Null3", 00:08:29.385 "nguid": "8BC69BC1666541B6967A8B6EDDC1A55B", 00:08:29.385 "uuid": "8bc69bc1-6665-41b6-967a-8b6eddc1a55b" 00:08:29.385 } 00:08:29.385 ] 00:08:29.385 }, 00:08:29.385 { 00:08:29.385 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:29.385 "subtype": "NVMe", 00:08:29.385 "listen_addresses": [ 00:08:29.385 { 00:08:29.385 "transport": "RDMA", 00:08:29.385 "trtype": "RDMA", 00:08:29.385 "adrfam": "IPv4", 00:08:29.385 "traddr": "192.168.100.8", 00:08:29.385 "trsvcid": "4420" 00:08:29.385 } 00:08:29.385 ], 00:08:29.385 "allow_any_host": true, 00:08:29.385 "hosts": [], 00:08:29.385 "serial_number": "SPDK00000000000004", 00:08:29.385 "model_number": "SPDK bdev 
Controller", 00:08:29.385 "max_namespaces": 32, 00:08:29.385 "min_cntlid": 1, 00:08:29.385 "max_cntlid": 65519, 00:08:29.385 "namespaces": [ 00:08:29.385 { 00:08:29.385 "nsid": 1, 00:08:29.385 "bdev_name": "Null4", 00:08:29.385 "name": "Null4", 00:08:29.385 "nguid": "622AFB42EFC344BAABDE0F07E66178D4", 00:08:29.385 "uuid": "622afb42-efc3-44ba-abde-0f07e66178d4" 00:08:29.385 } 00:08:29.385 ] 00:08:29.385 } 00:08:29.385 ] 00:08:29.385 11:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.385 11:31:58 -- target/discovery.sh@42 -- # seq 1 4 00:08:29.642 11:31:58 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:29.642 11:31:58 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:29.642 11:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.642 11:31:58 -- common/autotest_common.sh@10 -- # set +x 00:08:29.642 11:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.642 11:31:58 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:29.642 11:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.642 11:31:58 -- common/autotest_common.sh@10 -- # set +x 00:08:29.642 11:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.642 11:31:58 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:29.642 11:31:58 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:29.642 11:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.642 11:31:58 -- common/autotest_common.sh@10 -- # set +x 00:08:29.642 11:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.642 11:31:58 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:29.642 11:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.642 11:31:58 -- common/autotest_common.sh@10 -- # set +x 00:08:29.642 11:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.642 11:31:58 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:29.642 11:31:58 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:29.642 11:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.642 11:31:58 -- common/autotest_common.sh@10 -- # set +x 00:08:29.642 11:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.642 11:31:58 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:29.642 11:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.642 11:31:58 -- common/autotest_common.sh@10 -- # set +x 00:08:29.642 11:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.642 11:31:58 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:29.642 11:31:58 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:29.642 11:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.642 11:31:58 -- common/autotest_common.sh@10 -- # set +x 00:08:29.642 11:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.642 11:31:58 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:29.642 11:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.642 11:31:58 -- common/autotest_common.sh@10 -- # set +x 00:08:29.642 11:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.642 11:31:58 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:08:29.642 11:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.642 
11:31:58 -- common/autotest_common.sh@10 -- # set +x 00:08:29.642 11:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.642 11:31:58 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:29.642 11:31:58 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:29.642 11:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.642 11:31:58 -- common/autotest_common.sh@10 -- # set +x 00:08:29.642 11:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.642 11:31:58 -- target/discovery.sh@49 -- # check_bdevs= 00:08:29.642 11:31:58 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:29.642 11:31:58 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:29.642 11:31:58 -- target/discovery.sh@57 -- # nvmftestfini 00:08:29.642 11:31:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:29.642 11:31:58 -- nvmf/common.sh@116 -- # sync 00:08:29.642 11:31:58 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:08:29.642 11:31:58 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:08:29.642 11:31:58 -- nvmf/common.sh@119 -- # set +e 00:08:29.642 11:31:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:29.642 11:31:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:08:29.642 rmmod nvme_rdma 00:08:29.642 rmmod nvme_fabrics 00:08:29.642 11:31:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:29.642 11:31:58 -- nvmf/common.sh@123 -- # set -e 00:08:29.642 11:31:58 -- nvmf/common.sh@124 -- # return 0 00:08:29.642 11:31:58 -- nvmf/common.sh@477 -- # '[' -n 2213122 ']' 00:08:29.642 11:31:58 -- nvmf/common.sh@478 -- # killprocess 2213122 00:08:29.642 11:31:58 -- common/autotest_common.sh@926 -- # '[' -z 2213122 ']' 00:08:29.642 11:31:58 -- common/autotest_common.sh@930 -- # kill -0 2213122 00:08:29.642 11:31:58 -- common/autotest_common.sh@931 -- # uname 00:08:29.642 11:31:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:29.642 11:31:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2213122 00:08:29.642 11:31:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:29.642 11:31:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:29.642 11:31:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2213122' 00:08:29.642 killing process with pid 2213122 00:08:29.642 11:31:59 -- common/autotest_common.sh@945 -- # kill 2213122 00:08:29.642 [2024-07-21 11:31:59.053206] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:29.642 11:31:59 -- common/autotest_common.sh@950 -- # wait 2213122 00:08:29.900 11:31:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:29.900 11:31:59 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:08:29.900 00:08:29.900 real 0m10.193s 00:08:29.900 user 0m9.019s 00:08:29.900 sys 0m6.728s 00:08:29.900 11:31:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.900 11:31:59 -- common/autotest_common.sh@10 -- # set +x 00:08:29.900 ************************************ 00:08:29.900 END TEST nvmf_discovery 00:08:29.900 ************************************ 00:08:30.157 11:31:59 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:08:30.157 11:31:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:30.157 11:31:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:30.157 11:31:59 -- 
common/autotest_common.sh@10 -- # set +x 00:08:30.157 ************************************ 00:08:30.157 START TEST nvmf_referrals 00:08:30.157 ************************************ 00:08:30.157 11:31:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:08:30.157 * Looking for test storage... 00:08:30.157 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:30.157 11:31:59 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:30.157 11:31:59 -- nvmf/common.sh@7 -- # uname -s 00:08:30.157 11:31:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.157 11:31:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.157 11:31:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.157 11:31:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.157 11:31:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.157 11:31:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.157 11:31:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.157 11:31:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.157 11:31:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.157 11:31:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.157 11:31:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:30.157 11:31:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:30.157 11:31:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.157 11:31:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.157 11:31:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:30.157 11:31:59 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:30.157 11:31:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.157 11:31:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.157 11:31:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.157 11:31:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.157 11:31:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.157 11:31:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.157 11:31:59 -- paths/export.sh@5 -- # export PATH 00:08:30.157 11:31:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.157 11:31:59 -- nvmf/common.sh@46 -- # : 0 00:08:30.157 11:31:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:30.157 11:31:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:30.157 11:31:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:30.157 11:31:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:30.157 11:31:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.157 11:31:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:30.157 11:31:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:30.157 11:31:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:30.157 11:31:59 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:30.157 11:31:59 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:30.157 11:31:59 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:30.157 11:31:59 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:30.157 11:31:59 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:30.157 11:31:59 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:30.157 11:31:59 -- target/referrals.sh@37 -- # nvmftestinit 00:08:30.157 11:31:59 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:08:30.157 11:31:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:30.157 11:31:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:30.157 11:31:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:30.157 11:31:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:30.157 11:31:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.157 11:31:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:30.157 11:31:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.157 11:31:59 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:30.157 11:31:59 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:30.157 11:31:59 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:30.157 11:31:59 -- common/autotest_common.sh@10 -- # set +x 00:08:38.256 11:32:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:38.256 11:32:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:38.256 11:32:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:38.256 11:32:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 
00:08:38.256 11:32:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:38.256 11:32:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:38.256 11:32:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:38.256 11:32:07 -- nvmf/common.sh@294 -- # net_devs=() 00:08:38.256 11:32:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:38.256 11:32:07 -- nvmf/common.sh@295 -- # e810=() 00:08:38.256 11:32:07 -- nvmf/common.sh@295 -- # local -ga e810 00:08:38.256 11:32:07 -- nvmf/common.sh@296 -- # x722=() 00:08:38.256 11:32:07 -- nvmf/common.sh@296 -- # local -ga x722 00:08:38.256 11:32:07 -- nvmf/common.sh@297 -- # mlx=() 00:08:38.256 11:32:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:38.256 11:32:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:38.256 11:32:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:38.256 11:32:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:38.256 11:32:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:38.256 11:32:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:38.256 11:32:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:38.256 11:32:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:38.256 11:32:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:38.256 11:32:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:38.256 11:32:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:38.256 11:32:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:38.256 11:32:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:38.256 11:32:07 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:08:38.256 11:32:07 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:08:38.256 11:32:07 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:08:38.256 11:32:07 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:08:38.256 11:32:07 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:08:38.256 11:32:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:38.256 11:32:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:38.256 11:32:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:38.256 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:38.256 11:32:07 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:38.256 11:32:07 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:38.256 11:32:07 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:38.256 11:32:07 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:38.256 11:32:07 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:38.256 11:32:07 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:38.256 11:32:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:38.256 11:32:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:38.256 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:38.256 11:32:07 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:38.256 11:32:07 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:38.256 11:32:07 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:38.256 11:32:07 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:38.256 11:32:07 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:38.256 11:32:07 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect 
-i 15' 00:08:38.256 11:32:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:38.256 11:32:07 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:08:38.256 11:32:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:38.256 11:32:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.256 11:32:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:38.256 11:32:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.256 11:32:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:38.256 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:38.256 11:32:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.256 11:32:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:38.256 11:32:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.256 11:32:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:38.256 11:32:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.256 11:32:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:38.256 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:38.256 11:32:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.257 11:32:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:38.257 11:32:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:38.257 11:32:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:38.257 11:32:07 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:08:38.257 11:32:07 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:08:38.257 11:32:07 -- nvmf/common.sh@408 -- # rdma_device_init 00:08:38.257 11:32:07 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:08:38.257 11:32:07 -- nvmf/common.sh@57 -- # uname 00:08:38.257 11:32:07 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:08:38.257 11:32:07 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:08:38.257 11:32:07 -- nvmf/common.sh@62 -- # modprobe ib_core 00:08:38.257 11:32:07 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:08:38.257 11:32:07 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:08:38.257 11:32:07 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:08:38.257 11:32:07 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:08:38.257 11:32:07 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:08:38.257 11:32:07 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:08:38.257 11:32:07 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:38.257 11:32:07 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:08:38.257 11:32:07 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:38.257 11:32:07 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:38.257 11:32:07 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:38.257 11:32:07 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:38.257 11:32:07 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:38.257 11:32:07 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:38.257 11:32:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.257 11:32:07 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:38.257 11:32:07 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:38.257 11:32:07 -- nvmf/common.sh@104 -- # continue 2 00:08:38.257 11:32:07 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:38.257 11:32:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.257 11:32:07 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\0 ]] 00:08:38.257 11:32:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.257 11:32:07 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:38.257 11:32:07 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:38.257 11:32:07 -- nvmf/common.sh@104 -- # continue 2 00:08:38.257 11:32:07 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:38.257 11:32:07 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:08:38.257 11:32:07 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:38.257 11:32:07 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:38.257 11:32:07 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:38.257 11:32:07 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:38.257 11:32:07 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:08:38.257 11:32:07 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:08:38.257 11:32:07 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:08:38.257 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:38.257 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:38.257 altname enp217s0f0np0 00:08:38.257 altname ens818f0np0 00:08:38.257 inet 192.168.100.8/24 scope global mlx_0_0 00:08:38.257 valid_lft forever preferred_lft forever 00:08:38.257 11:32:07 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:38.257 11:32:07 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:08:38.257 11:32:07 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:38.257 11:32:07 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:38.257 11:32:07 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:38.257 11:32:07 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:38.257 11:32:07 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:08:38.257 11:32:07 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:08:38.257 11:32:07 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:08:38.257 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:38.257 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:38.257 altname enp217s0f1np1 00:08:38.257 altname ens818f1np1 00:08:38.257 inet 192.168.100.9/24 scope global mlx_0_1 00:08:38.257 valid_lft forever preferred_lft forever 00:08:38.257 11:32:07 -- nvmf/common.sh@410 -- # return 0 00:08:38.257 11:32:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:38.257 11:32:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:38.257 11:32:07 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:08:38.257 11:32:07 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:08:38.257 11:32:07 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:08:38.257 11:32:07 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:38.257 11:32:07 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:38.257 11:32:07 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:38.257 11:32:07 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:38.257 11:32:07 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:38.257 11:32:07 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:38.257 11:32:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.257 11:32:07 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:38.257 11:32:07 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:38.257 11:32:07 -- nvmf/common.sh@104 -- # continue 2 00:08:38.257 11:32:07 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:38.257 11:32:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:08:38.257 11:32:07 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:38.257 11:32:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.257 11:32:07 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:38.257 11:32:07 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:38.257 11:32:07 -- nvmf/common.sh@104 -- # continue 2 00:08:38.257 11:32:07 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:38.257 11:32:07 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:08:38.257 11:32:07 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:38.257 11:32:07 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:38.257 11:32:07 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:38.257 11:32:07 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:38.257 11:32:07 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:38.257 11:32:07 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:08:38.257 11:32:07 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:38.257 11:32:07 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:38.257 11:32:07 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:38.257 11:32:07 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:38.257 11:32:07 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:08:38.257 192.168.100.9' 00:08:38.257 11:32:07 -- nvmf/common.sh@445 -- # head -n 1 00:08:38.257 11:32:07 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:08:38.257 192.168.100.9' 00:08:38.257 11:32:07 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:38.257 11:32:07 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:38.257 192.168.100.9' 00:08:38.257 11:32:07 -- nvmf/common.sh@446 -- # tail -n +2 00:08:38.257 11:32:07 -- nvmf/common.sh@446 -- # head -n 1 00:08:38.257 11:32:07 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:38.257 11:32:07 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:08:38.257 11:32:07 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:38.257 11:32:07 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:08:38.257 11:32:07 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:08:38.257 11:32:07 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:08:38.514 11:32:07 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:38.514 11:32:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:38.514 11:32:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:38.514 11:32:07 -- common/autotest_common.sh@10 -- # set +x 00:08:38.514 11:32:07 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:38.514 11:32:07 -- nvmf/common.sh@469 -- # nvmfpid=2217591 00:08:38.514 11:32:07 -- nvmf/common.sh@470 -- # waitforlisten 2217591 00:08:38.514 11:32:07 -- common/autotest_common.sh@819 -- # '[' -z 2217591 ']' 00:08:38.514 11:32:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.514 11:32:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:38.514 11:32:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:38.514 11:32:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:38.514 11:32:07 -- common/autotest_common.sh@10 -- # set +x 00:08:38.514 [2024-07-21 11:32:07.739327] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:38.514 [2024-07-21 11:32:07.739374] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.514 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.514 [2024-07-21 11:32:07.824236] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:38.514 [2024-07-21 11:32:07.862290] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:38.514 [2024-07-21 11:32:07.862399] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:38.514 [2024-07-21 11:32:07.862408] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:38.514 [2024-07-21 11:32:07.862416] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:38.514 [2024-07-21 11:32:07.862457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.514 [2024-07-21 11:32:07.862578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:38.514 [2024-07-21 11:32:07.862646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:38.514 [2024-07-21 11:32:07.862648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.443 11:32:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:39.443 11:32:08 -- common/autotest_common.sh@852 -- # return 0 00:08:39.443 11:32:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:39.443 11:32:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:39.443 11:32:08 -- common/autotest_common.sh@10 -- # set +x 00:08:39.443 11:32:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.443 11:32:08 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:39.443 11:32:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.443 11:32:08 -- common/autotest_common.sh@10 -- # set +x 00:08:39.443 [2024-07-21 11:32:08.634793] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e804b0/0x1e849a0) succeed. 00:08:39.443 [2024-07-21 11:32:08.645008] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e81aa0/0x1ec6030) succeed. 
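The nvmfappstart/waitforlisten/nvmf_create_transport sequence above condenses to roughly the following. The polling loop is a sketch of what waitforlisten does, not its literal body, and rpc.py stands in for the harness's rpc_cmd wrapper.

  # Bring up the target and create the RDMA transport (referrals-test instance).
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Wait until the app answers on its UNIX-domain RPC socket.
  while ! ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192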
00:08:39.443 11:32:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.443 11:32:08 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:08:39.443 11:32:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.443 11:32:08 -- common/autotest_common.sh@10 -- # set +x 00:08:39.443 [2024-07-21 11:32:08.768645] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:08:39.443 11:32:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.443 11:32:08 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:08:39.443 11:32:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.443 11:32:08 -- common/autotest_common.sh@10 -- # set +x 00:08:39.443 11:32:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.443 11:32:08 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:08:39.443 11:32:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.443 11:32:08 -- common/autotest_common.sh@10 -- # set +x 00:08:39.444 11:32:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.444 11:32:08 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:08:39.444 11:32:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.444 11:32:08 -- common/autotest_common.sh@10 -- # set +x 00:08:39.444 11:32:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.444 11:32:08 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:39.444 11:32:08 -- target/referrals.sh@48 -- # jq length 00:08:39.444 11:32:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.444 11:32:08 -- common/autotest_common.sh@10 -- # set +x 00:08:39.444 11:32:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.444 11:32:08 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:39.444 11:32:08 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:39.444 11:32:08 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:39.444 11:32:08 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:39.444 11:32:08 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:39.444 11:32:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.444 11:32:08 -- common/autotest_common.sh@10 -- # set +x 00:08:39.444 11:32:08 -- target/referrals.sh@21 -- # sort 00:08:39.444 11:32:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.700 11:32:08 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:39.700 11:32:08 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:39.700 11:32:08 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:39.700 11:32:08 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:39.700 11:32:08 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:39.700 11:32:08 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:39.700 11:32:08 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:39.700 11:32:08 -- target/referrals.sh@26 -- # sort 00:08:39.700 11:32:08 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 
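The get_referral_ips calls above compare two views of the same referral list: what the target reports over RPC and what a host actually sees on the wire. Condensed below, with both jq filters copied verbatim from the trace; rpc.py again stands in for rpc_cmd.

  # RPC view of the configured referrals.
  rpc_ips=$(./scripts/rpc.py nvmf_discovery_get_referrals \
            | jq -r '.[].address.traddr' | sort | xargs)
  # Wire view: query the discovery service on 192.168.100.8:8009 directly.
  wire_ips=$(nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
                 -t rdma -a 192.168.100.8 -s 8009 -o json \
             | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
             | sort | xargs)
  [[ $rpc_ips == "$wire_ips" ]] && echo "referrals consistent: $rpc_ips"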
00:08:39.700 11:32:08 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:39.700 11:32:08 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:08:39.700 11:32:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.700 11:32:09 -- common/autotest_common.sh@10 -- # set +x 00:08:39.700 11:32:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.700 11:32:09 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:08:39.700 11:32:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.700 11:32:09 -- common/autotest_common.sh@10 -- # set +x 00:08:39.700 11:32:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.700 11:32:09 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:08:39.700 11:32:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.700 11:32:09 -- common/autotest_common.sh@10 -- # set +x 00:08:39.700 11:32:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.700 11:32:09 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:39.700 11:32:09 -- target/referrals.sh@56 -- # jq length 00:08:39.700 11:32:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.700 11:32:09 -- common/autotest_common.sh@10 -- # set +x 00:08:39.700 11:32:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.700 11:32:09 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:39.700 11:32:09 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:39.700 11:32:09 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:39.700 11:32:09 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:39.700 11:32:09 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:39.700 11:32:09 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:39.700 11:32:09 -- target/referrals.sh@26 -- # sort 00:08:39.957 11:32:09 -- target/referrals.sh@26 -- # echo 00:08:39.957 11:32:09 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:39.957 11:32:09 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:08:39.957 11:32:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.957 11:32:09 -- common/autotest_common.sh@10 -- # set +x 00:08:39.957 11:32:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.957 11:32:09 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:39.957 11:32:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.957 11:32:09 -- common/autotest_common.sh@10 -- # set +x 00:08:39.957 11:32:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.957 11:32:09 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:39.957 11:32:09 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:39.957 11:32:09 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:39.957 11:32:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.957 11:32:09 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:39.957 11:32:09 -- common/autotest_common.sh@10 -- # set +x 00:08:39.957 11:32:09 -- 
target/referrals.sh@21 -- # sort 00:08:39.957 11:32:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.957 11:32:09 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:39.957 11:32:09 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:39.957 11:32:09 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:39.957 11:32:09 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:39.957 11:32:09 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:39.957 11:32:09 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:39.957 11:32:09 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:39.957 11:32:09 -- target/referrals.sh@26 -- # sort 00:08:39.957 11:32:09 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:39.957 11:32:09 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:39.957 11:32:09 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:39.957 11:32:09 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:39.957 11:32:09 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:39.957 11:32:09 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:39.957 11:32:09 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:40.214 11:32:09 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:40.214 11:32:09 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:40.214 11:32:09 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:40.214 11:32:09 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:40.214 11:32:09 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:40.214 11:32:09 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:40.214 11:32:09 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:40.214 11:32:09 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:40.214 11:32:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.214 11:32:09 -- common/autotest_common.sh@10 -- # set +x 00:08:40.214 11:32:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.214 11:32:09 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:40.214 11:32:09 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:40.214 11:32:09 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:40.214 11:32:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.214 11:32:09 -- common/autotest_common.sh@10 -- # set +x 00:08:40.214 11:32:09 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:40.214 11:32:09 -- target/referrals.sh@21 -- # 
sort 00:08:40.214 11:32:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.471 11:32:09 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:40.471 11:32:09 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:40.471 11:32:09 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:40.471 11:32:09 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:40.471 11:32:09 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:40.471 11:32:09 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:40.471 11:32:09 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:40.471 11:32:09 -- target/referrals.sh@26 -- # sort 00:08:40.471 11:32:09 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:40.471 11:32:09 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:40.471 11:32:09 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:40.471 11:32:09 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:40.471 11:32:09 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:40.471 11:32:09 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:40.471 11:32:09 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:40.471 11:32:09 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:40.471 11:32:09 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:40.471 11:32:09 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:40.471 11:32:09 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:40.471 11:32:09 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:40.471 11:32:09 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:40.727 11:32:09 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:40.727 11:32:09 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:40.727 11:32:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.727 11:32:09 -- common/autotest_common.sh@10 -- # set +x 00:08:40.727 11:32:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.727 11:32:09 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:40.727 11:32:09 -- target/referrals.sh@82 -- # jq length 00:08:40.727 11:32:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.727 11:32:09 -- common/autotest_common.sh@10 -- # set +x 00:08:40.727 11:32:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.727 11:32:10 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:40.727 11:32:10 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:40.727 11:32:10 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:40.727 11:32:10 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:40.727 11:32:10 -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:40.727 11:32:10 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:40.727 11:32:10 -- target/referrals.sh@26 -- # sort 00:08:40.727 11:32:10 -- target/referrals.sh@26 -- # echo 00:08:40.727 11:32:10 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:40.727 11:32:10 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:40.727 11:32:10 -- target/referrals.sh@86 -- # nvmftestfini 00:08:40.727 11:32:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:40.727 11:32:10 -- nvmf/common.sh@116 -- # sync 00:08:40.728 11:32:10 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:08:40.728 11:32:10 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:08:40.728 11:32:10 -- nvmf/common.sh@119 -- # set +e 00:08:40.728 11:32:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:40.728 11:32:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:08:40.728 rmmod nvme_rdma 00:08:40.728 rmmod nvme_fabrics 00:08:40.985 11:32:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:40.985 11:32:10 -- nvmf/common.sh@123 -- # set -e 00:08:40.985 11:32:10 -- nvmf/common.sh@124 -- # return 0 00:08:40.985 11:32:10 -- nvmf/common.sh@477 -- # '[' -n 2217591 ']' 00:08:40.985 11:32:10 -- nvmf/common.sh@478 -- # killprocess 2217591 00:08:40.985 11:32:10 -- common/autotest_common.sh@926 -- # '[' -z 2217591 ']' 00:08:40.985 11:32:10 -- common/autotest_common.sh@930 -- # kill -0 2217591 00:08:40.985 11:32:10 -- common/autotest_common.sh@931 -- # uname 00:08:40.985 11:32:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:40.985 11:32:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2217591 00:08:40.985 11:32:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:40.985 11:32:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:40.985 11:32:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2217591' 00:08:40.985 killing process with pid 2217591 00:08:40.985 11:32:10 -- common/autotest_common.sh@945 -- # kill 2217591 00:08:40.985 11:32:10 -- common/autotest_common.sh@950 -- # wait 2217591 00:08:41.243 11:32:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:41.243 11:32:10 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:08:41.243 00:08:41.243 real 0m11.110s 00:08:41.243 user 0m13.348s 00:08:41.243 sys 0m7.073s 00:08:41.243 11:32:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:41.243 11:32:10 -- common/autotest_common.sh@10 -- # set +x 00:08:41.243 ************************************ 00:08:41.243 END TEST nvmf_referrals 00:08:41.243 ************************************ 00:08:41.243 11:32:10 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:08:41.243 11:32:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:41.243 11:32:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:41.243 11:32:10 -- common/autotest_common.sh@10 -- # set +x 00:08:41.243 ************************************ 00:08:41.243 START TEST nvmf_connect_disconnect 00:08:41.243 ************************************ 00:08:41.243 11:32:10 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:08:41.243 * Looking for test storage... 00:08:41.243 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:41.243 11:32:10 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:41.243 11:32:10 -- nvmf/common.sh@7 -- # uname -s 00:08:41.243 11:32:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:41.243 11:32:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:41.243 11:32:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:41.243 11:32:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:41.243 11:32:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:41.243 11:32:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:41.243 11:32:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:41.243 11:32:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:41.243 11:32:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:41.243 11:32:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:41.243 11:32:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:41.243 11:32:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:41.243 11:32:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:41.243 11:32:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:41.243 11:32:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:41.243 11:32:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:41.243 11:32:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.243 11:32:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.243 11:32:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.243 11:32:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.243 11:32:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.243 11:32:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.243 11:32:10 -- paths/export.sh@5 -- # export PATH 00:08:41.243 11:32:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.243 11:32:10 -- nvmf/common.sh@46 -- # : 0 00:08:41.243 11:32:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:41.243 11:32:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:41.243 11:32:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:41.243 11:32:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:41.243 11:32:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:41.243 11:32:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:41.243 11:32:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:41.243 11:32:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:41.243 11:32:10 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:41.243 11:32:10 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:41.243 11:32:10 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:41.243 11:32:10 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:08:41.243 11:32:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:41.243 11:32:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:41.243 11:32:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:41.243 11:32:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:41.243 11:32:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.243 11:32:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:41.243 11:32:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.243 11:32:10 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:41.243 11:32:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:41.243 11:32:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:41.243 11:32:10 -- common/autotest_common.sh@10 -- # set +x 00:08:49.370 11:32:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:49.370 11:32:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:49.370 11:32:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:49.370 11:32:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:49.370 11:32:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:49.370 11:32:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:49.370 11:32:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:49.370 11:32:18 -- nvmf/common.sh@294 -- # net_devs=() 00:08:49.370 11:32:18 -- nvmf/common.sh@294 -- # local -ga net_devs 
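The device classification that follows (the e810/x722/mlx arrays, identical to the scan at the top of the referrals run) is driven by keyed lookups into pci_bus_cache, an associative array mapping "vendor:device" IDs to PCI addresses. The cache itself is populated earlier in common.sh; one plausible construction from lspci -n is sketched below and is an assumption, not the harness's actual code.

  declare -A pci_bus_cache
  # lspci -n lines look like: "d9:00.0 0200: 15b3:1015"
  while read -r slot _class vd _; do
      vendor="0x${vd%:*}" device="0x${vd#*:}"
      pci_bus_cache["$vendor:$device"]+="0000:$slot "   # PCI domain 0000 assumed
  done < <(lspci -n)
  mellanox=0x15b3
  echo "${pci_bus_cache["$mellanox:0x1015"]}"   # -> 0000:d9:00.0 0000:d9:00.1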
00:08:49.370 11:32:18 -- nvmf/common.sh@295 -- # e810=() 00:08:49.370 11:32:18 -- nvmf/common.sh@295 -- # local -ga e810 00:08:49.370 11:32:18 -- nvmf/common.sh@296 -- # x722=() 00:08:49.370 11:32:18 -- nvmf/common.sh@296 -- # local -ga x722 00:08:49.370 11:32:18 -- nvmf/common.sh@297 -- # mlx=() 00:08:49.370 11:32:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:49.370 11:32:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:49.370 11:32:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:49.370 11:32:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:49.370 11:32:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:49.370 11:32:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:49.370 11:32:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:49.370 11:32:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:49.370 11:32:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:49.370 11:32:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:49.370 11:32:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:49.370 11:32:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:49.370 11:32:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:49.370 11:32:18 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:08:49.370 11:32:18 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:08:49.370 11:32:18 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:08:49.370 11:32:18 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:08:49.370 11:32:18 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:08:49.370 11:32:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:49.370 11:32:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:49.370 11:32:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:49.370 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:49.370 11:32:18 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:49.370 11:32:18 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:49.370 11:32:18 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:49.370 11:32:18 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:49.370 11:32:18 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:49.370 11:32:18 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:49.370 11:32:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:49.370 11:32:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:49.370 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:49.370 11:32:18 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:49.370 11:32:18 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:49.370 11:32:18 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:49.370 11:32:18 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:49.370 11:32:18 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:49.370 11:32:18 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:49.370 11:32:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:49.370 11:32:18 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:08:49.370 11:32:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:49.370 11:32:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.370 11:32:18 
-- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:49.370 11:32:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.370 11:32:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:49.370 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:49.370 11:32:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.370 11:32:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:49.370 11:32:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.370 11:32:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:49.370 11:32:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.370 11:32:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:49.370 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:49.370 11:32:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.370 11:32:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:49.370 11:32:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:49.370 11:32:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:49.370 11:32:18 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:08:49.370 11:32:18 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:08:49.370 11:32:18 -- nvmf/common.sh@408 -- # rdma_device_init 00:08:49.370 11:32:18 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:08:49.370 11:32:18 -- nvmf/common.sh@57 -- # uname 00:08:49.370 11:32:18 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:08:49.370 11:32:18 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:08:49.370 11:32:18 -- nvmf/common.sh@62 -- # modprobe ib_core 00:08:49.370 11:32:18 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:08:49.370 11:32:18 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:08:49.371 11:32:18 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:08:49.371 11:32:18 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:08:49.371 11:32:18 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:08:49.371 11:32:18 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:08:49.371 11:32:18 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:49.371 11:32:18 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:08:49.371 11:32:18 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:49.371 11:32:18 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:49.371 11:32:18 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:49.371 11:32:18 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:49.371 11:32:18 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:49.371 11:32:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:49.371 11:32:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:49.371 11:32:18 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:49.371 11:32:18 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:49.371 11:32:18 -- nvmf/common.sh@104 -- # continue 2 00:08:49.371 11:32:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:49.371 11:32:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:49.371 11:32:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:49.371 11:32:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:49.371 11:32:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:49.371 11:32:18 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:49.371 11:32:18 -- nvmf/common.sh@104 -- # continue 2 00:08:49.371 11:32:18 -- 
nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:49.371 11:32:18 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:08:49.371 11:32:18 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:49.371 11:32:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:49.371 11:32:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:49.371 11:32:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:49.371 11:32:18 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:08:49.371 11:32:18 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:08:49.371 11:32:18 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:08:49.371 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:49.371 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:49.371 altname enp217s0f0np0 00:08:49.371 altname ens818f0np0 00:08:49.371 inet 192.168.100.8/24 scope global mlx_0_0 00:08:49.371 valid_lft forever preferred_lft forever 00:08:49.371 11:32:18 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:49.371 11:32:18 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:08:49.371 11:32:18 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:49.371 11:32:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:49.371 11:32:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:49.371 11:32:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:49.371 11:32:18 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:08:49.371 11:32:18 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:08:49.371 11:32:18 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:08:49.371 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:49.371 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:49.371 altname enp217s0f1np1 00:08:49.371 altname ens818f1np1 00:08:49.371 inet 192.168.100.9/24 scope global mlx_0_1 00:08:49.371 valid_lft forever preferred_lft forever 00:08:49.371 11:32:18 -- nvmf/common.sh@410 -- # return 0 00:08:49.371 11:32:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:49.371 11:32:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:49.371 11:32:18 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:08:49.371 11:32:18 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:08:49.371 11:32:18 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:08:49.371 11:32:18 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:49.371 11:32:18 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:49.371 11:32:18 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:49.371 11:32:18 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:49.371 11:32:18 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:49.371 11:32:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:49.371 11:32:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:49.371 11:32:18 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:49.371 11:32:18 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:49.371 11:32:18 -- nvmf/common.sh@104 -- # continue 2 00:08:49.371 11:32:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:49.371 11:32:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:49.371 11:32:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:49.371 11:32:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:49.371 11:32:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:49.371 11:32:18 -- nvmf/common.sh@103 -- # echo mlx_0_1 
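rdma_device_init's load_ib_rdma_modules step, traced just above (and earlier in the referrals run), is a fixed modprobe sequence; condensed:

  # Kernel modules needed before an mlx5 port can serve as an NVMe-oF RDMA endpoint;
  # modprobe resolves dependencies, so the order only loosely matters.
  for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$mod"
  done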
00:08:49.371 11:32:18 -- nvmf/common.sh@104 -- # continue 2 00:08:49.371 11:32:18 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:49.371 11:32:18 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:08:49.371 11:32:18 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:49.371 11:32:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:49.371 11:32:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:49.371 11:32:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:49.371 11:32:18 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:49.371 11:32:18 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:08:49.371 11:32:18 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:49.371 11:32:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:49.371 11:32:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:49.371 11:32:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:49.371 11:32:18 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:08:49.371 192.168.100.9' 00:08:49.371 11:32:18 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:08:49.371 192.168.100.9' 00:08:49.371 11:32:18 -- nvmf/common.sh@445 -- # head -n 1 00:08:49.371 11:32:18 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:49.371 11:32:18 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:49.371 192.168.100.9' 00:08:49.371 11:32:18 -- nvmf/common.sh@446 -- # head -n 1 00:08:49.371 11:32:18 -- nvmf/common.sh@446 -- # tail -n +2 00:08:49.371 11:32:18 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:49.371 11:32:18 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:08:49.371 11:32:18 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:49.371 11:32:18 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:08:49.371 11:32:18 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:08:49.371 11:32:18 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:08:49.628 11:32:18 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:49.628 11:32:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:49.628 11:32:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:49.628 11:32:18 -- common/autotest_common.sh@10 -- # set +x 00:08:49.629 11:32:18 -- nvmf/common.sh@469 -- # nvmfpid=2222148 00:08:49.629 11:32:18 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:49.629 11:32:18 -- nvmf/common.sh@470 -- # waitforlisten 2222148 00:08:49.629 11:32:18 -- common/autotest_common.sh@819 -- # '[' -z 2222148 ']' 00:08:49.629 11:32:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.629 11:32:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:49.629 11:32:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.629 11:32:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:49.629 11:32:18 -- common/autotest_common.sh@10 -- # set +x 00:08:49.629 [2024-07-21 11:32:18.847688] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:49.629 [2024-07-21 11:32:18.847741] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.629 EAL: No free 2048 kB hugepages reported on node 1 00:08:49.629 [2024-07-21 11:32:18.932335] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:49.629 [2024-07-21 11:32:18.969448] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:49.629 [2024-07-21 11:32:18.969579] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:49.629 [2024-07-21 11:32:18.969589] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:49.629 [2024-07-21 11:32:18.969599] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:49.629 [2024-07-21 11:32:18.969674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.629 [2024-07-21 11:32:18.969768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:49.629 [2024-07-21 11:32:18.969855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:49.629 [2024-07-21 11:32:18.969857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.560 11:32:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:50.560 11:32:19 -- common/autotest_common.sh@852 -- # return 0 00:08:50.560 11:32:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:50.560 11:32:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:50.560 11:32:19 -- common/autotest_common.sh@10 -- # set +x 00:08:50.560 11:32:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:50.560 11:32:19 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:08:50.560 11:32:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.560 11:32:19 -- common/autotest_common.sh@10 -- # set +x 00:08:50.560 [2024-07-21 11:32:19.701026] rdma.c:2778:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:08:50.560 [2024-07-21 11:32:19.723215] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x10e44b0/0x10e89a0) succeed. 00:08:50.560 [2024-07-21 11:32:19.733715] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x10e5aa0/0x112a030) succeed. 
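The subsystem setup and the 100 connect/disconnect iterations that follow reduce to roughly this sequence. The RPC arguments are verbatim from the trace; the loop body is a sketch of connect_disconnect.sh, not its literal source.

  bdev=$(./scripts/rpc.py bdev_malloc_create 64 512)   # prints the new bdev name, Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t rdma -a 192.168.100.8 -s 4420
  for ((i = 0; i < 100; i++)); do
      nvme connect -i 8 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 \
          --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
      # (the real script verifies the namespace shows up before tearing down)
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "disconnected 1 controller(s)"
  done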
00:08:50.560 11:32:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.560 11:32:19 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:50.560 11:32:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.560 11:32:19 -- common/autotest_common.sh@10 -- # set +x 00:08:50.560 11:32:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.560 11:32:19 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:50.560 11:32:19 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:50.560 11:32:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.560 11:32:19 -- common/autotest_common.sh@10 -- # set +x 00:08:50.560 11:32:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.560 11:32:19 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:50.560 11:32:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.560 11:32:19 -- common/autotest_common.sh@10 -- # set +x 00:08:50.560 11:32:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.560 11:32:19 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:50.560 11:32:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.560 11:32:19 -- common/autotest_common.sh@10 -- # set +x 00:08:50.560 [2024-07-21 11:32:19.875069] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:50.560 11:32:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.560 11:32:19 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:50.560 11:32:19 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:50.560 11:32:19 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:50.560 11:32:19 -- target/connect_disconnect.sh@34 -- # set +x 00:08:53.834 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.109 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.386 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.663 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.936 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.014 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.562 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.669 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.505 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.776 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.047 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.566 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.186 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.464 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.745 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.541 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.815 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.087 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.193 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.464 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.740 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.027 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.104 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.938 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.728 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.001 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.581 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.104 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.377 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.649 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.926 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.202 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.336 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.604 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.891 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.161 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.684 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.959 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.274 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.547 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.820 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.342 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.639 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.193 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.293 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.569 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
00:14:05.349 11:37:34 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:05.349 11:37:34 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:05.349 11:37:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:05.349 11:37:34 -- nvmf/common.sh@116 -- # sync 00:14:05.349 11:37:34 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:14:05.349 11:37:34 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:14:05.349 11:37:34 -- nvmf/common.sh@119 -- # set +e 00:14:05.349 11:37:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:05.349 11:37:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:14:05.349 rmmod nvme_rdma 00:14:05.349 rmmod nvme_fabrics 00:14:05.349 11:37:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:05.349 11:37:34 -- nvmf/common.sh@123 -- # set -e 00:14:05.349 11:37:34 -- nvmf/common.sh@124 -- # return 0 00:14:05.349 11:37:34 -- nvmf/common.sh@477 -- # '[' -n 2222148 ']' 00:14:05.349 11:37:34 -- nvmf/common.sh@478 -- # killprocess 2222148 00:14:05.349 11:37:34 -- common/autotest_common.sh@926 -- # '[' -z 2222148 ']' 00:14:05.349 11:37:34 -- common/autotest_common.sh@930 -- # kill -0 2222148 00:14:05.349 11:37:34 -- common/autotest_common.sh@931 -- # uname 00:14:05.349 11:37:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:05.349 11:37:34 --
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2222148 00:14:05.349 11:37:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:05.349 11:37:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:05.349 11:37:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2222148' 00:14:05.349 killing process with pid 2222148 00:14:05.349 11:37:34 -- common/autotest_common.sh@945 -- # kill 2222148 00:14:05.349 11:37:34 -- common/autotest_common.sh@950 -- # wait 2222148 00:14:05.607 11:37:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:05.607 11:37:34 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:14:05.607 00:14:05.607 real 5m24.415s 00:14:05.607 user 21m0.860s 00:14:05.607 sys 0m18.283s 00:14:05.607 11:37:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:05.607 11:37:34 -- common/autotest_common.sh@10 -- # set +x 00:14:05.607 ************************************ 00:14:05.607 END TEST nvmf_connect_disconnect 00:14:05.607 ************************************ 00:14:05.607 11:37:34 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:14:05.607 11:37:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:05.607 11:37:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:05.607 11:37:34 -- common/autotest_common.sh@10 -- # set +x 00:14:05.607 ************************************ 00:14:05.607 START TEST nvmf_multitarget 00:14:05.607 ************************************ 00:14:05.607 11:37:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:14:05.865 * Looking for test storage... 00:14:05.865 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:05.865 11:37:35 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:05.865 11:37:35 -- nvmf/common.sh@7 -- # uname -s 00:14:05.865 11:37:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:05.865 11:37:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:05.865 11:37:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:05.865 11:37:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:05.865 11:37:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:05.865 11:37:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:05.865 11:37:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:05.865 11:37:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:05.865 11:37:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:05.865 11:37:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:05.865 11:37:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:05.865 11:37:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:14:05.865 11:37:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:05.865 11:37:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:05.865 11:37:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:05.865 11:37:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:05.865 11:37:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:05.865 11:37:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:05.865 
11:37:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:05.866 11:37:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.866 11:37:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.866 11:37:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.866 11:37:35 -- paths/export.sh@5 -- # export PATH 00:14:05.866 11:37:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.866 11:37:35 -- nvmf/common.sh@46 -- # : 0 00:14:05.866 11:37:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:05.866 11:37:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:05.866 11:37:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:05.866 11:37:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:05.866 11:37:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:05.866 11:37:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:05.866 11:37:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:05.866 11:37:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:05.866 11:37:35 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:05.866 11:37:35 -- target/multitarget.sh@15 -- # nvmftestinit 00:14:05.866 11:37:35 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:14:05.866 11:37:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:05.866 11:37:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:05.866 11:37:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 
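The swollen PATH values traced just above grow because paths/export.sh prepends the same three tool directories every time it is sourced, once per test script. A guarded prepend, sketched here as a hypothetical fix rather than what the tree ships, would avoid the buildup:

  # prepend a directory to PATH only when it is not already present
  prepend_path() {
      case ":$PATH:" in
          *":$1:"*) ;;             # already in PATH, leave it alone
          *) PATH="$1:$PATH" ;;
      esac
  }
  prepend_path /opt/golangci/1.54.2/bin
  prepend_path /opt/protoc/21.7/bin
  prepend_path /opt/go/1.21.1/bin
  export PATH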
00:14:05.866 11:37:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:05.866 11:37:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.866 11:37:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:05.866 11:37:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.866 11:37:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:05.866 11:37:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:05.866 11:37:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:05.866 11:37:35 -- common/autotest_common.sh@10 -- # set +x 00:14:14.010 11:37:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:14.010 11:37:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:14.010 11:37:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:14.010 11:37:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:14.010 11:37:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:14.010 11:37:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:14.010 11:37:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:14.010 11:37:42 -- nvmf/common.sh@294 -- # net_devs=() 00:14:14.010 11:37:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:14.010 11:37:42 -- nvmf/common.sh@295 -- # e810=() 00:14:14.010 11:37:42 -- nvmf/common.sh@295 -- # local -ga e810 00:14:14.010 11:37:42 -- nvmf/common.sh@296 -- # x722=() 00:14:14.010 11:37:42 -- nvmf/common.sh@296 -- # local -ga x722 00:14:14.010 11:37:42 -- nvmf/common.sh@297 -- # mlx=() 00:14:14.010 11:37:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:14.010 11:37:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:14.011 11:37:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:14.011 11:37:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:14.011 11:37:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:14.011 11:37:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:14.011 11:37:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:14.011 11:37:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:14.011 11:37:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:14.011 11:37:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:14.011 11:37:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:14.011 11:37:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:14.011 11:37:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:14.011 11:37:42 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:14:14.011 11:37:42 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:14:14.011 11:37:42 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:14:14.011 11:37:42 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:14:14.011 11:37:42 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:14:14.011 11:37:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:14.011 11:37:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:14.011 11:37:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:14:14.011 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:14:14.011 11:37:42 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:14.011 11:37:42 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:14.011 11:37:42 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:14.011 
11:37:42 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:14.011 11:37:42 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:14.011 11:37:42 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:14.011 11:37:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:14.011 11:37:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:14:14.011 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:14:14.011 11:37:42 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:14.011 11:37:42 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:14.011 11:37:42 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:14.011 11:37:42 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:14.011 11:37:42 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:14.011 11:37:42 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:14.011 11:37:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:14.011 11:37:42 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:14:14.011 11:37:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:14.011 11:37:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.011 11:37:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:14.011 11:37:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.011 11:37:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:14:14.011 Found net devices under 0000:d9:00.0: mlx_0_0 00:14:14.011 11:37:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:14.011 11:37:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:14.011 11:37:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.011 11:37:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:14.011 11:37:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.011 11:37:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:14:14.011 Found net devices under 0000:d9:00.1: mlx_0_1 00:14:14.011 11:37:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:14.011 11:37:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:14.011 11:37:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:14.011 11:37:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:14.011 11:37:42 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:14:14.011 11:37:42 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:14:14.011 11:37:42 -- nvmf/common.sh@408 -- # rdma_device_init 00:14:14.011 11:37:42 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:14:14.011 11:37:42 -- nvmf/common.sh@57 -- # uname 00:14:14.011 11:37:42 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:14:14.011 11:37:42 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:14:14.011 11:37:42 -- nvmf/common.sh@62 -- # modprobe ib_core 00:14:14.011 11:37:42 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:14:14.011 11:37:42 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:14:14.011 11:37:42 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:14:14.011 11:37:42 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:14:14.011 11:37:42 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:14:14.011 11:37:42 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:14:14.011 11:37:42 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:14.011 11:37:42 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:14:14.011 11:37:42 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:14.011 11:37:42 -- nvmf/common.sh@93 -- # 
mapfile -t rxe_net_devs 00:14:14.011 11:37:42 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:14.011 11:37:42 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:14.011 11:37:42 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:14.011 11:37:42 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:14.011 11:37:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:14.011 11:37:42 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:14.011 11:37:42 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:14.011 11:37:42 -- nvmf/common.sh@104 -- # continue 2 00:14:14.011 11:37:42 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:14.011 11:37:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:14.011 11:37:42 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:14.011 11:37:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:14.011 11:37:42 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:14.011 11:37:42 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:14.011 11:37:42 -- nvmf/common.sh@104 -- # continue 2 00:14:14.011 11:37:42 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:14.011 11:37:42 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:14:14.011 11:37:42 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:14.011 11:37:42 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:14.011 11:37:42 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:14.011 11:37:42 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:14.011 11:37:42 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:14:14.011 11:37:42 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:14:14.011 11:37:42 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:14:14.011 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:14.011 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:14:14.011 altname enp217s0f0np0 00:14:14.011 altname ens818f0np0 00:14:14.011 inet 192.168.100.8/24 scope global mlx_0_0 00:14:14.011 valid_lft forever preferred_lft forever 00:14:14.011 11:37:42 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:14.011 11:37:42 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:14:14.011 11:37:42 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:14.011 11:37:42 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:14.011 11:37:42 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:14.011 11:37:42 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:14.011 11:37:42 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:14:14.011 11:37:42 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:14:14.011 11:37:42 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:14:14.011 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:14.011 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:14:14.011 altname enp217s0f1np1 00:14:14.011 altname ens818f1np1 00:14:14.011 inet 192.168.100.9/24 scope global mlx_0_1 00:14:14.011 valid_lft forever preferred_lft forever 00:14:14.011 11:37:42 -- nvmf/common.sh@410 -- # return 0 00:14:14.011 11:37:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:14.011 11:37:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:14.011 11:37:42 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:14:14.011 11:37:42 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:14:14.011 11:37:42 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:14:14.011 11:37:42 -- nvmf/common.sh@91 -- # 
local net_dev rxe_net_dev rxe_net_devs 00:14:14.011 11:37:42 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:14.011 11:37:42 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:14.011 11:37:42 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:14.011 11:37:42 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:14.011 11:37:42 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:14.011 11:37:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:14.011 11:37:42 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:14.011 11:37:42 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:14.011 11:37:42 -- nvmf/common.sh@104 -- # continue 2 00:14:14.011 11:37:42 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:14.011 11:37:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:14.011 11:37:42 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:14.011 11:37:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:14.011 11:37:42 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:14.011 11:37:42 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:14.011 11:37:42 -- nvmf/common.sh@104 -- # continue 2 00:14:14.011 11:37:42 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:14:14.011 11:37:42 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:14:14.011 11:37:42 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:14.011 11:37:42 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:14.011 11:37:42 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:14.011 11:37:42 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:14.011 11:37:42 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:14:14.011 11:37:42 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:14:14.011 11:37:42 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:14.011 11:37:42 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:14.011 11:37:42 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:14.011 11:37:42 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:14.011 11:37:42 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:14:14.011 192.168.100.9' 00:14:14.011 11:37:42 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:14:14.011 192.168.100.9' 00:14:14.011 11:37:42 -- nvmf/common.sh@445 -- # head -n 1 00:14:14.011 11:37:42 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:14.011 11:37:42 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:14:14.011 192.168.100.9' 00:14:14.011 11:37:42 -- nvmf/common.sh@446 -- # tail -n +2 00:14:14.011 11:37:42 -- nvmf/common.sh@446 -- # head -n 1 00:14:14.011 11:37:42 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:14.011 11:37:42 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:14:14.011 11:37:42 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:14.011 11:37:42 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:14:14.011 11:37:42 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:14:14.011 11:37:42 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:14:14.011 11:37:42 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:14.011 11:37:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:14.012 11:37:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:14.012 11:37:42 -- common/autotest_common.sh@10 -- # set +x 00:14:14.012 11:37:42 -- nvmf/common.sh@469 -- # nvmfpid=2283165 00:14:14.012 11:37:42 -- 
nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:14.012 11:37:42 -- nvmf/common.sh@470 -- # waitforlisten 2283165 00:14:14.012 11:37:42 -- common/autotest_common.sh@819 -- # '[' -z 2283165 ']' 00:14:14.012 11:37:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.012 11:37:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:14.012 11:37:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.012 11:37:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:14.012 11:37:42 -- common/autotest_common.sh@10 -- # set +x 00:14:14.012 [2024-07-21 11:37:42.501690] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:14.012 [2024-07-21 11:37:42.501745] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.012 EAL: No free 2048 kB hugepages reported on node 1 00:14:14.012 [2024-07-21 11:37:42.582280] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:14.012 [2024-07-21 11:37:42.618839] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:14.012 [2024-07-21 11:37:42.618946] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:14.012 [2024-07-21 11:37:42.618955] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:14.012 [2024-07-21 11:37:42.618965] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
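The nvmfappstart/waitforlisten pair traced above reduces to a start-then-poll pattern; a minimal sketch, with the polling loop as a simplified stand-in for the real helper in autotest_common.sh:

  # launch the target with the flags shown in the log and remember its pid
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # block until the app answers on its UNIX-domain RPC socket
  until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
      sleep 0.5
  done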
00:14:14.012 [2024-07-21 11:37:42.619060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.012 [2024-07-21 11:37:42.619159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:14.012 [2024-07-21 11:37:42.619246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:14.012 [2024-07-21 11:37:42.619247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.012 11:37:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:14.012 11:37:43 -- common/autotest_common.sh@852 -- # return 0 00:14:14.012 11:37:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:14.012 11:37:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:14.012 11:37:43 -- common/autotest_common.sh@10 -- # set +x 00:14:14.012 11:37:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:14.012 11:37:43 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:14.012 11:37:43 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:14.012 11:37:43 -- target/multitarget.sh@21 -- # jq length 00:14:14.269 11:37:43 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:14.269 11:37:43 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:14.269 "nvmf_tgt_1" 00:14:14.269 11:37:43 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:14.269 "nvmf_tgt_2" 00:14:14.269 11:37:43 -- target/multitarget.sh@28 -- # jq length 00:14:14.269 11:37:43 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:14.526 11:37:43 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:14.526 11:37:43 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:14.526 true 00:14:14.526 11:37:43 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:14.526 true 00:14:14.783 11:37:43 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:14.783 11:37:43 -- target/multitarget.sh@35 -- # jq length 00:14:14.783 11:37:44 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:14.783 11:37:44 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:14.783 11:37:44 -- target/multitarget.sh@41 -- # nvmftestfini 00:14:14.783 11:37:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:14.783 11:37:44 -- nvmf/common.sh@116 -- # sync 00:14:14.783 11:37:44 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:14:14.783 11:37:44 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:14:14.783 11:37:44 -- nvmf/common.sh@119 -- # set +e 00:14:14.783 11:37:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:14.783 11:37:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:14:14.783 rmmod nvme_rdma 00:14:14.783 rmmod nvme_fabrics 00:14:14.783 11:37:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:14.783 11:37:44 -- nvmf/common.sh@123 -- # set -e 00:14:14.783 11:37:44 -- nvmf/common.sh@124 -- # 
return 0 00:14:14.783 11:37:44 -- nvmf/common.sh@477 -- # '[' -n 2283165 ']' 00:14:14.783 11:37:44 -- nvmf/common.sh@478 -- # killprocess 2283165 00:14:14.784 11:37:44 -- common/autotest_common.sh@926 -- # '[' -z 2283165 ']' 00:14:14.784 11:37:44 -- common/autotest_common.sh@930 -- # kill -0 2283165 00:14:14.784 11:37:44 -- common/autotest_common.sh@931 -- # uname 00:14:14.784 11:37:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:14.784 11:37:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2283165 00:14:14.784 11:37:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:14.784 11:37:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:14.784 11:37:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2283165' 00:14:14.784 killing process with pid 2283165 00:14:14.784 11:37:44 -- common/autotest_common.sh@945 -- # kill 2283165 00:14:14.784 11:37:44 -- common/autotest_common.sh@950 -- # wait 2283165 00:14:15.042 11:37:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:15.042 11:37:44 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:14:15.042 00:14:15.042 real 0m9.347s 00:14:15.042 user 0m9.409s 00:14:15.042 sys 0m6.049s 00:14:15.042 11:37:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:15.042 11:37:44 -- common/autotest_common.sh@10 -- # set +x 00:14:15.042 ************************************ 00:14:15.042 END TEST nvmf_multitarget 00:14:15.042 ************************************ 00:14:15.042 11:37:44 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:14:15.042 11:37:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:15.042 11:37:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:15.042 11:37:44 -- common/autotest_common.sh@10 -- # set +x 00:14:15.042 ************************************ 00:14:15.042 START TEST nvmf_rpc 00:14:15.042 ************************************ 00:14:15.042 11:37:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:14:15.299 * Looking for test storage... 
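Condensed, the multitarget exchange traced above is a create/count/delete sequence against multitarget_rpc.py; a sketch of the same calls (the count checks mirror the '[' 3 '!=' 3 ']' and '[' 1 '!=' 1 ']' tests in the trace):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default target plus the two new ones
  $rpc nvmf_delete_target -n nvmf_tgt_1
  $rpc nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target remains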
00:14:15.299 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:15.299 11:37:44 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:15.299 11:37:44 -- nvmf/common.sh@7 -- # uname -s 00:14:15.299 11:37:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:15.299 11:37:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:15.299 11:37:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:15.299 11:37:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:15.299 11:37:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:15.299 11:37:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:15.299 11:37:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:15.299 11:37:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:15.299 11:37:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:15.299 11:37:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:15.299 11:37:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:15.299 11:37:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:14:15.299 11:37:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:15.299 11:37:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:15.299 11:37:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:15.299 11:37:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:15.299 11:37:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:15.299 11:37:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:15.299 11:37:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:15.299 11:37:44 -- paths/export.sh@2 -- # PATH=[value elided; the same /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin prefixes repeatedly prepended, as in the PATH dump traced in the nvmf_multitarget run above] 00:14:15.299 11:37:44 -- paths/export.sh@3 -- # PATH=[value elided; as above with /opt/go/1.21.1/bin prepended] 00:14:15.299 11:37:44 -- paths/export.sh@4 -- # PATH=[value elided; as above with /opt/protoc/21.7/bin prepended] 00:14:15.299 11:37:44 -- paths/export.sh@5 -- # export PATH 00:14:15.299 11:37:44 -- paths/export.sh@6 -- # echo [the resulting PATH value, elided] 00:14:15.299 11:37:44 -- nvmf/common.sh@46 -- # : 0 00:14:15.299 11:37:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:15.299 11:37:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:15.299 11:37:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:15.299 11:37:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:15.299 11:37:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:15.299 11:37:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:15.299 11:37:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:15.299 11:37:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:15.299 11:37:44 -- target/rpc.sh@11 -- # loops=5 00:14:15.299 11:37:44 -- target/rpc.sh@23 -- # nvmftestinit 00:14:15.299 11:37:44 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:14:15.299 11:37:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:15.299 11:37:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:15.299 11:37:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:15.299 11:37:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:15.299 11:37:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.299 11:37:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:15.299 11:37:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.299 11:37:44 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:15.299 11:37:44 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:15.299 11:37:44 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:15.299 11:37:44 -- common/autotest_common.sh@10 -- # set +x 00:14:23.399 11:37:52 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:23.399 11:37:52 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:23.399 11:37:52 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:23.399 11:37:52 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:23.399 11:37:52 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:23.399 11:37:52 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:23.399 11:37:52 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:23.399 11:37:52 -- nvmf/common.sh@294 -- # net_devs=() 00:14:23.399 11:37:52 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:23.399 11:37:52 -- nvmf/common.sh@295 -- # e810=() 00:14:23.399 11:37:52 -- nvmf/common.sh@295 -- # local -ga e810
11:37:52 -- nvmf/common.sh@296 -- # x722=() 00:14:23.399 11:37:52 -- nvmf/common.sh@296 -- # local -ga x722 00:14:23.399 11:37:52 -- nvmf/common.sh@297 -- # mlx=() 00:14:23.399 11:37:52 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:23.399 11:37:52 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:23.399 11:37:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:23.399 11:37:52 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:23.399 11:37:52 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:23.399 11:37:52 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:23.399 11:37:52 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:23.399 11:37:52 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:23.399 11:37:52 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:23.399 11:37:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:23.399 11:37:52 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:23.399 11:37:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:23.399 11:37:52 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:23.399 11:37:52 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:14:23.399 11:37:52 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:14:23.399 11:37:52 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:14:23.399 11:37:52 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:14:23.399 11:37:52 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:14:23.399 11:37:52 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:23.399 11:37:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:23.399 11:37:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:14:23.399 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:14:23.399 11:37:52 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:23.399 11:37:52 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:23.399 11:37:52 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:23.399 11:37:52 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:23.399 11:37:52 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:23.399 11:37:52 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:23.399 11:37:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:23.399 11:37:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:14:23.399 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:14:23.399 11:37:52 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:23.399 11:37:52 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:23.399 11:37:52 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:23.399 11:37:52 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:23.399 11:37:52 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:23.399 11:37:52 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:23.399 11:37:52 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:23.399 11:37:52 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:14:23.399 11:37:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:23.399 11:37:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:23.399 11:37:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:23.399 11:37:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
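The 'Found net devices under ...' lines that follow come from globbing each PCI function's sysfs node; a standalone sketch of the same lookup (the echo format copies the trace, the loop body is illustrative):

  pci=0000:d9:00.0
  # every net interface bound to this PCI function appears under its sysfs node
  for dev in /sys/bus/pci/devices/$pci/net/*; do
      [ -e "$dev" ] || continue    # glob left unexpanded means no netdev is bound
      echo "Found net devices under $pci: ${dev##*/}"
  done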
00:14:23.399 11:37:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:14:23.399 Found net devices under 0000:d9:00.0: mlx_0_0 00:14:23.399 11:37:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:23.399 11:37:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:23.399 11:37:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:23.399 11:37:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:23.399 11:37:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:23.399 11:37:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:14:23.399 Found net devices under 0000:d9:00.1: mlx_0_1 00:14:23.399 11:37:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:23.399 11:37:52 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:23.399 11:37:52 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:23.399 11:37:52 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:23.399 11:37:52 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:14:23.399 11:37:52 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:14:23.399 11:37:52 -- nvmf/common.sh@408 -- # rdma_device_init 00:14:23.399 11:37:52 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:14:23.399 11:37:52 -- nvmf/common.sh@57 -- # uname 00:14:23.399 11:37:52 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:14:23.399 11:37:52 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:14:23.399 11:37:52 -- nvmf/common.sh@62 -- # modprobe ib_core 00:14:23.399 11:37:52 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:14:23.399 11:37:52 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:14:23.399 11:37:52 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:14:23.399 11:37:52 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:14:23.399 11:37:52 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:14:23.399 11:37:52 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:14:23.399 11:37:52 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:23.399 11:37:52 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:14:23.399 11:37:52 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:23.399 11:37:52 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:23.399 11:37:52 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:23.399 11:37:52 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:23.399 11:37:52 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:23.399 11:37:52 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:23.399 11:37:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:23.399 11:37:52 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:23.399 11:37:52 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:23.399 11:37:52 -- nvmf/common.sh@104 -- # continue 2 00:14:23.399 11:37:52 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:23.399 11:37:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:23.399 11:37:52 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:23.399 11:37:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:23.399 11:37:52 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:23.399 11:37:52 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:23.399 11:37:52 -- nvmf/common.sh@104 -- # continue 2 00:14:23.399 11:37:52 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:23.399 11:37:52 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 
00:14:23.399 11:37:52 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:23.399 11:37:52 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:23.399 11:37:52 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:23.399 11:37:52 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:23.399 11:37:52 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:14:23.399 11:37:52 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:14:23.399 11:37:52 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:14:23.399 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:23.399 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:14:23.399 altname enp217s0f0np0 00:14:23.399 altname ens818f0np0 00:14:23.399 inet 192.168.100.8/24 scope global mlx_0_0 00:14:23.399 valid_lft forever preferred_lft forever 00:14:23.399 11:37:52 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:23.399 11:37:52 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:14:23.399 11:37:52 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:23.657 11:37:52 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:23.657 11:37:52 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:23.657 11:37:52 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:23.657 11:37:52 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:14:23.657 11:37:52 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:14:23.657 11:37:52 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:14:23.657 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:23.657 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:14:23.657 altname enp217s0f1np1 00:14:23.657 altname ens818f1np1 00:14:23.657 inet 192.168.100.9/24 scope global mlx_0_1 00:14:23.657 valid_lft forever preferred_lft forever 00:14:23.657 11:37:52 -- nvmf/common.sh@410 -- # return 0 00:14:23.657 11:37:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:23.657 11:37:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:23.657 11:37:52 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:14:23.657 11:37:52 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:14:23.657 11:37:52 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:14:23.657 11:37:52 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:23.657 11:37:52 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:23.657 11:37:52 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:23.657 11:37:52 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:23.657 11:37:52 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:23.657 11:37:52 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:23.657 11:37:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:23.657 11:37:52 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:23.657 11:37:52 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:23.657 11:37:52 -- nvmf/common.sh@104 -- # continue 2 00:14:23.657 11:37:52 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:23.657 11:37:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:23.657 11:37:52 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:23.657 11:37:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:23.657 11:37:52 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:23.657 11:37:52 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:23.657 11:37:52 -- nvmf/common.sh@104 -- # continue 2 00:14:23.657 11:37:52 -- nvmf/common.sh@85 -- # for nic_name in 
$(get_rdma_if_list) 00:14:23.657 11:37:52 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:14:23.657 11:37:52 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:23.657 11:37:52 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:23.657 11:37:52 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:23.657 11:37:52 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:23.657 11:37:52 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:14:23.657 11:37:52 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:14:23.657 11:37:52 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:23.657 11:37:52 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:23.657 11:37:52 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:23.657 11:37:52 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:23.657 11:37:52 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:14:23.657 192.168.100.9' 00:14:23.657 11:37:52 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:14:23.657 192.168.100.9' 00:14:23.657 11:37:52 -- nvmf/common.sh@445 -- # head -n 1 00:14:23.657 11:37:52 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:23.657 11:37:52 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:14:23.657 192.168.100.9' 00:14:23.657 11:37:52 -- nvmf/common.sh@446 -- # tail -n +2 00:14:23.657 11:37:52 -- nvmf/common.sh@446 -- # head -n 1 00:14:23.657 11:37:52 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:23.657 11:37:52 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:14:23.657 11:37:52 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:23.657 11:37:52 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:14:23.657 11:37:52 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:14:23.657 11:37:52 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:14:23.657 11:37:52 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:23.657 11:37:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:23.657 11:37:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:23.657 11:37:52 -- common/autotest_common.sh@10 -- # set +x 00:14:23.657 11:37:52 -- nvmf/common.sh@469 -- # nvmfpid=2287547 00:14:23.657 11:37:52 -- nvmf/common.sh@470 -- # waitforlisten 2287547 00:14:23.657 11:37:52 -- common/autotest_common.sh@819 -- # '[' -z 2287547 ']' 00:14:23.657 11:37:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.657 11:37:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:23.657 11:37:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.657 11:37:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:23.657 11:37:52 -- common/autotest_common.sh@10 -- # set +x 00:14:23.657 11:37:52 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:23.657 [2024-07-21 11:37:52.985643] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
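The get_ip_address calls traced a few lines up reduce to a single pipeline; a standalone sketch:

  # first IPv4 address of an interface, prefix length stripped
  get_ip_address() {
      ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # prints 192.168.100.8 on this rig
  get_ip_address mlx_0_1   # prints 192.168.100.9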
00:14:23.657 [2024-07-21 11:37:52.985697] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.657 EAL: No free 2048 kB hugepages reported on node 1 00:14:23.657 [2024-07-21 11:37:53.072470] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:23.915 [2024-07-21 11:37:53.112009] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:23.915 [2024-07-21 11:37:53.112113] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:23.915 [2024-07-21 11:37:53.112122] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:23.915 [2024-07-21 11:37:53.112131] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:23.915 [2024-07-21 11:37:53.112169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.915 [2024-07-21 11:37:53.112284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:23.915 [2024-07-21 11:37:53.112303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:23.915 [2024-07-21 11:37:53.112304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.478 11:37:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:24.479 11:37:53 -- common/autotest_common.sh@852 -- # return 0 00:14:24.479 11:37:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:24.479 11:37:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:24.479 11:37:53 -- common/autotest_common.sh@10 -- # set +x 00:14:24.479 11:37:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:24.479 11:37:53 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:24.479 11:37:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.479 11:37:53 -- common/autotest_common.sh@10 -- # set +x 00:14:24.479 11:37:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:24.479 11:37:53 -- target/rpc.sh@26 -- # stats='{ 00:14:24.479 "tick_rate": 2500000000, 00:14:24.479 "poll_groups": [ 00:14:24.479 { 00:14:24.479 "name": "nvmf_tgt_poll_group_0", 00:14:24.479 "admin_qpairs": 0, 00:14:24.479 "io_qpairs": 0, 00:14:24.479 "current_admin_qpairs": 0, 00:14:24.479 "current_io_qpairs": 0, 00:14:24.479 "pending_bdev_io": 0, 00:14:24.479 "completed_nvme_io": 0, 00:14:24.479 "transports": [] 00:14:24.479 }, 00:14:24.479 { 00:14:24.479 "name": "nvmf_tgt_poll_group_1", 00:14:24.479 "admin_qpairs": 0, 00:14:24.479 "io_qpairs": 0, 00:14:24.479 "current_admin_qpairs": 0, 00:14:24.479 "current_io_qpairs": 0, 00:14:24.479 "pending_bdev_io": 0, 00:14:24.479 "completed_nvme_io": 0, 00:14:24.479 "transports": [] 00:14:24.479 }, 00:14:24.479 { 00:14:24.479 "name": "nvmf_tgt_poll_group_2", 00:14:24.479 "admin_qpairs": 0, 00:14:24.479 "io_qpairs": 0, 00:14:24.479 "current_admin_qpairs": 0, 00:14:24.479 "current_io_qpairs": 0, 00:14:24.479 "pending_bdev_io": 0, 00:14:24.479 "completed_nvme_io": 0, 00:14:24.479 "transports": [] 00:14:24.479 }, 00:14:24.479 { 00:14:24.479 "name": "nvmf_tgt_poll_group_3", 00:14:24.479 "admin_qpairs": 0, 00:14:24.479 "io_qpairs": 0, 00:14:24.479 "current_admin_qpairs": 0, 00:14:24.479 "current_io_qpairs": 0, 00:14:24.479 "pending_bdev_io": 0, 00:14:24.479 "completed_nvme_io": 0, 00:14:24.479 "transports": [] 
00:14:24.479 } 00:14:24.479 ] 00:14:24.479 }' 00:14:24.479 11:37:53 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:24.479 11:37:53 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:24.479 11:37:53 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:24.479 11:37:53 -- target/rpc.sh@15 -- # wc -l 00:14:24.736 11:37:53 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:24.736 11:37:53 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:24.736 11:37:53 -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:24.737 11:37:53 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:24.737 11:37:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.737 11:37:53 -- common/autotest_common.sh@10 -- # set +x 00:14:24.737 [2024-07-21 11:37:53.964070] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e4b510/0x1e4fa00) succeed. 00:14:24.737 [2024-07-21 11:37:53.974134] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e4cb00/0x1e91090) succeed. 00:14:24.737 11:37:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:24.737 11:37:54 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:24.737 11:37:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.737 11:37:54 -- common/autotest_common.sh@10 -- # set +x 00:14:24.737 11:37:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:24.737 11:37:54 -- target/rpc.sh@33 -- # stats='{ 00:14:24.737 "tick_rate": 2500000000, 00:14:24.737 "poll_groups": [ 00:14:24.737 { 00:14:24.737 "name": "nvmf_tgt_poll_group_0", 00:14:24.737 "admin_qpairs": 0, 00:14:24.737 "io_qpairs": 0, 00:14:24.737 "current_admin_qpairs": 0, 00:14:24.737 "current_io_qpairs": 0, 00:14:24.737 "pending_bdev_io": 0, 00:14:24.737 "completed_nvme_io": 0, 00:14:24.737 "transports": [ 00:14:24.737 { 00:14:24.737 "trtype": "RDMA", 00:14:24.737 "pending_data_buffer": 0, 00:14:24.737 "devices": [ 00:14:24.737 { 00:14:24.737 "name": "mlx5_0", 00:14:24.737 "polls": 15627, 00:14:24.737 "idle_polls": 15627, 00:14:24.737 "completions": 0, 00:14:24.737 "requests": 0, 00:14:24.737 "request_latency": 0, 00:14:24.737 "pending_free_request": 0, 00:14:24.737 "pending_rdma_read": 0, 00:14:24.737 "pending_rdma_write": 0, 00:14:24.737 "pending_rdma_send": 0, 00:14:24.737 "total_send_wrs": 0, 00:14:24.737 "send_doorbell_updates": 0, 00:14:24.737 "total_recv_wrs": 4096, 00:14:24.737 "recv_doorbell_updates": 1 00:14:24.737 }, 00:14:24.737 { 00:14:24.737 "name": "mlx5_1", 00:14:24.737 "polls": 15627, 00:14:24.737 "idle_polls": 15627, 00:14:24.737 "completions": 0, 00:14:24.737 "requests": 0, 00:14:24.737 "request_latency": 0, 00:14:24.737 "pending_free_request": 0, 00:14:24.737 "pending_rdma_read": 0, 00:14:24.737 "pending_rdma_write": 0, 00:14:24.737 "pending_rdma_send": 0, 00:14:24.737 "total_send_wrs": 0, 00:14:24.737 "send_doorbell_updates": 0, 00:14:24.737 "total_recv_wrs": 4096, 00:14:24.737 "recv_doorbell_updates": 1 00:14:24.737 } 00:14:24.737 ] 00:14:24.737 } 00:14:24.737 ] 00:14:24.737 }, 00:14:24.737 { 00:14:24.737 "name": "nvmf_tgt_poll_group_1", 00:14:24.737 "admin_qpairs": 0, 00:14:24.737 "io_qpairs": 0, 00:14:24.737 "current_admin_qpairs": 0, 00:14:24.737 "current_io_qpairs": 0, 00:14:24.737 "pending_bdev_io": 0, 00:14:24.737 "completed_nvme_io": 0, 00:14:24.737 "transports": [ 00:14:24.737 { 00:14:24.737 "trtype": "RDMA", 00:14:24.737 "pending_data_buffer": 0, 00:14:24.737 "devices": [ 00:14:24.737 { 00:14:24.737 "name": "mlx5_0", 00:14:24.737 "polls": 9942, 
00:14:24.737 "idle_polls": 9942, 00:14:24.737 "completions": 0, 00:14:24.737 "requests": 0, 00:14:24.737 "request_latency": 0, 00:14:24.737 "pending_free_request": 0, 00:14:24.737 "pending_rdma_read": 0, 00:14:24.737 "pending_rdma_write": 0, 00:14:24.737 "pending_rdma_send": 0, 00:14:24.737 "total_send_wrs": 0, 00:14:24.737 "send_doorbell_updates": 0, 00:14:24.737 "total_recv_wrs": 4096, 00:14:24.737 "recv_doorbell_updates": 1 00:14:24.737 }, 00:14:24.737 { 00:14:24.737 "name": "mlx5_1", 00:14:24.737 "polls": 9942, 00:14:24.737 "idle_polls": 9942, 00:14:24.737 "completions": 0, 00:14:24.737 "requests": 0, 00:14:24.737 "request_latency": 0, 00:14:24.737 "pending_free_request": 0, 00:14:24.737 "pending_rdma_read": 0, 00:14:24.737 "pending_rdma_write": 0, 00:14:24.737 "pending_rdma_send": 0, 00:14:24.737 "total_send_wrs": 0, 00:14:24.737 "send_doorbell_updates": 0, 00:14:24.737 "total_recv_wrs": 4096, 00:14:24.737 "recv_doorbell_updates": 1 00:14:24.737 } 00:14:24.737 ] 00:14:24.737 } 00:14:24.737 ] 00:14:24.737 }, 00:14:24.737 { 00:14:24.737 "name": "nvmf_tgt_poll_group_2", 00:14:24.737 "admin_qpairs": 0, 00:14:24.737 "io_qpairs": 0, 00:14:24.737 "current_admin_qpairs": 0, 00:14:24.737 "current_io_qpairs": 0, 00:14:24.737 "pending_bdev_io": 0, 00:14:24.737 "completed_nvme_io": 0, 00:14:24.737 "transports": [ 00:14:24.737 { 00:14:24.737 "trtype": "RDMA", 00:14:24.737 "pending_data_buffer": 0, 00:14:24.737 "devices": [ 00:14:24.737 { 00:14:24.737 "name": "mlx5_0", 00:14:24.737 "polls": 5700, 00:14:24.737 "idle_polls": 5700, 00:14:24.737 "completions": 0, 00:14:24.737 "requests": 0, 00:14:24.737 "request_latency": 0, 00:14:24.737 "pending_free_request": 0, 00:14:24.737 "pending_rdma_read": 0, 00:14:24.737 "pending_rdma_write": 0, 00:14:24.737 "pending_rdma_send": 0, 00:14:24.737 "total_send_wrs": 0, 00:14:24.737 "send_doorbell_updates": 0, 00:14:24.737 "total_recv_wrs": 4096, 00:14:24.737 "recv_doorbell_updates": 1 00:14:24.737 }, 00:14:24.737 { 00:14:24.737 "name": "mlx5_1", 00:14:24.737 "polls": 5700, 00:14:24.737 "idle_polls": 5700, 00:14:24.737 "completions": 0, 00:14:24.737 "requests": 0, 00:14:24.737 "request_latency": 0, 00:14:24.737 "pending_free_request": 0, 00:14:24.737 "pending_rdma_read": 0, 00:14:24.737 "pending_rdma_write": 0, 00:14:24.737 "pending_rdma_send": 0, 00:14:24.737 "total_send_wrs": 0, 00:14:24.737 "send_doorbell_updates": 0, 00:14:24.737 "total_recv_wrs": 4096, 00:14:24.737 "recv_doorbell_updates": 1 00:14:24.737 } 00:14:24.737 ] 00:14:24.737 } 00:14:24.737 ] 00:14:24.737 }, 00:14:24.737 { 00:14:24.737 "name": "nvmf_tgt_poll_group_3", 00:14:24.737 "admin_qpairs": 0, 00:14:24.737 "io_qpairs": 0, 00:14:24.737 "current_admin_qpairs": 0, 00:14:24.737 "current_io_qpairs": 0, 00:14:24.737 "pending_bdev_io": 0, 00:14:24.737 "completed_nvme_io": 0, 00:14:24.737 "transports": [ 00:14:24.737 { 00:14:24.737 "trtype": "RDMA", 00:14:24.737 "pending_data_buffer": 0, 00:14:24.737 "devices": [ 00:14:24.737 { 00:14:24.737 "name": "mlx5_0", 00:14:24.737 "polls": 884, 00:14:24.737 "idle_polls": 884, 00:14:24.737 "completions": 0, 00:14:24.737 "requests": 0, 00:14:24.737 "request_latency": 0, 00:14:24.737 "pending_free_request": 0, 00:14:24.737 "pending_rdma_read": 0, 00:14:24.737 "pending_rdma_write": 0, 00:14:24.737 "pending_rdma_send": 0, 00:14:24.737 "total_send_wrs": 0, 00:14:24.737 "send_doorbell_updates": 0, 00:14:24.737 "total_recv_wrs": 4096, 00:14:24.737 "recv_doorbell_updates": 1 00:14:24.737 }, 00:14:24.737 { 00:14:24.737 "name": "mlx5_1", 00:14:24.737 "polls": 884, 
00:14:24.737 "idle_polls": 884, 00:14:24.737 "completions": 0, 00:14:24.737 "requests": 0, 00:14:24.737 "request_latency": 0, 00:14:24.737 "pending_free_request": 0, 00:14:24.737 "pending_rdma_read": 0, 00:14:24.737 "pending_rdma_write": 0, 00:14:24.737 "pending_rdma_send": 0, 00:14:24.737 "total_send_wrs": 0, 00:14:24.737 "send_doorbell_updates": 0, 00:14:24.737 "total_recv_wrs": 4096, 00:14:24.737 "recv_doorbell_updates": 1 00:14:24.737 } 00:14:24.737 ] 00:14:24.737 } 00:14:24.737 ] 00:14:24.737 } 00:14:24.737 ] 00:14:24.737 }' 00:14:24.737 11:37:54 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:24.737 11:37:54 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:24.737 11:37:54 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:24.737 11:37:54 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:24.994 11:37:54 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:24.994 11:37:54 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:24.994 11:37:54 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:24.994 11:37:54 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:24.994 11:37:54 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:24.994 11:37:54 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:24.994 11:37:54 -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:14:24.994 11:37:54 -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:14:24.994 11:37:54 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:14:24.994 11:37:54 -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:14:24.994 11:37:54 -- target/rpc.sh@15 -- # wc -l 00:14:24.994 11:37:54 -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:14:24.994 11:37:54 -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:14:24.994 11:37:54 -- target/rpc.sh@41 -- # transport_type=RDMA 00:14:24.994 11:37:54 -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:14:24.994 11:37:54 -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:14:24.994 11:37:54 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:14:24.994 11:37:54 -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:14:24.994 11:37:54 -- target/rpc.sh@15 -- # wc -l 00:14:24.994 11:37:54 -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:14:24.994 11:37:54 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:24.994 11:37:54 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:24.994 11:37:54 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:24.994 11:37:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.994 11:37:54 -- common/autotest_common.sh@10 -- # set +x 00:14:24.994 Malloc1 00:14:24.994 11:37:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:24.994 11:37:54 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:24.994 11:37:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.994 11:37:54 -- common/autotest_common.sh@10 -- # set +x 00:14:24.994 11:37:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:24.994 11:37:54 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:24.994 11:37:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.994 11:37:54 -- common/autotest_common.sh@10 -- # set +x 00:14:24.994 11:37:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:24.994 
11:37:54 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:24.994 11:37:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.994 11:37:54 -- common/autotest_common.sh@10 -- # set +x 00:14:25.251 11:37:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:25.251 11:37:54 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:25.251 11:37:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:25.251 11:37:54 -- common/autotest_common.sh@10 -- # set +x 00:14:25.251 [2024-07-21 11:37:54.424831] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:25.251 11:37:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:25.251 11:37:54 -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:14:25.251 11:37:54 -- common/autotest_common.sh@640 -- # local es=0 00:14:25.251 11:37:54 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:14:25.251 11:37:54 -- common/autotest_common.sh@628 -- # local arg=nvme 00:14:25.251 11:37:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:25.251 11:37:54 -- common/autotest_common.sh@632 -- # type -t nvme 00:14:25.251 11:37:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:25.251 11:37:54 -- common/autotest_common.sh@634 -- # type -P nvme 00:14:25.251 11:37:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:25.251 11:37:54 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:14:25.251 11:37:54 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:14:25.251 11:37:54 -- common/autotest_common.sh@643 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:14:25.252 [2024-07-21 11:37:54.466675] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:14:25.252 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:25.252 could not add new controller: failed to write to nvme-fabrics device 00:14:25.252 11:37:54 -- common/autotest_common.sh@643 -- # es=1 00:14:25.252 11:37:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:25.252 11:37:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:25.252 11:37:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:25.252 11:37:54 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:25.252 11:37:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:25.252 11:37:54 -- common/autotest_common.sh@10 -- # set +x 00:14:25.252 
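The exchange above is the access-control check: with allow_any_host disabled the fabric connect must be rejected ("does not allow host"), and after nvmf_subsystem_add_host the identical connect must succeed. Condensed into plain commands (host NQN/UUID exactly as in this run; rpc.py shown generically, and a plain ! standing in for the NOT() wrapper the script actually uses):

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e

    # Host is not on the allow list yet: this connect is expected to fail.
    rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
    ! nvme connect -i 15 --hostnqn=$HOSTNQN --hostid=${HOSTNQN##*:} \
          -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420

    # Allow exactly this host, then the same connect succeeds.
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 $HOSTNQN
    nvme connect -i 15 --hostnqn=$HOSTNQN --hostid=${HOSTNQN##*:} \
          -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420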
11:37:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:25.252 11:37:54 -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:26.182 11:37:55 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:26.182 11:37:55 -- common/autotest_common.sh@1177 -- # local i=0 00:14:26.182 11:37:55 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:14:26.182 11:37:55 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:14:26.182 11:37:55 -- common/autotest_common.sh@1184 -- # sleep 2 00:14:28.083 11:37:57 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:14:28.083 11:37:57 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:14:28.083 11:37:57 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:14:28.340 11:37:57 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:14:28.340 11:37:57 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:14:28.340 11:37:57 -- common/autotest_common.sh@1187 -- # return 0 00:14:28.340 11:37:57 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:29.270 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.270 11:37:58 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:29.270 11:37:58 -- common/autotest_common.sh@1198 -- # local i=0 00:14:29.270 11:37:58 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:29.270 11:37:58 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:29.270 11:37:58 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:29.270 11:37:58 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:29.270 11:37:58 -- common/autotest_common.sh@1210 -- # return 0 00:14:29.270 11:37:58 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:29.270 11:37:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.270 11:37:58 -- common/autotest_common.sh@10 -- # set +x 00:14:29.270 11:37:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.271 11:37:58 -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:29.271 11:37:58 -- common/autotest_common.sh@640 -- # local es=0 00:14:29.271 11:37:58 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:29.271 11:37:58 -- common/autotest_common.sh@628 -- # local arg=nvme 00:14:29.271 11:37:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:29.271 11:37:58 -- common/autotest_common.sh@632 -- # type -t nvme 00:14:29.271 11:37:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:29.271 11:37:58 -- common/autotest_common.sh@634 -- # type -P nvme 00:14:29.271 11:37:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:29.271 11:37:58 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:14:29.271 
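waitforserial and waitforserial_disconnect, which bracket every connect/disconnect pair in this trace, poll lsblk until a namespace with the given serial appears or disappears. A minimal sketch inferred from the xtrace (the real helpers in autotest_common.sh carry extra bookkeeping around the same loop):

    # Poll until a block device reporting this NVMe serial shows up.
    waitforserial() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
            sleep 2
        done
        return 1
    }

    # Return once no block device carries the serial any more.
    waitforserial_disconnect() {
        local serial=$1
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            sleep 1
        done
    }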
11:37:58 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:14:29.271 11:37:58 -- common/autotest_common.sh@643 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:29.271 [2024-07-21 11:37:58.548781] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:14:29.271 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:29.271 could not add new controller: failed to write to nvme-fabrics device 00:14:29.271 11:37:58 -- common/autotest_common.sh@643 -- # es=1 00:14:29.271 11:37:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:29.271 11:37:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:29.271 11:37:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:29.271 11:37:58 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:29.271 11:37:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.271 11:37:58 -- common/autotest_common.sh@10 -- # set +x 00:14:29.271 11:37:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.271 11:37:58 -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:30.202 11:37:59 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:30.202 11:37:59 -- common/autotest_common.sh@1177 -- # local i=0 00:14:30.202 11:37:59 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:14:30.202 11:37:59 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:14:30.202 11:37:59 -- common/autotest_common.sh@1184 -- # sleep 2 00:14:32.750 11:38:01 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:14:32.750 11:38:01 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:14:32.750 11:38:01 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:14:32.750 11:38:01 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:14:32.750 11:38:01 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:14:32.750 11:38:01 -- common/autotest_common.sh@1187 -- # return 0 00:14:32.750 11:38:01 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:33.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.329 11:38:02 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:33.329 11:38:02 -- common/autotest_common.sh@1198 -- # local i=0 00:14:33.329 11:38:02 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:33.329 11:38:02 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:33.329 11:38:02 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:33.329 11:38:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:33.329 11:38:02 -- common/autotest_common.sh@1210 -- # return 0 00:14:33.329 11:38:02 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:33.329 11:38:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.329 11:38:02 -- common/autotest_common.sh@10 -- # set +x 00:14:33.329 11:38:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:14:33.329 11:38:02 -- target/rpc.sh@81 -- # seq 1 5 00:14:33.329 11:38:02 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:33.329 11:38:02 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:33.329 11:38:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.329 11:38:02 -- common/autotest_common.sh@10 -- # set +x 00:14:33.329 11:38:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.329 11:38:02 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:33.329 11:38:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.329 11:38:02 -- common/autotest_common.sh@10 -- # set +x 00:14:33.329 [2024-07-21 11:38:02.619163] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:33.329 11:38:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.329 11:38:02 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:33.329 11:38:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.329 11:38:02 -- common/autotest_common.sh@10 -- # set +x 00:14:33.329 11:38:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.329 11:38:02 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:33.329 11:38:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.329 11:38:02 -- common/autotest_common.sh@10 -- # set +x 00:14:33.329 11:38:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.329 11:38:02 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:34.258 11:38:03 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:34.258 11:38:03 -- common/autotest_common.sh@1177 -- # local i=0 00:14:34.258 11:38:03 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:14:34.258 11:38:03 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:14:34.258 11:38:03 -- common/autotest_common.sh@1184 -- # sleep 2 00:14:36.779 11:38:05 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:14:36.779 11:38:05 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:14:36.779 11:38:05 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:14:36.779 11:38:05 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:14:36.779 11:38:05 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:14:36.779 11:38:05 -- common/autotest_common.sh@1187 -- # return 0 00:14:36.779 11:38:05 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:37.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.345 11:38:06 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:37.345 11:38:06 -- common/autotest_common.sh@1198 -- # local i=0 00:14:37.345 11:38:06 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:37.345 11:38:06 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:37.345 11:38:06 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:37.345 11:38:06 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:37.345 11:38:06 -- common/autotest_common.sh@1210 -- # return 0 00:14:37.345 
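From here the script loops five times, rebuilding the subsystem from scratch and running a full connect/verify/disconnect cycle on each pass. One iteration, condensed (same NQN, namespace ID, and listener address as this run; rpc.py shown generically, HOSTNQN as defined above):

    for i in $(seq 1 5); do
        rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
            -t rdma -a 192.168.100.8 -s 4420
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1

        nvme connect -i 15 --hostnqn=$HOSTNQN --hostid=${HOSTNQN##*:} \
            -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
        waitforserial SPDKISFASTANDAWESOME
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME

        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done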
11:38:06 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:37.346 11:38:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.346 11:38:06 -- common/autotest_common.sh@10 -- # set +x 00:14:37.346 11:38:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.346 11:38:06 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:37.346 11:38:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.346 11:38:06 -- common/autotest_common.sh@10 -- # set +x 00:14:37.346 11:38:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.346 11:38:06 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:37.346 11:38:06 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:37.346 11:38:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.346 11:38:06 -- common/autotest_common.sh@10 -- # set +x 00:14:37.346 11:38:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.346 11:38:06 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:37.346 11:38:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.346 11:38:06 -- common/autotest_common.sh@10 -- # set +x 00:14:37.346 [2024-07-21 11:38:06.658673] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:37.346 11:38:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.346 11:38:06 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:37.346 11:38:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.346 11:38:06 -- common/autotest_common.sh@10 -- # set +x 00:14:37.346 11:38:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.346 11:38:06 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:37.346 11:38:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.346 11:38:06 -- common/autotest_common.sh@10 -- # set +x 00:14:37.346 11:38:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.346 11:38:06 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:38.279 11:38:07 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:38.279 11:38:07 -- common/autotest_common.sh@1177 -- # local i=0 00:14:38.279 11:38:07 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:14:38.279 11:38:07 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:14:38.279 11:38:07 -- common/autotest_common.sh@1184 -- # sleep 2 00:14:40.806 11:38:09 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:14:40.806 11:38:09 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:14:40.806 11:38:09 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:14:40.806 11:38:09 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:14:40.806 11:38:09 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:14:40.806 11:38:09 -- common/autotest_common.sh@1187 -- # return 0 00:14:40.806 11:38:09 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:41.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.369 11:38:10 -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:41.369 11:38:10 -- common/autotest_common.sh@1198 -- # local i=0 00:14:41.369 11:38:10 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:41.369 11:38:10 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:41.369 11:38:10 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:41.369 11:38:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:41.369 11:38:10 -- common/autotest_common.sh@1210 -- # return 0 00:14:41.369 11:38:10 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:41.369 11:38:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:41.369 11:38:10 -- common/autotest_common.sh@10 -- # set +x 00:14:41.369 11:38:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:41.369 11:38:10 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:41.369 11:38:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:41.369 11:38:10 -- common/autotest_common.sh@10 -- # set +x 00:14:41.369 11:38:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:41.369 11:38:10 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:41.369 11:38:10 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:41.369 11:38:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:41.369 11:38:10 -- common/autotest_common.sh@10 -- # set +x 00:14:41.369 11:38:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:41.369 11:38:10 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:41.369 11:38:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:41.369 11:38:10 -- common/autotest_common.sh@10 -- # set +x 00:14:41.369 [2024-07-21 11:38:10.698120] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:41.369 11:38:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:41.369 11:38:10 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:41.369 11:38:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:41.369 11:38:10 -- common/autotest_common.sh@10 -- # set +x 00:14:41.369 11:38:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:41.369 11:38:10 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:41.369 11:38:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:41.369 11:38:10 -- common/autotest_common.sh@10 -- # set +x 00:14:41.369 11:38:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:41.369 11:38:10 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:42.302 11:38:11 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:42.302 11:38:11 -- common/autotest_common.sh@1177 -- # local i=0 00:14:42.302 11:38:11 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:14:42.302 11:38:11 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:14:42.302 11:38:11 -- common/autotest_common.sh@1184 -- # sleep 2 00:14:44.825 11:38:13 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:14:44.825 11:38:13 -- 
common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:14:44.825 11:38:13 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:14:44.825 11:38:13 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:14:44.825 11:38:13 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:14:44.825 11:38:13 -- common/autotest_common.sh@1187 -- # return 0 00:14:44.825 11:38:13 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:45.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.390 11:38:14 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:45.390 11:38:14 -- common/autotest_common.sh@1198 -- # local i=0 00:14:45.390 11:38:14 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:45.390 11:38:14 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:45.390 11:38:14 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:45.390 11:38:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:45.390 11:38:14 -- common/autotest_common.sh@1210 -- # return 0 00:14:45.390 11:38:14 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:45.390 11:38:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:45.390 11:38:14 -- common/autotest_common.sh@10 -- # set +x 00:14:45.390 11:38:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:45.390 11:38:14 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:45.390 11:38:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:45.390 11:38:14 -- common/autotest_common.sh@10 -- # set +x 00:14:45.390 11:38:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:45.390 11:38:14 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:45.390 11:38:14 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:45.390 11:38:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:45.390 11:38:14 -- common/autotest_common.sh@10 -- # set +x 00:14:45.390 11:38:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:45.390 11:38:14 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:45.390 11:38:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:45.390 11:38:14 -- common/autotest_common.sh@10 -- # set +x 00:14:45.390 [2024-07-21 11:38:14.726601] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:45.390 11:38:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:45.390 11:38:14 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:45.390 11:38:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:45.390 11:38:14 -- common/autotest_common.sh@10 -- # set +x 00:14:45.390 11:38:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:45.390 11:38:14 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:45.390 11:38:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:45.390 11:38:14 -- common/autotest_common.sh@10 -- # set +x 00:14:45.390 11:38:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:45.390 11:38:14 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
--hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:46.321 11:38:15 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:46.321 11:38:15 -- common/autotest_common.sh@1177 -- # local i=0 00:14:46.321 11:38:15 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:14:46.321 11:38:15 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:14:46.321 11:38:15 -- common/autotest_common.sh@1184 -- # sleep 2 00:14:48.842 11:38:17 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:14:48.842 11:38:17 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:14:48.842 11:38:17 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:14:48.842 11:38:17 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:14:48.842 11:38:17 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:14:48.842 11:38:17 -- common/autotest_common.sh@1187 -- # return 0 00:14:48.842 11:38:17 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:49.406 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.406 11:38:18 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:49.406 11:38:18 -- common/autotest_common.sh@1198 -- # local i=0 00:14:49.406 11:38:18 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:49.406 11:38:18 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:49.406 11:38:18 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:49.406 11:38:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:49.406 11:38:18 -- common/autotest_common.sh@1210 -- # return 0 00:14:49.406 11:38:18 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:49.406 11:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:49.406 11:38:18 -- common/autotest_common.sh@10 -- # set +x 00:14:49.406 11:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:49.406 11:38:18 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:49.406 11:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:49.406 11:38:18 -- common/autotest_common.sh@10 -- # set +x 00:14:49.406 11:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:49.406 11:38:18 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:49.406 11:38:18 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:49.406 11:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:49.406 11:38:18 -- common/autotest_common.sh@10 -- # set +x 00:14:49.406 11:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:49.406 11:38:18 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:49.406 11:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:49.406 11:38:18 -- common/autotest_common.sh@10 -- # set +x 00:14:49.406 [2024-07-21 11:38:18.746605] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:49.406 11:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:49.406 11:38:18 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:49.406 11:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:49.406 11:38:18 -- 
common/autotest_common.sh@10 -- # set +x 00:14:49.406 11:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:49.406 11:38:18 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:49.406 11:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:49.406 11:38:18 -- common/autotest_common.sh@10 -- # set +x 00:14:49.406 11:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:49.406 11:38:18 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:50.336 11:38:19 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:50.336 11:38:19 -- common/autotest_common.sh@1177 -- # local i=0 00:14:50.336 11:38:19 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:14:50.336 11:38:19 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:14:50.336 11:38:19 -- common/autotest_common.sh@1184 -- # sleep 2 00:14:52.890 11:38:21 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:14:52.890 11:38:21 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:14:52.890 11:38:21 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:14:52.890 11:38:21 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:14:52.890 11:38:21 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:14:52.890 11:38:21 -- common/autotest_common.sh@1187 -- # return 0 00:14:52.890 11:38:21 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:53.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.454 11:38:22 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:53.454 11:38:22 -- common/autotest_common.sh@1198 -- # local i=0 00:14:53.454 11:38:22 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:53.454 11:38:22 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:53.454 11:38:22 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:53.454 11:38:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:53.454 11:38:22 -- common/autotest_common.sh@1210 -- # return 0 00:14:53.454 11:38:22 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:53.454 11:38:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.454 11:38:22 -- common/autotest_common.sh@10 -- # set +x 00:14:53.454 11:38:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.454 11:38:22 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:53.454 11:38:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.454 11:38:22 -- common/autotest_common.sh@10 -- # set +x 00:14:53.454 11:38:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.454 11:38:22 -- target/rpc.sh@99 -- # seq 1 5 00:14:53.454 11:38:22 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:53.454 11:38:22 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:53.454 11:38:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.454 11:38:22 -- common/autotest_common.sh@10 -- # set +x 00:14:53.455 11:38:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.455 11:38:22 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:53.455 11:38:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.455 11:38:22 -- common/autotest_common.sh@10 -- # set +x 00:14:53.455 [2024-07-21 11:38:22.797180] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:53.455 11:38:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.455 11:38:22 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:53.455 11:38:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.455 11:38:22 -- common/autotest_common.sh@10 -- # set +x 00:14:53.455 11:38:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.455 11:38:22 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:53.455 11:38:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.455 11:38:22 -- common/autotest_common.sh@10 -- # set +x 00:14:53.455 11:38:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.455 11:38:22 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:53.455 11:38:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.455 11:38:22 -- common/autotest_common.sh@10 -- # set +x 00:14:53.455 11:38:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.455 11:38:22 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:53.455 11:38:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.455 11:38:22 -- common/autotest_common.sh@10 -- # set +x 00:14:53.455 11:38:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.455 11:38:22 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:53.455 11:38:22 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:53.455 11:38:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.455 11:38:22 -- common/autotest_common.sh@10 -- # set +x 00:14:53.455 11:38:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.455 11:38:22 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:53.455 11:38:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.455 11:38:22 -- common/autotest_common.sh@10 -- # set +x 00:14:53.455 [2024-07-21 11:38:22.845318] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:53.455 11:38:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.455 11:38:22 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:53.455 11:38:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.455 11:38:22 -- common/autotest_common.sh@10 -- # set +x 00:14:53.455 11:38:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.455 11:38:22 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:53.455 11:38:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.455 11:38:22 -- common/autotest_common.sh@10 -- # set +x 00:14:53.455 11:38:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.455 11:38:22 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:53.455 11:38:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.455 11:38:22 -- common/autotest_common.sh@10 -- # set +x 00:14:53.455 
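The five-pass loop running here never attaches a host at all; each iteration only churns the RPC lifecycle (create subsystem, add listener, add namespace, allow any host, remove namespace, delete subsystem), presumably to exercise the repeated setup/teardown paths rather than I/O. Condensed sketch:

    for i in $(seq 1 5); do
        rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
            -t rdma -a 192.168.100.8 -s 4420
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
        rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done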
11:38:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.455 11:38:22 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:53.455 11:38:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.455 11:38:22 -- common/autotest_common.sh@10 -- # set +x 00:14:53.724 11:38:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.724 11:38:22 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:53.724 11:38:22 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:53.724 11:38:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.724 11:38:22 -- common/autotest_common.sh@10 -- # set +x 00:14:53.724 11:38:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.724 11:38:22 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:53.724 11:38:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.724 11:38:22 -- common/autotest_common.sh@10 -- # set +x 00:14:53.724 [2024-07-21 11:38:22.893507] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:53.724 11:38:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.724 11:38:22 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:53.724 11:38:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.724 11:38:22 -- common/autotest_common.sh@10 -- # set +x 00:14:53.724 11:38:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.724 11:38:22 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:53.724 11:38:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.724 11:38:22 -- common/autotest_common.sh@10 -- # set +x 00:14:53.724 11:38:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.724 11:38:22 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:53.724 11:38:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.724 11:38:22 -- common/autotest_common.sh@10 -- # set +x 00:14:53.724 11:38:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.724 11:38:22 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:53.724 11:38:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.724 11:38:22 -- common/autotest_common.sh@10 -- # set +x 00:14:53.724 11:38:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.724 11:38:22 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:53.724 11:38:22 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:53.724 11:38:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.724 11:38:22 -- common/autotest_common.sh@10 -- # set +x 00:14:53.724 11:38:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.724 11:38:22 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:53.724 11:38:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.724 11:38:22 -- common/autotest_common.sh@10 -- # set +x 00:14:53.724 [2024-07-21 11:38:22.945694] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:53.724 11:38:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.724 11:38:22 -- target/rpc.sh@102 
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:53.724 11:38:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.724 11:38:22 -- common/autotest_common.sh@10 -- # set +x 00:14:53.724 11:38:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.724 11:38:22 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:53.724 11:38:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.724 11:38:22 -- common/autotest_common.sh@10 -- # set +x 00:14:53.724 11:38:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.724 11:38:22 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:53.724 11:38:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.724 11:38:22 -- common/autotest_common.sh@10 -- # set +x 00:14:53.724 11:38:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.724 11:38:22 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:53.724 11:38:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.724 11:38:22 -- common/autotest_common.sh@10 -- # set +x 00:14:53.724 11:38:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.724 11:38:22 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:53.724 11:38:22 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:53.724 11:38:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.724 11:38:22 -- common/autotest_common.sh@10 -- # set +x 00:14:53.724 11:38:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.724 11:38:22 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:53.724 11:38:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.724 11:38:22 -- common/autotest_common.sh@10 -- # set +x 00:14:53.724 [2024-07-21 11:38:22.993899] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:53.724 11:38:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.724 11:38:22 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:53.724 11:38:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.724 11:38:22 -- common/autotest_common.sh@10 -- # set +x 00:14:53.724 11:38:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.724 11:38:23 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:53.724 11:38:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.724 11:38:23 -- common/autotest_common.sh@10 -- # set +x 00:14:53.724 11:38:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.724 11:38:23 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:53.724 11:38:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.724 11:38:23 -- common/autotest_common.sh@10 -- # set +x 00:14:53.724 11:38:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.724 11:38:23 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:53.724 11:38:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.724 11:38:23 -- common/autotest_common.sh@10 -- # set +x 00:14:53.724 11:38:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.724 11:38:23 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:53.724 
11:38:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.724 11:38:23 -- common/autotest_common.sh@10 -- # set +x 00:14:53.724 11:38:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.725 11:38:23 -- target/rpc.sh@110 -- # stats='{ 00:14:53.725 "tick_rate": 2500000000, 00:14:53.725 "poll_groups": [ 00:14:53.725 { 00:14:53.725 "name": "nvmf_tgt_poll_group_0", 00:14:53.725 "admin_qpairs": 2, 00:14:53.725 "io_qpairs": 27, 00:14:53.725 "current_admin_qpairs": 0, 00:14:53.725 "current_io_qpairs": 0, 00:14:53.725 "pending_bdev_io": 0, 00:14:53.725 "completed_nvme_io": 78, 00:14:53.725 "transports": [ 00:14:53.725 { 00:14:53.725 "trtype": "RDMA", 00:14:53.725 "pending_data_buffer": 0, 00:14:53.725 "devices": [ 00:14:53.725 { 00:14:53.725 "name": "mlx5_0", 00:14:53.725 "polls": 3404797, 00:14:53.725 "idle_polls": 3404554, 00:14:53.725 "completions": 263, 00:14:53.725 "requests": 131, 00:14:53.725 "request_latency": 21362046, 00:14:53.725 "pending_free_request": 0, 00:14:53.725 "pending_rdma_read": 0, 00:14:53.725 "pending_rdma_write": 0, 00:14:53.725 "pending_rdma_send": 0, 00:14:53.725 "total_send_wrs": 207, 00:14:53.725 "send_doorbell_updates": 120, 00:14:53.725 "total_recv_wrs": 4227, 00:14:53.725 "recv_doorbell_updates": 120 00:14:53.725 }, 00:14:53.725 { 00:14:53.725 "name": "mlx5_1", 00:14:53.725 "polls": 3404797, 00:14:53.725 "idle_polls": 3404797, 00:14:53.725 "completions": 0, 00:14:53.725 "requests": 0, 00:14:53.725 "request_latency": 0, 00:14:53.725 "pending_free_request": 0, 00:14:53.725 "pending_rdma_read": 0, 00:14:53.725 "pending_rdma_write": 0, 00:14:53.725 "pending_rdma_send": 0, 00:14:53.725 "total_send_wrs": 0, 00:14:53.725 "send_doorbell_updates": 0, 00:14:53.725 "total_recv_wrs": 4096, 00:14:53.725 "recv_doorbell_updates": 1 00:14:53.725 } 00:14:53.725 ] 00:14:53.725 } 00:14:53.725 ] 00:14:53.725 }, 00:14:53.725 { 00:14:53.725 "name": "nvmf_tgt_poll_group_1", 00:14:53.725 "admin_qpairs": 2, 00:14:53.725 "io_qpairs": 26, 00:14:53.725 "current_admin_qpairs": 0, 00:14:53.725 "current_io_qpairs": 0, 00:14:53.725 "pending_bdev_io": 0, 00:14:53.725 "completed_nvme_io": 77, 00:14:53.725 "transports": [ 00:14:53.725 { 00:14:53.725 "trtype": "RDMA", 00:14:53.725 "pending_data_buffer": 0, 00:14:53.725 "devices": [ 00:14:53.725 { 00:14:53.725 "name": "mlx5_0", 00:14:53.725 "polls": 3343501, 00:14:53.725 "idle_polls": 3343260, 00:14:53.725 "completions": 260, 00:14:53.725 "requests": 130, 00:14:53.725 "request_latency": 20810886, 00:14:53.725 "pending_free_request": 0, 00:14:53.725 "pending_rdma_read": 0, 00:14:53.725 "pending_rdma_write": 0, 00:14:53.725 "pending_rdma_send": 0, 00:14:53.725 "total_send_wrs": 206, 00:14:53.725 "send_doorbell_updates": 119, 00:14:53.725 "total_recv_wrs": 4226, 00:14:53.725 "recv_doorbell_updates": 120 00:14:53.725 }, 00:14:53.725 { 00:14:53.725 "name": "mlx5_1", 00:14:53.725 "polls": 3343501, 00:14:53.725 "idle_polls": 3343501, 00:14:53.725 "completions": 0, 00:14:53.725 "requests": 0, 00:14:53.725 "request_latency": 0, 00:14:53.725 "pending_free_request": 0, 00:14:53.725 "pending_rdma_read": 0, 00:14:53.725 "pending_rdma_write": 0, 00:14:53.725 "pending_rdma_send": 0, 00:14:53.725 "total_send_wrs": 0, 00:14:53.725 "send_doorbell_updates": 0, 00:14:53.725 "total_recv_wrs": 4096, 00:14:53.725 "recv_doorbell_updates": 1 00:14:53.725 } 00:14:53.725 ] 00:14:53.725 } 00:14:53.725 ] 00:14:53.725 }, 00:14:53.725 { 00:14:53.725 "name": "nvmf_tgt_poll_group_2", 00:14:53.725 "admin_qpairs": 1, 00:14:53.725 "io_qpairs": 26, 00:14:53.725 
"current_admin_qpairs": 0, 00:14:53.725 "current_io_qpairs": 0, 00:14:53.725 "pending_bdev_io": 0, 00:14:53.725 "completed_nvme_io": 126, 00:14:53.725 "transports": [ 00:14:53.725 { 00:14:53.725 "trtype": "RDMA", 00:14:53.725 "pending_data_buffer": 0, 00:14:53.725 "devices": [ 00:14:53.725 { 00:14:53.725 "name": "mlx5_0", 00:14:53.725 "polls": 3454998, 00:14:53.725 "idle_polls": 3454730, 00:14:53.725 "completions": 307, 00:14:53.725 "requests": 153, 00:14:53.725 "request_latency": 33030380, 00:14:53.725 "pending_free_request": 0, 00:14:53.725 "pending_rdma_read": 0, 00:14:53.725 "pending_rdma_write": 0, 00:14:53.725 "pending_rdma_send": 0, 00:14:53.725 "total_send_wrs": 266, 00:14:53.725 "send_doorbell_updates": 130, 00:14:53.725 "total_recv_wrs": 4249, 00:14:53.725 "recv_doorbell_updates": 130 00:14:53.725 }, 00:14:53.725 { 00:14:53.725 "name": "mlx5_1", 00:14:53.725 "polls": 3454998, 00:14:53.725 "idle_polls": 3454998, 00:14:53.725 "completions": 0, 00:14:53.725 "requests": 0, 00:14:53.725 "request_latency": 0, 00:14:53.725 "pending_free_request": 0, 00:14:53.725 "pending_rdma_read": 0, 00:14:53.725 "pending_rdma_write": 0, 00:14:53.725 "pending_rdma_send": 0, 00:14:53.725 "total_send_wrs": 0, 00:14:53.725 "send_doorbell_updates": 0, 00:14:53.725 "total_recv_wrs": 4096, 00:14:53.725 "recv_doorbell_updates": 1 00:14:53.725 } 00:14:53.725 ] 00:14:53.725 } 00:14:53.725 ] 00:14:53.725 }, 00:14:53.725 { 00:14:53.725 "name": "nvmf_tgt_poll_group_3", 00:14:53.725 "admin_qpairs": 2, 00:14:53.725 "io_qpairs": 26, 00:14:53.725 "current_admin_qpairs": 0, 00:14:53.725 "current_io_qpairs": 0, 00:14:53.725 "pending_bdev_io": 0, 00:14:53.725 "completed_nvme_io": 174, 00:14:53.725 "transports": [ 00:14:53.725 { 00:14:53.725 "trtype": "RDMA", 00:14:53.725 "pending_data_buffer": 0, 00:14:53.725 "devices": [ 00:14:53.726 { 00:14:53.726 "name": "mlx5_0", 00:14:53.726 "polls": 2667247, 00:14:53.726 "idle_polls": 2666853, 00:14:53.726 "completions": 458, 00:14:53.726 "requests": 229, 00:14:53.726 "request_latency": 50840678, 00:14:53.726 "pending_free_request": 0, 00:14:53.726 "pending_rdma_read": 0, 00:14:53.726 "pending_rdma_write": 0, 00:14:53.726 "pending_rdma_send": 0, 00:14:53.726 "total_send_wrs": 403, 00:14:53.726 "send_doorbell_updates": 192, 00:14:53.726 "total_recv_wrs": 4325, 00:14:53.726 "recv_doorbell_updates": 193 00:14:53.726 }, 00:14:53.726 { 00:14:53.726 "name": "mlx5_1", 00:14:53.726 "polls": 2667247, 00:14:53.726 "idle_polls": 2667247, 00:14:53.726 "completions": 0, 00:14:53.726 "requests": 0, 00:14:53.726 "request_latency": 0, 00:14:53.726 "pending_free_request": 0, 00:14:53.726 "pending_rdma_read": 0, 00:14:53.726 "pending_rdma_write": 0, 00:14:53.726 "pending_rdma_send": 0, 00:14:53.726 "total_send_wrs": 0, 00:14:53.726 "send_doorbell_updates": 0, 00:14:53.726 "total_recv_wrs": 4096, 00:14:53.726 "recv_doorbell_updates": 1 00:14:53.726 } 00:14:53.726 ] 00:14:53.726 } 00:14:53.726 ] 00:14:53.726 } 00:14:53.726 ] 00:14:53.726 }' 00:14:53.726 11:38:23 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:53.726 11:38:23 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:53.726 11:38:23 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:53.726 11:38:23 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:53.726 11:38:23 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:53.726 11:38:23 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:53.726 11:38:23 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:53.726 
11:38:23 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:53.726 11:38:23 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:53.988 11:38:23 -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:14:53.988 11:38:23 -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:14:53.988 11:38:23 -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:14:53.988 11:38:23 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:14:53.988 11:38:23 -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:14:53.988 11:38:23 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:53.988 11:38:23 -- target/rpc.sh@117 -- # (( 1288 > 0 )) 00:14:53.988 11:38:23 -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:14:53.988 11:38:23 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:14:53.988 11:38:23 -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:14:53.988 11:38:23 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:53.988 11:38:23 -- target/rpc.sh@118 -- # (( 126043990 > 0 )) 00:14:53.988 11:38:23 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:53.988 11:38:23 -- target/rpc.sh@123 -- # nvmftestfini 00:14:53.988 11:38:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:53.988 11:38:23 -- nvmf/common.sh@116 -- # sync 00:14:53.988 11:38:23 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:14:53.988 11:38:23 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:14:53.988 11:38:23 -- nvmf/common.sh@119 -- # set +e 00:14:53.988 11:38:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:53.988 11:38:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:14:53.988 rmmod nvme_rdma 00:14:53.988 rmmod nvme_fabrics 00:14:53.988 11:38:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:53.988 11:38:23 -- nvmf/common.sh@123 -- # set -e 00:14:53.988 11:38:23 -- nvmf/common.sh@124 -- # return 0 00:14:53.988 11:38:23 -- nvmf/common.sh@477 -- # '[' -n 2287547 ']' 00:14:53.988 11:38:23 -- nvmf/common.sh@478 -- # killprocess 2287547 00:14:53.988 11:38:23 -- common/autotest_common.sh@926 -- # '[' -z 2287547 ']' 00:14:53.988 11:38:23 -- common/autotest_common.sh@930 -- # kill -0 2287547 00:14:53.988 11:38:23 -- common/autotest_common.sh@931 -- # uname 00:14:53.988 11:38:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:53.988 11:38:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2287547 00:14:53.989 11:38:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:53.989 11:38:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:53.989 11:38:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2287547' 00:14:53.989 killing process with pid 2287547 00:14:53.989 11:38:23 -- common/autotest_common.sh@945 -- # kill 2287547 00:14:53.989 11:38:23 -- common/autotest_common.sh@950 -- # wait 2287547 00:14:54.246 11:38:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:54.246 11:38:23 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:14:54.246 00:14:54.246 real 0m39.273s 00:14:54.246 user 2m4.324s 00:14:54.246 sys 0m8.159s 00:14:54.246 11:38:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:54.246 11:38:23 -- common/autotest_common.sh@10 -- # set +x 00:14:54.246 ************************************ 00:14:54.246 END TEST nvmf_rpc 00:14:54.246 ************************************ 00:14:54.502 11:38:23 -- 
nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:14:54.502 11:38:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:54.502 11:38:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:54.502 11:38:23 -- common/autotest_common.sh@10 -- # set +x 00:14:54.502 ************************************ 00:14:54.502 START TEST nvmf_invalid 00:14:54.502 ************************************ 00:14:54.502 11:38:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:14:54.502 * Looking for test storage... 00:14:54.502 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:54.502 11:38:23 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:54.502 11:38:23 -- nvmf/common.sh@7 -- # uname -s 00:14:54.502 11:38:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:54.502 11:38:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:54.502 11:38:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:54.502 11:38:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:54.502 11:38:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:54.502 11:38:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:54.502 11:38:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:54.502 11:38:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:54.502 11:38:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:54.502 11:38:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:54.502 11:38:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:54.502 11:38:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:14:54.502 11:38:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:54.502 11:38:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:54.502 11:38:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:54.502 11:38:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:54.502 11:38:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:54.502 11:38:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:54.502 11:38:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:54.502 11:38:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.502 11:38:23 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.502 11:38:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.502 11:38:23 -- paths/export.sh@5 -- # export PATH 00:14:54.502 11:38:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.502 11:38:23 -- nvmf/common.sh@46 -- # : 0 00:14:54.502 11:38:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:54.502 11:38:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:54.502 11:38:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:54.502 11:38:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:54.502 11:38:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:54.502 11:38:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:54.502 11:38:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:54.502 11:38:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:54.502 11:38:23 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:54.502 11:38:23 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:54.502 11:38:23 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:54.502 11:38:23 -- target/invalid.sh@14 -- # target=foobar 00:14:54.502 11:38:23 -- target/invalid.sh@16 -- # RANDOM=0 00:14:54.502 11:38:23 -- target/invalid.sh@34 -- # nvmftestinit 00:14:54.502 11:38:23 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:14:54.502 11:38:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:54.502 11:38:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:54.502 11:38:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:54.502 11:38:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:54.502 11:38:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.502 11:38:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:54.502 11:38:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.502 11:38:23 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:54.502 11:38:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:54.502 11:38:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:54.502 11:38:23 -- common/autotest_common.sh@10 -- # set +x 00:15:02.602 11:38:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:02.602 11:38:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:02.602 11:38:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:02.602 11:38:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:02.602 11:38:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:02.602 11:38:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:02.602 11:38:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:02.602 11:38:31 -- nvmf/common.sh@294 -- # net_devs=() 00:15:02.602 11:38:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:02.602 11:38:31 -- nvmf/common.sh@295 -- # e810=() 00:15:02.602 11:38:31 -- nvmf/common.sh@295 -- # local -ga e810 00:15:02.602 11:38:31 -- nvmf/common.sh@296 -- # x722=() 00:15:02.602 11:38:31 -- nvmf/common.sh@296 -- # local -ga x722 00:15:02.602 11:38:31 -- nvmf/common.sh@297 -- # mlx=() 00:15:02.602 11:38:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:02.602 11:38:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:02.602 11:38:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:02.602 11:38:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:02.602 11:38:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:02.602 11:38:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:02.602 11:38:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:02.602 11:38:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:02.602 11:38:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:02.602 11:38:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:02.602 11:38:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:02.602 11:38:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:02.602 11:38:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:02.602 11:38:31 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:15:02.602 11:38:31 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:15:02.602 11:38:31 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:15:02.602 11:38:31 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:15:02.602 11:38:31 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:15:02.602 11:38:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:02.603 11:38:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:02.603 11:38:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:02.603 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:02.603 11:38:31 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:02.603 11:38:31 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:02.603 11:38:31 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:02.603 11:38:31 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:02.603 11:38:31 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:02.603 11:38:31 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:02.603 11:38:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:02.603 11:38:31 -- nvmf/common.sh@340 -- # echo 'Found 
0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:02.603 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:02.603 11:38:31 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:02.603 11:38:31 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:02.603 11:38:31 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:02.603 11:38:31 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:02.603 11:38:31 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:02.603 11:38:31 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:02.603 11:38:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:02.603 11:38:31 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:15:02.603 11:38:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:02.603 11:38:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:02.603 11:38:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:02.603 11:38:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:02.603 11:38:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:02.603 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:02.603 11:38:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:02.603 11:38:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:02.603 11:38:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:02.603 11:38:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:02.603 11:38:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:02.603 11:38:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:02.603 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:02.603 11:38:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:02.603 11:38:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:02.603 11:38:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:02.603 11:38:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:02.603 11:38:31 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:15:02.603 11:38:31 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:15:02.603 11:38:31 -- nvmf/common.sh@408 -- # rdma_device_init 00:15:02.603 11:38:31 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:15:02.603 11:38:31 -- nvmf/common.sh@57 -- # uname 00:15:02.603 11:38:31 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:15:02.603 11:38:31 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:15:02.603 11:38:31 -- nvmf/common.sh@62 -- # modprobe ib_core 00:15:02.603 11:38:31 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:15:02.603 11:38:31 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:15:02.603 11:38:31 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:15:02.603 11:38:31 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:15:02.603 11:38:31 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:15:02.603 11:38:31 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:15:02.603 11:38:31 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:02.603 11:38:31 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:15:02.603 11:38:31 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:02.603 11:38:31 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:02.603 11:38:31 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:02.603 11:38:31 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:02.603 11:38:31 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:02.603 11:38:31 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 
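
The trace above shows nvmftestinit mapping each Mellanox PCI function to its net device through sysfs and then reading the interface's IPv4 address with the same ip/awk/cut pipeline that appears in the trace. A minimal standalone sketch of that discovery, assuming the two PCI addresses this run found:

  for pci in 0000:d9:00.0 0000:d9:00.1; do
    # each NIC PCI function publishes its net device name under sysfs
    for path in /sys/bus/pci/devices/$pci/net/*; do
      dev=${path##*/}
      # 4th field of `ip -o -4` is addr/prefix; cut keeps the address, as in the traced get_ip_address
      ip -o -4 addr show "$dev" | awk '{print $4}' | cut -d/ -f1
    done
  done
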
00:15:02.603 11:38:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:02.603 11:38:31 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:02.603 11:38:31 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:02.603 11:38:31 -- nvmf/common.sh@104 -- # continue 2 00:15:02.603 11:38:31 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:02.603 11:38:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:02.603 11:38:31 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:02.603 11:38:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:02.603 11:38:31 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:02.603 11:38:31 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:02.603 11:38:31 -- nvmf/common.sh@104 -- # continue 2 00:15:02.603 11:38:31 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:02.603 11:38:31 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:15:02.603 11:38:31 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:02.603 11:38:31 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:02.603 11:38:31 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:02.603 11:38:31 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:02.603 11:38:31 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:15:02.603 11:38:31 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:15:02.603 11:38:31 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:15:02.603 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:02.603 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:02.603 altname enp217s0f0np0 00:15:02.603 altname ens818f0np0 00:15:02.603 inet 192.168.100.8/24 scope global mlx_0_0 00:15:02.603 valid_lft forever preferred_lft forever 00:15:02.603 11:38:31 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:02.603 11:38:31 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:15:02.603 11:38:31 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:02.603 11:38:31 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:02.603 11:38:31 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:02.603 11:38:31 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:02.603 11:38:31 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:15:02.603 11:38:31 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:15:02.603 11:38:31 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:15:02.603 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:02.603 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:02.603 altname enp217s0f1np1 00:15:02.603 altname ens818f1np1 00:15:02.603 inet 192.168.100.9/24 scope global mlx_0_1 00:15:02.603 valid_lft forever preferred_lft forever 00:15:02.603 11:38:31 -- nvmf/common.sh@410 -- # return 0 00:15:02.603 11:38:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:02.603 11:38:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:02.603 11:38:31 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:15:02.603 11:38:31 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:15:02.603 11:38:31 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:15:02.603 11:38:31 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:02.603 11:38:31 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:02.603 11:38:31 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:02.603 11:38:31 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:02.603 11:38:31 -- nvmf/common.sh@95 -- # (( 2 == 0 
)) 00:15:02.603 11:38:31 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:02.603 11:38:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:02.603 11:38:31 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:02.603 11:38:31 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:02.603 11:38:31 -- nvmf/common.sh@104 -- # continue 2 00:15:02.603 11:38:31 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:02.603 11:38:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:02.603 11:38:31 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:02.603 11:38:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:02.603 11:38:31 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:02.603 11:38:31 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:02.603 11:38:31 -- nvmf/common.sh@104 -- # continue 2 00:15:02.603 11:38:31 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:02.603 11:38:31 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:15:02.603 11:38:31 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:02.603 11:38:31 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:02.603 11:38:31 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:02.603 11:38:31 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:02.603 11:38:31 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:02.603 11:38:31 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:15:02.603 11:38:31 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:02.603 11:38:31 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:02.603 11:38:31 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:02.603 11:38:31 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:02.603 11:38:31 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:15:02.603 192.168.100.9' 00:15:02.603 11:38:31 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:15:02.603 192.168.100.9' 00:15:02.603 11:38:31 -- nvmf/common.sh@445 -- # head -n 1 00:15:02.603 11:38:31 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:02.603 11:38:31 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:02.603 192.168.100.9' 00:15:02.603 11:38:31 -- nvmf/common.sh@446 -- # tail -n +2 00:15:02.603 11:38:31 -- nvmf/common.sh@446 -- # head -n 1 00:15:02.603 11:38:31 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:02.603 11:38:31 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:15:02.603 11:38:31 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:02.603 11:38:31 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:15:02.603 11:38:31 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:15:02.603 11:38:31 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:15:02.603 11:38:31 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:02.603 11:38:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:02.603 11:38:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:02.603 11:38:31 -- common/autotest_common.sh@10 -- # set +x 00:15:02.603 11:38:31 -- nvmf/common.sh@469 -- # nvmfpid=2296898 00:15:02.603 11:38:31 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:02.603 11:38:31 -- nvmf/common.sh@470 -- # waitforlisten 2296898 00:15:02.603 11:38:31 -- common/autotest_common.sh@819 -- # '[' -z 2296898 ']' 00:15:02.603 11:38:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.603 
11:38:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:02.603 11:38:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.603 11:38:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:02.603 11:38:31 -- common/autotest_common.sh@10 -- # set +x 00:15:02.603 [2024-07-21 11:38:31.909466] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:02.603 [2024-07-21 11:38:31.909513] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.603 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.603 [2024-07-21 11:38:31.994325] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:02.860 [2024-07-21 11:38:32.032977] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:02.860 [2024-07-21 11:38:32.033084] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:02.860 [2024-07-21 11:38:32.033094] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:02.860 [2024-07-21 11:38:32.033103] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:02.860 [2024-07-21 11:38:32.033343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.860 [2024-07-21 11:38:32.033363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:02.860 [2024-07-21 11:38:32.033450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:02.860 [2024-07-21 11:38:32.033452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.423 11:38:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:03.423 11:38:32 -- common/autotest_common.sh@852 -- # return 0 00:15:03.423 11:38:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:03.423 11:38:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:03.423 11:38:32 -- common/autotest_common.sh@10 -- # set +x 00:15:03.423 11:38:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:03.423 11:38:32 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:03.423 11:38:32 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode20948 00:15:03.680 [2024-07-21 11:38:32.902396] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:03.680 11:38:32 -- target/invalid.sh@40 -- # out='request: 00:15:03.680 { 00:15:03.680 "nqn": "nqn.2016-06.io.spdk:cnode20948", 00:15:03.680 "tgt_name": "foobar", 00:15:03.680 "method": "nvmf_create_subsystem", 00:15:03.680 "req_id": 1 00:15:03.680 } 00:15:03.680 Got JSON-RPC error response 00:15:03.680 response: 00:15:03.680 { 00:15:03.680 "code": -32603, 00:15:03.680 "message": "Unable to find target foobar" 00:15:03.680 }' 00:15:03.680 11:38:32 -- target/invalid.sh@41 -- # [[ request: 00:15:03.680 { 00:15:03.680 "nqn": "nqn.2016-06.io.spdk:cnode20948", 00:15:03.680 "tgt_name": "foobar", 00:15:03.680 "method": "nvmf_create_subsystem", 
00:15:03.680 "req_id": 1 00:15:03.680 } 00:15:03.680 Got JSON-RPC error response 00:15:03.680 response: 00:15:03.680 { 00:15:03.680 "code": -32603, 00:15:03.680 "message": "Unable to find target foobar" 00:15:03.680 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:03.680 11:38:32 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:03.680 11:38:32 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode16397 00:15:03.680 [2024-07-21 11:38:33.079056] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16397: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:03.937 11:38:33 -- target/invalid.sh@45 -- # out='request: 00:15:03.937 { 00:15:03.937 "nqn": "nqn.2016-06.io.spdk:cnode16397", 00:15:03.937 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:03.937 "method": "nvmf_create_subsystem", 00:15:03.937 "req_id": 1 00:15:03.937 } 00:15:03.937 Got JSON-RPC error response 00:15:03.937 response: 00:15:03.937 { 00:15:03.937 "code": -32602, 00:15:03.937 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:03.937 }' 00:15:03.937 11:38:33 -- target/invalid.sh@46 -- # [[ request: 00:15:03.937 { 00:15:03.937 "nqn": "nqn.2016-06.io.spdk:cnode16397", 00:15:03.937 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:03.937 "method": "nvmf_create_subsystem", 00:15:03.937 "req_id": 1 00:15:03.937 } 00:15:03.937 Got JSON-RPC error response 00:15:03.937 response: 00:15:03.937 { 00:15:03.937 "code": -32602, 00:15:03.937 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:03.937 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:03.937 11:38:33 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:03.937 11:38:33 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode21755 00:15:03.937 [2024-07-21 11:38:33.263618] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21755: invalid model number 'SPDK_Controller' 00:15:03.937 11:38:33 -- target/invalid.sh@50 -- # out='request: 00:15:03.937 { 00:15:03.937 "nqn": "nqn.2016-06.io.spdk:cnode21755", 00:15:03.937 "model_number": "SPDK_Controller\u001f", 00:15:03.937 "method": "nvmf_create_subsystem", 00:15:03.937 "req_id": 1 00:15:03.937 } 00:15:03.937 Got JSON-RPC error response 00:15:03.937 response: 00:15:03.937 { 00:15:03.937 "code": -32602, 00:15:03.937 "message": "Invalid MN SPDK_Controller\u001f" 00:15:03.937 }' 00:15:03.937 11:38:33 -- target/invalid.sh@51 -- # [[ request: 00:15:03.937 { 00:15:03.937 "nqn": "nqn.2016-06.io.spdk:cnode21755", 00:15:03.937 "model_number": "SPDK_Controller\u001f", 00:15:03.937 "method": "nvmf_create_subsystem", 00:15:03.937 "req_id": 1 00:15:03.937 } 00:15:03.937 Got JSON-RPC error response 00:15:03.937 response: 00:15:03.937 { 00:15:03.937 "code": -32602, 00:15:03.937 "message": "Invalid MN SPDK_Controller\u001f" 00:15:03.937 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:03.937 11:38:33 -- target/invalid.sh@54 -- # gen_random_s 21 00:15:03.937 11:38:33 -- target/invalid.sh@19 -- # local length=21 ll 00:15:03.937 11:38:33 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' 
'93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:03.937 11:38:33 -- target/invalid.sh@21 -- # local chars 00:15:03.937 11:38:33 -- target/invalid.sh@22 -- # local string 00:15:03.937 11:38:33 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:03.937 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.937 11:38:33 -- target/invalid.sh@25 -- # printf %x 64 00:15:03.937 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x40' 00:15:03.937 11:38:33 -- target/invalid.sh@25 -- # string+=@ 00:15:03.937 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.937 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.937 11:38:33 -- target/invalid.sh@25 -- # printf %x 87 00:15:03.937 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x57' 00:15:03.937 11:38:33 -- target/invalid.sh@25 -- # string+=W 00:15:03.937 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.937 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.937 11:38:33 -- target/invalid.sh@25 -- # printf %x 82 00:15:03.938 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x52' 00:15:03.938 11:38:33 -- target/invalid.sh@25 -- # string+=R 00:15:03.938 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.938 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.938 11:38:33 -- target/invalid.sh@25 -- # printf %x 83 00:15:03.938 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x53' 00:15:03.938 11:38:33 -- target/invalid.sh@25 -- # string+=S 00:15:03.938 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.938 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.938 11:38:33 -- target/invalid.sh@25 -- # printf %x 51 00:15:03.938 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x33' 00:15:03.938 11:38:33 -- target/invalid.sh@25 -- # string+=3 00:15:03.938 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.938 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.938 11:38:33 -- target/invalid.sh@25 -- # printf %x 117 00:15:03.938 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x75' 00:15:03.938 11:38:33 -- target/invalid.sh@25 -- # string+=u 00:15:03.938 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.938 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.938 11:38:33 -- target/invalid.sh@25 -- # printf %x 87 00:15:03.938 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x57' 00:15:03.938 11:38:33 -- target/invalid.sh@25 -- # string+=W 00:15:03.938 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.938 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.938 11:38:33 -- target/invalid.sh@25 -- # printf %x 56 00:15:04.194 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x38' 00:15:04.194 11:38:33 -- target/invalid.sh@25 -- # string+=8 00:15:04.194 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.194 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.194 11:38:33 -- target/invalid.sh@25 -- # printf %x 118 00:15:04.194 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x76' 00:15:04.194 11:38:33 -- target/invalid.sh@25 -- # string+=v 00:15:04.194 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.194 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.194 11:38:33 -- target/invalid.sh@25 -- # printf %x 127 00:15:04.194 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:15:04.194 11:38:33 -- target/invalid.sh@25 -- # 
string+=$'\177' 00:15:04.194 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.194 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.194 11:38:33 -- target/invalid.sh@25 -- # printf %x 52 00:15:04.194 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x34' 00:15:04.194 11:38:33 -- target/invalid.sh@25 -- # string+=4 00:15:04.194 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.194 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.194 11:38:33 -- target/invalid.sh@25 -- # printf %x 127 00:15:04.194 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:15:04.194 11:38:33 -- target/invalid.sh@25 -- # string+=$'\177' 00:15:04.194 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.194 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.194 11:38:33 -- target/invalid.sh@25 -- # printf %x 84 00:15:04.194 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x54' 00:15:04.194 11:38:33 -- target/invalid.sh@25 -- # string+=T 00:15:04.194 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.194 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.194 11:38:33 -- target/invalid.sh@25 -- # printf %x 65 00:15:04.194 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x41' 00:15:04.194 11:38:33 -- target/invalid.sh@25 -- # string+=A 00:15:04.194 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.194 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.194 11:38:33 -- target/invalid.sh@25 -- # printf %x 114 00:15:04.194 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x72' 00:15:04.194 11:38:33 -- target/invalid.sh@25 -- # string+=r 00:15:04.194 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.194 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.194 11:38:33 -- target/invalid.sh@25 -- # printf %x 72 00:15:04.194 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x48' 00:15:04.194 11:38:33 -- target/invalid.sh@25 -- # string+=H 00:15:04.194 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.194 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.194 11:38:33 -- target/invalid.sh@25 -- # printf %x 114 00:15:04.194 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x72' 00:15:04.194 11:38:33 -- target/invalid.sh@25 -- # string+=r 00:15:04.194 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.194 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.194 11:38:33 -- target/invalid.sh@25 -- # printf %x 108 00:15:04.194 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:15:04.194 11:38:33 -- target/invalid.sh@25 -- # string+=l 00:15:04.194 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.194 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.194 11:38:33 -- target/invalid.sh@25 -- # printf %x 122 00:15:04.194 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:15:04.194 11:38:33 -- target/invalid.sh@25 -- # string+=z 00:15:04.194 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.194 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.194 11:38:33 -- target/invalid.sh@25 -- # printf %x 79 00:15:04.194 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:15:04.194 11:38:33 -- target/invalid.sh@25 -- # string+=O 00:15:04.194 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.194 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.194 11:38:33 -- target/invalid.sh@25 -- # printf %x 39 00:15:04.194 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x27' 00:15:04.194 11:38:33 -- target/invalid.sh@25 
-- # string+=\' 00:15:04.194 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.194 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.194 11:38:33 -- target/invalid.sh@28 -- # [[ @ == \- ]] 00:15:04.194 11:38:33 -- target/invalid.sh@31 -- # echo '@WRS3uW8v4TArHrlzO'\''' 00:15:04.194 11:38:33 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '@WRS3uW8v4TArHrlzO'\''' nqn.2016-06.io.spdk:cnode8300 00:15:04.194 [2024-07-21 11:38:33.612793] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8300: invalid serial number '@WRS3uW8v4TArHrlzO'' 00:15:04.451 11:38:33 -- target/invalid.sh@54 -- # out='request: 00:15:04.451 { 00:15:04.451 "nqn": "nqn.2016-06.io.spdk:cnode8300", 00:15:04.451 "serial_number": "@WRS3uW8v\u007f4\u007fTArHrlzO'\''", 00:15:04.451 "method": "nvmf_create_subsystem", 00:15:04.451 "req_id": 1 00:15:04.451 } 00:15:04.451 Got JSON-RPC error response 00:15:04.451 response: 00:15:04.451 { 00:15:04.451 "code": -32602, 00:15:04.451 "message": "Invalid SN @WRS3uW8v\u007f4\u007fTArHrlzO'\''" 00:15:04.451 }' 00:15:04.451 11:38:33 -- target/invalid.sh@55 -- # [[ request: 00:15:04.451 { 00:15:04.451 "nqn": "nqn.2016-06.io.spdk:cnode8300", 00:15:04.451 "serial_number": "@WRS3uW8v\u007f4\u007fTArHrlzO'", 00:15:04.451 "method": "nvmf_create_subsystem", 00:15:04.451 "req_id": 1 00:15:04.451 } 00:15:04.451 Got JSON-RPC error response 00:15:04.451 response: 00:15:04.451 { 00:15:04.451 "code": -32602, 00:15:04.451 "message": "Invalid SN @WRS3uW8v\u007f4\u007fTArHrlzO'" 00:15:04.451 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:04.451 11:38:33 -- target/invalid.sh@58 -- # gen_random_s 41 00:15:04.451 11:38:33 -- target/invalid.sh@19 -- # local length=41 ll 00:15:04.451 11:38:33 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:04.451 11:38:33 -- target/invalid.sh@21 -- # local chars 00:15:04.451 11:38:33 -- target/invalid.sh@22 -- # local string 00:15:04.451 11:38:33 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:04.451 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # printf %x 91 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # string+='[' 00:15:04.451 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.451 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # printf %x 88 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x58' 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # string+=X 00:15:04.451 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.451 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # printf %x 75 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # string+=K 00:15:04.451 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.451 11:38:33 -- 
target/invalid.sh@24 -- # (( ll < length )) 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # printf %x 45 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # string+=- 00:15:04.451 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.451 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # printf %x 43 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # string+=+ 00:15:04.451 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.451 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # printf %x 65 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x41' 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # string+=A 00:15:04.451 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.451 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # printf %x 87 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x57' 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # string+=W 00:15:04.451 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.451 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # printf %x 97 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x61' 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # string+=a 00:15:04.451 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.451 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # printf %x 46 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # string+=. 
00:15:04.451 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.451 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # printf %x 68 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x44' 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # string+=D 00:15:04.451 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.451 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # printf %x 107 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # string+=k 00:15:04.451 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.451 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # printf %x 72 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x48' 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # string+=H 00:15:04.451 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.451 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # printf %x 103 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x67' 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # string+=g 00:15:04.451 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.451 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # printf %x 49 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x31' 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # string+=1 00:15:04.451 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.451 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # printf %x 111 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # string+=o 00:15:04.451 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.451 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # printf %x 61 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # string+== 00:15:04.451 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.451 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # printf %x 124 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # string+='|' 00:15:04.451 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.451 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # printf %x 34 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x22' 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # string+='"' 00:15:04.451 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.451 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # printf %x 117 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x75' 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # string+=u 00:15:04.451 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.451 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.451 11:38:33 -- target/invalid.sh@25 -- # printf %x 96 00:15:04.452 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x60' 00:15:04.452 11:38:33 -- target/invalid.sh@25 -- # string+='`' 
00:15:04.452 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.452 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.452 11:38:33 -- target/invalid.sh@25 -- # printf %x 127 00:15:04.452 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:15:04.452 11:38:33 -- target/invalid.sh@25 -- # string+=$'\177' 00:15:04.452 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.452 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.452 11:38:33 -- target/invalid.sh@25 -- # printf %x 45 00:15:04.452 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:15:04.452 11:38:33 -- target/invalid.sh@25 -- # string+=- 00:15:04.452 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.452 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.452 11:38:33 -- target/invalid.sh@25 -- # printf %x 39 00:15:04.452 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x27' 00:15:04.452 11:38:33 -- target/invalid.sh@25 -- # string+=\' 00:15:04.452 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.452 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.452 11:38:33 -- target/invalid.sh@25 -- # printf %x 51 00:15:04.452 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x33' 00:15:04.452 11:38:33 -- target/invalid.sh@25 -- # string+=3 00:15:04.452 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.452 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.452 11:38:33 -- target/invalid.sh@25 -- # printf %x 35 00:15:04.452 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x23' 00:15:04.452 11:38:33 -- target/invalid.sh@25 -- # string+='#' 00:15:04.452 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.452 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.452 11:38:33 -- target/invalid.sh@25 -- # printf %x 84 00:15:04.452 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x54' 00:15:04.452 11:38:33 -- target/invalid.sh@25 -- # string+=T 00:15:04.452 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.452 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.452 11:38:33 -- target/invalid.sh@25 -- # printf %x 113 00:15:04.452 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x71' 00:15:04.452 11:38:33 -- target/invalid.sh@25 -- # string+=q 00:15:04.452 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.452 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.452 11:38:33 -- target/invalid.sh@25 -- # printf %x 49 00:15:04.452 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x31' 00:15:04.452 11:38:33 -- target/invalid.sh@25 -- # string+=1 00:15:04.452 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.452 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.452 11:38:33 -- target/invalid.sh@25 -- # printf %x 34 00:15:04.452 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x22' 00:15:04.452 11:38:33 -- target/invalid.sh@25 -- # string+='"' 00:15:04.452 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.452 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.709 11:38:33 -- target/invalid.sh@25 -- # printf %x 112 00:15:04.709 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x70' 00:15:04.709 11:38:33 -- target/invalid.sh@25 -- # string+=p 00:15:04.709 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.709 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.709 11:38:33 -- target/invalid.sh@25 -- # printf %x 40 00:15:04.709 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x28' 00:15:04.709 11:38:33 -- target/invalid.sh@25 -- # 
string+='(' 00:15:04.709 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.709 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.709 11:38:33 -- target/invalid.sh@25 -- # printf %x 110 00:15:04.709 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:15:04.709 11:38:33 -- target/invalid.sh@25 -- # string+=n 00:15:04.709 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.709 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.709 11:38:33 -- target/invalid.sh@25 -- # printf %x 90 00:15:04.709 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:15:04.709 11:38:33 -- target/invalid.sh@25 -- # string+=Z 00:15:04.709 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.709 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.709 11:38:33 -- target/invalid.sh@25 -- # printf %x 44 00:15:04.709 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:15:04.709 11:38:33 -- target/invalid.sh@25 -- # string+=, 00:15:04.709 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.709 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.709 11:38:33 -- target/invalid.sh@25 -- # printf %x 120 00:15:04.709 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x78' 00:15:04.709 11:38:33 -- target/invalid.sh@25 -- # string+=x 00:15:04.709 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.709 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.709 11:38:33 -- target/invalid.sh@25 -- # printf %x 88 00:15:04.709 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x58' 00:15:04.709 11:38:33 -- target/invalid.sh@25 -- # string+=X 00:15:04.709 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.709 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.709 11:38:33 -- target/invalid.sh@25 -- # printf %x 110 00:15:04.709 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:15:04.709 11:38:33 -- target/invalid.sh@25 -- # string+=n 00:15:04.709 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.709 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.709 11:38:33 -- target/invalid.sh@25 -- # printf %x 119 00:15:04.709 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x77' 00:15:04.709 11:38:33 -- target/invalid.sh@25 -- # string+=w 00:15:04.709 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.709 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.709 11:38:33 -- target/invalid.sh@25 -- # printf %x 80 00:15:04.709 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x50' 00:15:04.709 11:38:33 -- target/invalid.sh@25 -- # string+=P 00:15:04.709 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.709 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.709 11:38:33 -- target/invalid.sh@25 -- # printf %x 43 00:15:04.709 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:15:04.709 11:38:33 -- target/invalid.sh@25 -- # string+=+ 00:15:04.709 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.709 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.709 11:38:33 -- target/invalid.sh@25 -- # printf %x 61 00:15:04.709 11:38:33 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:15:04.709 11:38:33 -- target/invalid.sh@25 -- # string+== 00:15:04.709 11:38:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:04.709 11:38:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:04.709 11:38:33 -- target/invalid.sh@28 -- # [[ [ == \- ]] 00:15:04.709 11:38:33 -- target/invalid.sh@31 -- # echo '[XK-+AWa.DkHg1o=|"u`-'\''3#Tq1"p(nZ,xXnwP+=' 00:15:04.709 
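
The long run of string+= steps above is target/invalid.sh's gen_random_s helper assembling a 41-character string from ASCII codes 32 through 127, one printf %x / echo -e pair per character. Condensed, the traced loop amounts to roughly this (a sketch, not the exact helper, which also quotes shell-special characters):

  gen_random_s() {
    local length=$1 ll code c string=
    for ((ll = 0; ll < length; ll++)); do
      code=$((32 + RANDOM % 96))                 # the traced chars array covers codes 32..127
      printf -v c "\\x$(printf '%x' "$code")"    # decimal code -> \xNN escape -> character
      string+=$c
    done
    echo "$string"
  }

Because the script sets RANDOM=0 near the top (visible earlier in this trace), the "random" serial and model numbers are reproducible from run to run.
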
11:38:33 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '[XK-+AWa.DkHg1o=|"u`-'\''3#Tq1"p(nZ,xXnwP+=' nqn.2016-06.io.spdk:cnode24912 00:15:04.709 [2024-07-21 11:38:34.118494] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24912: invalid model number '[XK-+AWa.DkHg1o=|"u`-'3#Tq1"p(nZ,xXnwP+=' 00:15:04.966 11:38:34 -- target/invalid.sh@58 -- # out='request: 00:15:04.966 { 00:15:04.966 "nqn": "nqn.2016-06.io.spdk:cnode24912", 00:15:04.966 "model_number": "[XK-+AWa.DkHg1o=|\"u`\u007f-'\''3#Tq1\"p(nZ,xXnwP+=", 00:15:04.966 "method": "nvmf_create_subsystem", 00:15:04.966 "req_id": 1 00:15:04.966 } 00:15:04.966 Got JSON-RPC error response 00:15:04.966 response: 00:15:04.966 { 00:15:04.966 "code": -32602, 00:15:04.966 "message": "Invalid MN [XK-+AWa.DkHg1o=|\"u`\u007f-'\''3#Tq1\"p(nZ,xXnwP+=" 00:15:04.966 }' 00:15:04.966 11:38:34 -- target/invalid.sh@59 -- # [[ request: 00:15:04.966 { 00:15:04.966 "nqn": "nqn.2016-06.io.spdk:cnode24912", 00:15:04.966 "model_number": "[XK-+AWa.DkHg1o=|\"u`\u007f-'3#Tq1\"p(nZ,xXnwP+=", 00:15:04.966 "method": "nvmf_create_subsystem", 00:15:04.966 "req_id": 1 00:15:04.966 } 00:15:04.966 Got JSON-RPC error response 00:15:04.966 response: 00:15:04.966 { 00:15:04.966 "code": -32602, 00:15:04.966 "message": "Invalid MN [XK-+AWa.DkHg1o=|\"u`\u007f-'3#Tq1\"p(nZ,xXnwP+=" 00:15:04.966 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:04.966 11:38:34 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:15:04.966 [2024-07-21 11:38:34.330300] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x12fce10/0x1301300) succeed. 00:15:04.966 [2024-07-21 11:38:34.340616] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x12fe400/0x1342990) succeed. 
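
Every negative case in this test follows the pattern just traced: issue one rpc.py call with a single deliberately invalid field, capture the JSON-RPC error response, and glob-match the message. Stripped of the harness plumbing, the invalid-model-number case looks roughly like this sketch (the model value here is a simplified stand-in; -d sets the model number, as in the trace):

  out=$(/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem \
        nqn.2016-06.io.spdk:cnode24912 -d $'bad_model\x1f' 2>&1) || true
  # the target must reject the embedded control character with an "Invalid MN" error
  [[ $out == *"Invalid MN"* ]] && echo "rejected as expected"
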
00:15:05.223 11:38:34 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:15:05.480 11:38:34 -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:15:05.480 11:38:34 -- target/invalid.sh@67 -- # echo '192.168.100.8 00:15:05.480 192.168.100.9' 00:15:05.480 11:38:34 -- target/invalid.sh@67 -- # head -n 1 00:15:05.480 11:38:34 -- target/invalid.sh@67 -- # IP=192.168.100.8 00:15:05.480 11:38:34 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:15:05.481 [2024-07-21 11:38:34.815709] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:15:05.481 11:38:34 -- target/invalid.sh@69 -- # out='request: 00:15:05.481 { 00:15:05.481 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:05.481 "listen_address": { 00:15:05.481 "trtype": "rdma", 00:15:05.481 "traddr": "192.168.100.8", 00:15:05.481 "trsvcid": "4421" 00:15:05.481 }, 00:15:05.481 "method": "nvmf_subsystem_remove_listener", 00:15:05.481 "req_id": 1 00:15:05.481 } 00:15:05.481 Got JSON-RPC error response 00:15:05.481 response: 00:15:05.481 { 00:15:05.481 "code": -32602, 00:15:05.481 "message": "Invalid parameters" 00:15:05.481 }' 00:15:05.481 11:38:34 -- target/invalid.sh@70 -- # [[ request: 00:15:05.481 { 00:15:05.481 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:05.481 "listen_address": { 00:15:05.481 "trtype": "rdma", 00:15:05.481 "traddr": "192.168.100.8", 00:15:05.481 "trsvcid": "4421" 00:15:05.481 }, 00:15:05.481 "method": "nvmf_subsystem_remove_listener", 00:15:05.481 "req_id": 1 00:15:05.481 } 00:15:05.481 Got JSON-RPC error response 00:15:05.481 response: 00:15:05.481 { 00:15:05.481 "code": -32602, 00:15:05.481 "message": "Invalid parameters" 00:15:05.481 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:15:05.481 11:38:34 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15601 -i 0 00:15:05.738 [2024-07-21 11:38:34.996297] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15601: invalid cntlid range [0-65519] 00:15:05.738 11:38:35 -- target/invalid.sh@73 -- # out='request: 00:15:05.738 { 00:15:05.738 "nqn": "nqn.2016-06.io.spdk:cnode15601", 00:15:05.738 "min_cntlid": 0, 00:15:05.738 "method": "nvmf_create_subsystem", 00:15:05.738 "req_id": 1 00:15:05.738 } 00:15:05.738 Got JSON-RPC error response 00:15:05.738 response: 00:15:05.738 { 00:15:05.738 "code": -32602, 00:15:05.738 "message": "Invalid cntlid range [0-65519]" 00:15:05.738 }' 00:15:05.738 11:38:35 -- target/invalid.sh@74 -- # [[ request: 00:15:05.738 { 00:15:05.738 "nqn": "nqn.2016-06.io.spdk:cnode15601", 00:15:05.738 "min_cntlid": 0, 00:15:05.738 "method": "nvmf_create_subsystem", 00:15:05.738 "req_id": 1 00:15:05.738 } 00:15:05.738 Got JSON-RPC error response 00:15:05.738 response: 00:15:05.738 { 00:15:05.738 "code": -32602, 00:15:05.738 "message": "Invalid cntlid range [0-65519]" 00:15:05.738 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:05.738 11:38:35 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22691 -i 65520 00:15:05.995 [2024-07-21 11:38:35.168980] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22691: invalid cntlid range [65520-65519] 00:15:05.995 
11:38:35 -- target/invalid.sh@75 -- # out='request: 00:15:05.995 { 00:15:05.995 "nqn": "nqn.2016-06.io.spdk:cnode22691", 00:15:05.995 "min_cntlid": 65520, 00:15:05.995 "method": "nvmf_create_subsystem", 00:15:05.995 "req_id": 1 00:15:05.995 } 00:15:05.995 Got JSON-RPC error response 00:15:05.995 response: 00:15:05.995 { 00:15:05.995 "code": -32602, 00:15:05.995 "message": "Invalid cntlid range [65520-65519]" 00:15:05.995 }' 00:15:05.995 11:38:35 -- target/invalid.sh@76 -- # [[ request: 00:15:05.995 { 00:15:05.995 "nqn": "nqn.2016-06.io.spdk:cnode22691", 00:15:05.995 "min_cntlid": 65520, 00:15:05.995 "method": "nvmf_create_subsystem", 00:15:05.995 "req_id": 1 00:15:05.995 } 00:15:05.995 Got JSON-RPC error response 00:15:05.995 response: 00:15:05.995 { 00:15:05.995 "code": -32602, 00:15:05.995 "message": "Invalid cntlid range [65520-65519]" 00:15:05.995 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:05.995 11:38:35 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22542 -I 0 00:15:05.995 [2024-07-21 11:38:35.341596] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22542: invalid cntlid range [1-0] 00:15:05.995 11:38:35 -- target/invalid.sh@77 -- # out='request: 00:15:05.995 { 00:15:05.995 "nqn": "nqn.2016-06.io.spdk:cnode22542", 00:15:05.995 "max_cntlid": 0, 00:15:05.995 "method": "nvmf_create_subsystem", 00:15:05.995 "req_id": 1 00:15:05.995 } 00:15:05.995 Got JSON-RPC error response 00:15:05.995 response: 00:15:05.995 { 00:15:05.995 "code": -32602, 00:15:05.995 "message": "Invalid cntlid range [1-0]" 00:15:05.995 }' 00:15:05.995 11:38:35 -- target/invalid.sh@78 -- # [[ request: 00:15:05.995 { 00:15:05.995 "nqn": "nqn.2016-06.io.spdk:cnode22542", 00:15:05.995 "max_cntlid": 0, 00:15:05.995 "method": "nvmf_create_subsystem", 00:15:05.995 "req_id": 1 00:15:05.995 } 00:15:05.995 Got JSON-RPC error response 00:15:05.995 response: 00:15:05.995 { 00:15:05.995 "code": -32602, 00:15:05.995 "message": "Invalid cntlid range [1-0]" 00:15:05.995 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:05.995 11:38:35 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24514 -I 65520 00:15:06.252 [2024-07-21 11:38:35.518258] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24514: invalid cntlid range [1-65520] 00:15:06.252 11:38:35 -- target/invalid.sh@79 -- # out='request: 00:15:06.252 { 00:15:06.252 "nqn": "nqn.2016-06.io.spdk:cnode24514", 00:15:06.252 "max_cntlid": 65520, 00:15:06.252 "method": "nvmf_create_subsystem", 00:15:06.252 "req_id": 1 00:15:06.252 } 00:15:06.252 Got JSON-RPC error response 00:15:06.252 response: 00:15:06.252 { 00:15:06.252 "code": -32602, 00:15:06.252 "message": "Invalid cntlid range [1-65520]" 00:15:06.253 }' 00:15:06.253 11:38:35 -- target/invalid.sh@80 -- # [[ request: 00:15:06.253 { 00:15:06.253 "nqn": "nqn.2016-06.io.spdk:cnode24514", 00:15:06.253 "max_cntlid": 65520, 00:15:06.253 "method": "nvmf_create_subsystem", 00:15:06.253 "req_id": 1 00:15:06.253 } 00:15:06.253 Got JSON-RPC error response 00:15:06.253 response: 00:15:06.253 { 00:15:06.253 "code": -32602, 00:15:06.253 "message": "Invalid cntlid range [1-65520]" 00:15:06.253 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:06.253 11:38:35 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode13152 -i 6 -I 5 00:15:06.554 [2024-07-21 11:38:35.706960] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13152: invalid cntlid range [6-5] 00:15:06.554 11:38:35 -- target/invalid.sh@83 -- # out='request: 00:15:06.554 { 00:15:06.554 "nqn": "nqn.2016-06.io.spdk:cnode13152", 00:15:06.554 "min_cntlid": 6, 00:15:06.554 "max_cntlid": 5, 00:15:06.554 "method": "nvmf_create_subsystem", 00:15:06.554 "req_id": 1 00:15:06.554 } 00:15:06.554 Got JSON-RPC error response 00:15:06.554 response: 00:15:06.554 { 00:15:06.554 "code": -32602, 00:15:06.554 "message": "Invalid cntlid range [6-5]" 00:15:06.554 }' 00:15:06.554 11:38:35 -- target/invalid.sh@84 -- # [[ request: 00:15:06.554 { 00:15:06.554 "nqn": "nqn.2016-06.io.spdk:cnode13152", 00:15:06.554 "min_cntlid": 6, 00:15:06.554 "max_cntlid": 5, 00:15:06.554 "method": "nvmf_create_subsystem", 00:15:06.554 "req_id": 1 00:15:06.554 } 00:15:06.554 Got JSON-RPC error response 00:15:06.554 response: 00:15:06.554 { 00:15:06.554 "code": -32602, 00:15:06.554 "message": "Invalid cntlid range [6-5]" 00:15:06.554 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:06.554 11:38:35 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:15:06.554 11:38:35 -- target/invalid.sh@87 -- # out='request: 00:15:06.554 { 00:15:06.554 "name": "foobar", 00:15:06.554 "method": "nvmf_delete_target", 00:15:06.554 "req_id": 1 00:15:06.554 } 00:15:06.554 Got JSON-RPC error response 00:15:06.554 response: 00:15:06.554 { 00:15:06.554 "code": -32602, 00:15:06.554 "message": "The specified target doesn'\''t exist, cannot delete it." 00:15:06.554 }' 00:15:06.554 11:38:35 -- target/invalid.sh@88 -- # [[ request: 00:15:06.554 { 00:15:06.554 "name": "foobar", 00:15:06.554 "method": "nvmf_delete_target", 00:15:06.554 "req_id": 1 00:15:06.554 } 00:15:06.554 Got JSON-RPC error response 00:15:06.554 response: 00:15:06.554 { 00:15:06.554 "code": -32602, 00:15:06.554 "message": "The specified target doesn't exist, cannot delete it." 
00:15:06.554 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:15:06.554 11:38:35 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:15:06.554 11:38:35 -- target/invalid.sh@91 -- # nvmftestfini 00:15:06.554 11:38:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:06.554 11:38:35 -- nvmf/common.sh@116 -- # sync 00:15:06.554 11:38:35 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:15:06.554 11:38:35 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:15:06.554 11:38:35 -- nvmf/common.sh@119 -- # set +e 00:15:06.554 11:38:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:06.554 11:38:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:15:06.554 rmmod nvme_rdma 00:15:06.554 rmmod nvme_fabrics 00:15:06.554 11:38:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:06.554 11:38:35 -- nvmf/common.sh@123 -- # set -e 00:15:06.554 11:38:35 -- nvmf/common.sh@124 -- # return 0 00:15:06.554 11:38:35 -- nvmf/common.sh@477 -- # '[' -n 2296898 ']' 00:15:06.554 11:38:35 -- nvmf/common.sh@478 -- # killprocess 2296898 00:15:06.554 11:38:35 -- common/autotest_common.sh@926 -- # '[' -z 2296898 ']' 00:15:06.554 11:38:35 -- common/autotest_common.sh@930 -- # kill -0 2296898 00:15:06.554 11:38:35 -- common/autotest_common.sh@931 -- # uname 00:15:06.554 11:38:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:06.554 11:38:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2296898 00:15:06.554 11:38:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:06.554 11:38:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:06.554 11:38:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2296898' 00:15:06.554 killing process with pid 2296898 00:15:06.554 11:38:35 -- common/autotest_common.sh@945 -- # kill 2296898 00:15:06.554 11:38:35 -- common/autotest_common.sh@950 -- # wait 2296898 00:15:06.812 11:38:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:06.812 11:38:36 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:15:06.812 00:15:06.812 real 0m12.507s 00:15:06.812 user 0m20.858s 00:15:06.812 sys 0m7.324s 00:15:06.812 11:38:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:06.812 11:38:36 -- common/autotest_common.sh@10 -- # set +x 00:15:06.812 ************************************ 00:15:06.812 END TEST nvmf_invalid 00:15:06.812 ************************************ 00:15:07.069 11:38:36 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:15:07.069 11:38:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:07.069 11:38:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:07.069 11:38:36 -- common/autotest_common.sh@10 -- # set +x 00:15:07.069 ************************************ 00:15:07.069 START TEST nvmf_abort 00:15:07.069 ************************************ 00:15:07.069 11:38:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:15:07.069 * Looking for test storage... 
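The nvmf_invalid run above is a set of negative tests against SPDK's JSON-RPC parameter validation: controller IDs must lie in 1-65519 (0xFFEF), so a min_cntlid of 0, a max_cntlid of 65520, and the inverted range [6-5] are each rejected with code -32602; removing a listener from the bare "cnode" subsystem fails with rc -2; and deleting the nonexistent target "foobar" returns a descriptive error. The script never parses the JSON, it only pattern-matches the captured response text. A minimal standalone sketch of the same style of check, assuming a running nvmf_tgt and the stock scripts/rpc.py from the SPDK tree (the NQN here is arbitrary):

    # Expect nvmf_create_subsystem to reject an out-of-range min_cntlid (0)
    # with a -32602 "Invalid cntlid range" JSON-RPC error.
    out=$(scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -i 0 2>&1) || true
    if [[ $out == *'Invalid cntlid range'* ]]; then
        echo 'rejected as expected'
    fi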
00:15:07.069 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:07.069 11:38:36 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:07.069 11:38:36 -- nvmf/common.sh@7 -- # uname -s 00:15:07.069 11:38:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:07.069 11:38:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:07.069 11:38:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:07.069 11:38:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:07.069 11:38:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:07.069 11:38:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:07.069 11:38:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:07.069 11:38:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:07.069 11:38:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:07.069 11:38:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:07.069 11:38:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:07.069 11:38:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:15:07.069 11:38:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:07.069 11:38:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:07.069 11:38:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:07.069 11:38:36 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:07.069 11:38:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:07.069 11:38:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:07.069 11:38:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:07.069 11:38:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.070 11:38:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.070 11:38:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.070 11:38:36 -- paths/export.sh@5 -- # export PATH 00:15:07.070 11:38:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.070 11:38:36 -- nvmf/common.sh@46 -- # : 0 00:15:07.070 11:38:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:07.070 11:38:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:07.070 11:38:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:07.070 11:38:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:07.070 11:38:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:07.070 11:38:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:07.070 11:38:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:07.070 11:38:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:07.070 11:38:36 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:07.070 11:38:36 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:15:07.070 11:38:36 -- target/abort.sh@14 -- # nvmftestinit 00:15:07.070 11:38:36 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:15:07.070 11:38:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:07.070 11:38:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:07.070 11:38:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:07.070 11:38:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:07.070 11:38:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.070 11:38:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:07.070 11:38:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.070 11:38:36 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:07.070 11:38:36 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:07.070 11:38:36 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:07.070 11:38:36 -- common/autotest_common.sh@10 -- # set +x 00:15:15.193 11:38:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:15.193 11:38:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:15.193 11:38:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:15.193 11:38:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:15.193 11:38:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:15.193 11:38:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:15.193 11:38:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:15.193 11:38:44 -- nvmf/common.sh@294 -- # net_devs=() 00:15:15.193 11:38:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:15.193 11:38:44 -- nvmf/common.sh@295 -- 
# e810=() 00:15:15.193 11:38:44 -- nvmf/common.sh@295 -- # local -ga e810 00:15:15.193 11:38:44 -- nvmf/common.sh@296 -- # x722=() 00:15:15.193 11:38:44 -- nvmf/common.sh@296 -- # local -ga x722 00:15:15.193 11:38:44 -- nvmf/common.sh@297 -- # mlx=() 00:15:15.193 11:38:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:15.193 11:38:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:15.193 11:38:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:15.193 11:38:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:15.193 11:38:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:15.193 11:38:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:15.193 11:38:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:15.193 11:38:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:15.193 11:38:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:15.193 11:38:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:15.193 11:38:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:15.193 11:38:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:15.193 11:38:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:15.193 11:38:44 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:15:15.193 11:38:44 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:15:15.193 11:38:44 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:15:15.193 11:38:44 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:15:15.193 11:38:44 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:15:15.193 11:38:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:15.193 11:38:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:15.193 11:38:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:15.193 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:15.193 11:38:44 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:15.193 11:38:44 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:15.193 11:38:44 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:15.193 11:38:44 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:15.193 11:38:44 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:15.193 11:38:44 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:15.193 11:38:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:15.193 11:38:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:15.193 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:15.193 11:38:44 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:15.193 11:38:44 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:15.193 11:38:44 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:15.193 11:38:44 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:15.193 11:38:44 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:15.193 11:38:44 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:15.193 11:38:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:15.193 11:38:44 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:15:15.193 11:38:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:15.193 11:38:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:15.193 11:38:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
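The block above is nvmf/common.sh classifying RDMA-capable NICs by PCI vendor:device ID: 0x15b3:0x1015 lands in the mlx list, both functions of the dual-port adapter (0000:d9:00.0 and .1) are accepted, and matching an mlx5 RDMA part also switches NVME_CONNECT to 'nvme connect -i 15' (nvme-cli's -i/--nr-io-queues, capping each connect at 15 I/O queues). A rough by-hand equivalent of the discovery, with the device IDs seen in this run:

    # List Mellanox (vendor 0x15b3) PCI functions with numeric IDs...
    lspci -nn -d 15b3:
    # ...and map one function to its netdev the same way common.sh does,
    # via /sys/bus/pci/devices/<bdf>/net/ (here it resolves to mlx_0_0).
    ls /sys/bus/pci/devices/0000:d9:00.0/net/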
00:15:15.193 11:38:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:15.193 11:38:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:15.193 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:15.193 11:38:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:15.193 11:38:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:15.193 11:38:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:15.193 11:38:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:15.193 11:38:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:15.193 11:38:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:15.193 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:15.193 11:38:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:15.193 11:38:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:15.193 11:38:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:15.193 11:38:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:15.193 11:38:44 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:15:15.193 11:38:44 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:15:15.193 11:38:44 -- nvmf/common.sh@408 -- # rdma_device_init 00:15:15.193 11:38:44 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:15:15.193 11:38:44 -- nvmf/common.sh@57 -- # uname 00:15:15.193 11:38:44 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:15:15.193 11:38:44 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:15:15.193 11:38:44 -- nvmf/common.sh@62 -- # modprobe ib_core 00:15:15.193 11:38:44 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:15:15.193 11:38:44 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:15:15.193 11:38:44 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:15:15.193 11:38:44 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:15:15.193 11:38:44 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:15:15.193 11:38:44 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:15:15.193 11:38:44 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:15.193 11:38:44 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:15:15.193 11:38:44 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:15.193 11:38:44 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:15.193 11:38:44 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:15.193 11:38:44 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:15.193 11:38:44 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:15.193 11:38:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:15.193 11:38:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:15.193 11:38:44 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:15.193 11:38:44 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:15.193 11:38:44 -- nvmf/common.sh@104 -- # continue 2 00:15:15.193 11:38:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:15.193 11:38:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:15.193 11:38:44 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:15.193 11:38:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:15.193 11:38:44 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:15.193 11:38:44 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:15.193 11:38:44 -- nvmf/common.sh@104 -- # continue 2 00:15:15.193 11:38:44 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:15:15.193 11:38:44 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:15:15.193 11:38:44 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:15.194 11:38:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:15.194 11:38:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:15.194 11:38:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:15.194 11:38:44 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:15:15.194 11:38:44 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:15:15.194 11:38:44 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:15:15.194 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:15.194 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:15.194 altname enp217s0f0np0 00:15:15.194 altname ens818f0np0 00:15:15.194 inet 192.168.100.8/24 scope global mlx_0_0 00:15:15.194 valid_lft forever preferred_lft forever 00:15:15.194 11:38:44 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:15.194 11:38:44 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:15:15.194 11:38:44 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:15.194 11:38:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:15.194 11:38:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:15.194 11:38:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:15.194 11:38:44 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:15:15.194 11:38:44 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:15:15.194 11:38:44 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:15:15.194 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:15.194 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:15.194 altname enp217s0f1np1 00:15:15.194 altname ens818f1np1 00:15:15.194 inet 192.168.100.9/24 scope global mlx_0_1 00:15:15.194 valid_lft forever preferred_lft forever 00:15:15.194 11:38:44 -- nvmf/common.sh@410 -- # return 0 00:15:15.194 11:38:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:15.194 11:38:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:15.194 11:38:44 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:15:15.194 11:38:44 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:15:15.194 11:38:44 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:15:15.194 11:38:44 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:15.194 11:38:44 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:15.194 11:38:44 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:15.194 11:38:44 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:15.194 11:38:44 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:15.194 11:38:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:15.194 11:38:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:15.194 11:38:44 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:15.194 11:38:44 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:15.194 11:38:44 -- nvmf/common.sh@104 -- # continue 2 00:15:15.194 11:38:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:15.194 11:38:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:15.194 11:38:44 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:15.194 11:38:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:15.194 11:38:44 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:15.194 11:38:44 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:15.194 11:38:44 -- 
nvmf/common.sh@104 -- # continue 2 00:15:15.194 11:38:44 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:15.194 11:38:44 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:15:15.194 11:38:44 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:15.194 11:38:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:15.194 11:38:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:15.194 11:38:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:15.194 11:38:44 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:15.194 11:38:44 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:15:15.194 11:38:44 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:15.194 11:38:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:15.194 11:38:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:15.194 11:38:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:15.194 11:38:44 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:15:15.194 192.168.100.9' 00:15:15.194 11:38:44 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:15:15.194 192.168.100.9' 00:15:15.194 11:38:44 -- nvmf/common.sh@445 -- # head -n 1 00:15:15.194 11:38:44 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:15.194 11:38:44 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:15.194 192.168.100.9' 00:15:15.194 11:38:44 -- nvmf/common.sh@446 -- # head -n 1 00:15:15.194 11:38:44 -- nvmf/common.sh@446 -- # tail -n +2 00:15:15.194 11:38:44 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:15.194 11:38:44 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:15:15.194 11:38:44 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:15.194 11:38:44 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:15:15.194 11:38:44 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:15:15.194 11:38:44 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:15:15.194 11:38:44 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:15:15.194 11:38:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:15.194 11:38:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:15.194 11:38:44 -- common/autotest_common.sh@10 -- # set +x 00:15:15.194 11:38:44 -- nvmf/common.sh@469 -- # nvmfpid=2301804 00:15:15.194 11:38:44 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:15.194 11:38:44 -- nvmf/common.sh@470 -- # waitforlisten 2301804 00:15:15.194 11:38:44 -- common/autotest_common.sh@819 -- # '[' -z 2301804 ']' 00:15:15.194 11:38:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.194 11:38:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:15.194 11:38:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.194 11:38:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:15.194 11:38:44 -- common/autotest_common.sh@10 -- # set +x 00:15:15.194 [2024-07-21 11:38:44.529483] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
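With both ports classified, allocate_nic_ips has put 192.168.100.8/24 and 192.168.100.9/24 on mlx_0_0 and mlx_0_1, the head/tail pipeline above splits RDMA_IP_LIST into NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP, nvme-rdma is loaded, and nvmf_tgt starts as pid 2301804. A quick manual verification of the same state, using the interface names and addresses from this run:

    # Show RDMA devices with their backing netdevs, then the test addresses.
    rdma link show
    ip -o -4 addr show mlx_0_0    # expect 192.168.100.8/24
    ip -o -4 addr show mlx_0_1    # expect 192.168.100.9/24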
00:15:15.194 [2024-07-21 11:38:44.529541] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:15.194 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.452 [2024-07-21 11:38:44.614827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:15.452 [2024-07-21 11:38:44.651741] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:15.452 [2024-07-21 11:38:44.651855] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:15.452 [2024-07-21 11:38:44.651864] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:15.452 [2024-07-21 11:38:44.651873] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:15.452 [2024-07-21 11:38:44.651979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:15.452 [2024-07-21 11:38:44.652009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:15.452 [2024-07-21 11:38:44.652010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:16.017 11:38:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:16.017 11:38:45 -- common/autotest_common.sh@852 -- # return 0 00:15:16.017 11:38:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:16.017 11:38:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:16.017 11:38:45 -- common/autotest_common.sh@10 -- # set +x 00:15:16.017 11:38:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:16.017 11:38:45 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:15:16.017 11:38:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:16.017 11:38:45 -- common/autotest_common.sh@10 -- # set +x 00:15:16.017 [2024-07-21 11:38:45.409254] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13c8cd0/0x13cd1c0) succeed. 00:15:16.017 [2024-07-21 11:38:45.419343] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x13ca220/0x140e850) succeed. 
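The target comes up with core mask 0xE, and the reactor lines above confirm the decode: 0xE is binary 1110, i.e. cores 1, 2 and 3, leaving core 0 free for everything else. (The RDMA_REQ_RDY_TO_COMPL_PEND message is only a trace-name-length complaint and, as the rest of the run shows, does not affect the test.) The mask arithmetic, spelled out:

    # Core mask for reactors on cores 1-3 only:
    printf '0x%x\n' $(( (1 << 1) | (1 << 2) | (1 << 3) ))   # -> 0xe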
00:15:16.274 11:38:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:16.275 11:38:45 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:15:16.275 11:38:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:16.275 11:38:45 -- common/autotest_common.sh@10 -- # set +x 00:15:16.275 Malloc0 00:15:16.275 11:38:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:16.275 11:38:45 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:16.275 11:38:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:16.275 11:38:45 -- common/autotest_common.sh@10 -- # set +x 00:15:16.275 Delay0 00:15:16.275 11:38:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:16.275 11:38:45 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:16.275 11:38:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:16.275 11:38:45 -- common/autotest_common.sh@10 -- # set +x 00:15:16.275 11:38:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:16.275 11:38:45 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:15:16.275 11:38:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:16.275 11:38:45 -- common/autotest_common.sh@10 -- # set +x 00:15:16.275 11:38:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:16.275 11:38:45 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:15:16.275 11:38:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:16.275 11:38:45 -- common/autotest_common.sh@10 -- # set +x 00:15:16.275 [2024-07-21 11:38:45.572878] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:16.275 11:38:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:16.275 11:38:45 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:15:16.275 11:38:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:16.275 11:38:45 -- common/autotest_common.sh@10 -- # set +x 00:15:16.275 11:38:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:16.275 11:38:45 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:15:16.275 EAL: No free 2048 kB hugepages reported on node 1 00:15:16.275 [2024-07-21 11:38:45.665881] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:18.798 Initializing NVMe Controllers 00:15:18.798 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:15:18.798 controller IO queue size 128 less than required 00:15:18.798 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:15:18.798 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:15:18.798 Initialization complete. Launching workers. 
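The abort test plumbing above: an RDMA transport with 1024 shared buffers, a 64 MiB / 4 KiB-block Malloc0, and a Delay0 bdev layered on top. The four delay arguments are 1000000 each, which (taking SPDK's delay bdev latencies to be in microseconds, an assumption here) holds every I/O for roughly a second, so the abort example's queue-depth-128 reads stay in flight long enough to be cancelled. In the summary that follows, reads counted as "failed" are the ones whose aborts landed (success 51663), while "unsuccess 61" are aborts that missed their target. The JSON-RPC sequence, condensed from the capture above:

    # Build the abort target: transport, malloc disk, delay wrapper, subsystem.
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256
    scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    # Drive queue-depth-128 reads for 1 s and abort them from core 0.
    build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128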
00:15:18.798 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 51663 00:15:18.798 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 51724, failed to submit 62 00:15:18.798 success 51663, unsuccess 61, failed 0 00:15:18.798 11:38:47 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:18.798 11:38:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:18.798 11:38:47 -- common/autotest_common.sh@10 -- # set +x 00:15:18.798 11:38:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:18.798 11:38:47 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:15:18.798 11:38:47 -- target/abort.sh@38 -- # nvmftestfini 00:15:18.798 11:38:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:18.798 11:38:47 -- nvmf/common.sh@116 -- # sync 00:15:18.798 11:38:47 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:15:18.798 11:38:47 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:15:18.798 11:38:47 -- nvmf/common.sh@119 -- # set +e 00:15:18.798 11:38:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:18.798 11:38:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:15:18.798 rmmod nvme_rdma 00:15:18.798 rmmod nvme_fabrics 00:15:18.798 11:38:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:18.798 11:38:47 -- nvmf/common.sh@123 -- # set -e 00:15:18.798 11:38:47 -- nvmf/common.sh@124 -- # return 0 00:15:18.798 11:38:47 -- nvmf/common.sh@477 -- # '[' -n 2301804 ']' 00:15:18.798 11:38:47 -- nvmf/common.sh@478 -- # killprocess 2301804 00:15:18.798 11:38:47 -- common/autotest_common.sh@926 -- # '[' -z 2301804 ']' 00:15:18.798 11:38:47 -- common/autotest_common.sh@930 -- # kill -0 2301804 00:15:18.798 11:38:47 -- common/autotest_common.sh@931 -- # uname 00:15:18.798 11:38:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:18.798 11:38:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2301804 00:15:18.798 11:38:47 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:18.798 11:38:47 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:18.798 11:38:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2301804' 00:15:18.798 killing process with pid 2301804 00:15:18.798 11:38:47 -- common/autotest_common.sh@945 -- # kill 2301804 00:15:18.798 11:38:47 -- common/autotest_common.sh@950 -- # wait 2301804 00:15:18.798 11:38:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:18.798 11:38:48 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:15:18.798 00:15:18.798 real 0m11.875s 00:15:18.798 user 0m14.684s 00:15:18.798 sys 0m6.681s 00:15:18.798 11:38:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:18.798 11:38:48 -- common/autotest_common.sh@10 -- # set +x 00:15:18.798 ************************************ 00:15:18.798 END TEST nvmf_abort 00:15:18.798 ************************************ 00:15:18.798 11:38:48 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:15:18.798 11:38:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:18.798 11:38:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:18.798 11:38:48 -- common/autotest_common.sh@10 -- # set +x 00:15:18.798 ************************************ 00:15:18.798 START TEST nvmf_ns_hotplug_stress 00:15:18.798 ************************************ 00:15:18.798 11:38:48 -- common/autotest_common.sh@1104 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:15:19.054 * Looking for test storage... 00:15:19.054 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:19.054 11:38:48 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:19.054 11:38:48 -- nvmf/common.sh@7 -- # uname -s 00:15:19.054 11:38:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:19.054 11:38:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:19.054 11:38:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:19.054 11:38:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:19.054 11:38:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:19.054 11:38:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:19.054 11:38:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:19.054 11:38:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:19.054 11:38:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:19.054 11:38:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:19.054 11:38:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:19.054 11:38:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:15:19.054 11:38:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:19.054 11:38:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:19.054 11:38:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:19.054 11:38:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:19.054 11:38:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:19.054 11:38:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:19.054 11:38:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:19.054 11:38:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.054 11:38:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.055 11:38:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.055 11:38:48 -- paths/export.sh@5 -- # export PATH 00:15:19.055 11:38:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.055 11:38:48 -- nvmf/common.sh@46 -- # : 0 00:15:19.055 11:38:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:19.055 11:38:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:19.055 11:38:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:19.055 11:38:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:19.055 11:38:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:19.055 11:38:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:19.055 11:38:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:19.055 11:38:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:19.055 11:38:48 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:19.055 11:38:48 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:15:19.055 11:38:48 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:15:19.055 11:38:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:19.055 11:38:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:19.055 11:38:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:19.055 11:38:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:19.055 11:38:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.055 11:38:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:19.055 11:38:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:19.055 11:38:48 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:19.055 11:38:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:19.055 11:38:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:19.055 11:38:48 -- common/autotest_common.sh@10 -- # set +x 00:15:27.156 11:38:56 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:27.156 11:38:56 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:27.156 11:38:56 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:27.156 11:38:56 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:27.156 11:38:56 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:27.156 11:38:56 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:27.156 11:38:56 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:27.156 11:38:56 -- nvmf/common.sh@294 -- # net_devs=() 00:15:27.156 11:38:56 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:27.156 11:38:56 -- nvmf/common.sh@295 -- 
# e810=() 00:15:27.156 11:38:56 -- nvmf/common.sh@295 -- # local -ga e810 00:15:27.156 11:38:56 -- nvmf/common.sh@296 -- # x722=() 00:15:27.156 11:38:56 -- nvmf/common.sh@296 -- # local -ga x722 00:15:27.156 11:38:56 -- nvmf/common.sh@297 -- # mlx=() 00:15:27.156 11:38:56 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:27.156 11:38:56 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:27.156 11:38:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:27.156 11:38:56 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:27.156 11:38:56 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:27.156 11:38:56 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:27.156 11:38:56 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:27.156 11:38:56 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:27.156 11:38:56 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:27.156 11:38:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:27.156 11:38:56 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:27.156 11:38:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:27.156 11:38:56 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:27.156 11:38:56 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:15:27.156 11:38:56 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:15:27.156 11:38:56 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:15:27.156 11:38:56 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:15:27.156 11:38:56 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:15:27.156 11:38:56 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:27.156 11:38:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:27.156 11:38:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:27.156 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:27.156 11:38:56 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:27.156 11:38:56 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:27.156 11:38:56 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:27.156 11:38:56 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:27.156 11:38:56 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:27.156 11:38:56 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:27.156 11:38:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:27.156 11:38:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:27.156 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:27.156 11:38:56 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:27.156 11:38:56 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:27.156 11:38:56 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:27.156 11:38:56 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:27.156 11:38:56 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:27.156 11:38:56 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:27.156 11:38:56 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:27.156 11:38:56 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:15:27.156 11:38:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:27.156 11:38:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:27.156 11:38:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
00:15:27.156 11:38:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:27.156 11:38:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:27.156 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:27.156 11:38:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:27.156 11:38:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:27.156 11:38:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:27.156 11:38:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:27.156 11:38:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:27.156 11:38:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:27.156 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:27.156 11:38:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:27.156 11:38:56 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:27.156 11:38:56 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:27.156 11:38:56 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:27.156 11:38:56 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:15:27.156 11:38:56 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:15:27.156 11:38:56 -- nvmf/common.sh@408 -- # rdma_device_init 00:15:27.156 11:38:56 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:15:27.156 11:38:56 -- nvmf/common.sh@57 -- # uname 00:15:27.156 11:38:56 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:15:27.156 11:38:56 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:15:27.156 11:38:56 -- nvmf/common.sh@62 -- # modprobe ib_core 00:15:27.156 11:38:56 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:15:27.156 11:38:56 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:15:27.156 11:38:56 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:15:27.156 11:38:56 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:15:27.156 11:38:56 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:15:27.156 11:38:56 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:15:27.156 11:38:56 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:27.156 11:38:56 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:15:27.156 11:38:56 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:27.156 11:38:56 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:27.156 11:38:56 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:27.156 11:38:56 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:27.156 11:38:56 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:27.156 11:38:56 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:27.156 11:38:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:27.156 11:38:56 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:27.156 11:38:56 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:27.156 11:38:56 -- nvmf/common.sh@104 -- # continue 2 00:15:27.156 11:38:56 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:27.156 11:38:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:27.156 11:38:56 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:27.156 11:38:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:27.156 11:38:56 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:27.156 11:38:56 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:27.156 11:38:56 -- nvmf/common.sh@104 -- # continue 2 00:15:27.156 11:38:56 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:15:27.156 11:38:56 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:15:27.156 11:38:56 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:27.156 11:38:56 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:27.156 11:38:56 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:27.156 11:38:56 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:27.156 11:38:56 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:15:27.156 11:38:56 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:15:27.156 11:38:56 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:15:27.156 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:27.156 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:27.156 altname enp217s0f0np0 00:15:27.156 altname ens818f0np0 00:15:27.156 inet 192.168.100.8/24 scope global mlx_0_0 00:15:27.156 valid_lft forever preferred_lft forever 00:15:27.156 11:38:56 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:27.156 11:38:56 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:15:27.156 11:38:56 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:27.156 11:38:56 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:27.156 11:38:56 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:27.156 11:38:56 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:27.156 11:38:56 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:15:27.156 11:38:56 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:15:27.156 11:38:56 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:15:27.156 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:27.156 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:27.156 altname enp217s0f1np1 00:15:27.156 altname ens818f1np1 00:15:27.156 inet 192.168.100.9/24 scope global mlx_0_1 00:15:27.156 valid_lft forever preferred_lft forever 00:15:27.156 11:38:56 -- nvmf/common.sh@410 -- # return 0 00:15:27.156 11:38:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:27.156 11:38:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:27.156 11:38:56 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:15:27.156 11:38:56 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:15:27.156 11:38:56 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:15:27.156 11:38:56 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:27.156 11:38:56 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:27.156 11:38:56 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:27.156 11:38:56 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:27.156 11:38:56 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:27.156 11:38:56 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:27.156 11:38:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:27.157 11:38:56 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:27.157 11:38:56 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:27.157 11:38:56 -- nvmf/common.sh@104 -- # continue 2 00:15:27.157 11:38:56 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:27.157 11:38:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:27.157 11:38:56 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:27.157 11:38:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:27.157 11:38:56 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:27.157 11:38:56 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:27.157 11:38:56 -- 
nvmf/common.sh@104 -- # continue 2 00:15:27.157 11:38:56 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:27.157 11:38:56 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:15:27.157 11:38:56 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:27.157 11:38:56 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:27.157 11:38:56 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:27.157 11:38:56 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:27.157 11:38:56 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:27.157 11:38:56 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:15:27.157 11:38:56 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:27.157 11:38:56 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:27.157 11:38:56 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:27.157 11:38:56 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:27.157 11:38:56 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:15:27.157 192.168.100.9' 00:15:27.157 11:38:56 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:15:27.157 192.168.100.9' 00:15:27.157 11:38:56 -- nvmf/common.sh@445 -- # head -n 1 00:15:27.157 11:38:56 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:27.157 11:38:56 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:27.157 192.168.100.9' 00:15:27.157 11:38:56 -- nvmf/common.sh@446 -- # tail -n +2 00:15:27.157 11:38:56 -- nvmf/common.sh@446 -- # head -n 1 00:15:27.157 11:38:56 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:27.157 11:38:56 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:15:27.157 11:38:56 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:27.157 11:38:56 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:15:27.157 11:38:56 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:15:27.157 11:38:56 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:15:27.157 11:38:56 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:15:27.157 11:38:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:27.157 11:38:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:27.157 11:38:56 -- common/autotest_common.sh@10 -- # set +x 00:15:27.157 11:38:56 -- nvmf/common.sh@469 -- # nvmfpid=2306543 00:15:27.157 11:38:56 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:27.157 11:38:56 -- nvmf/common.sh@470 -- # waitforlisten 2306543 00:15:27.157 11:38:56 -- common/autotest_common.sh@819 -- # '[' -z 2306543 ']' 00:15:27.157 11:38:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.157 11:38:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:27.157 11:38:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.157 11:38:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:27.157 11:38:56 -- common/autotest_common.sh@10 -- # set +x 00:15:27.157 [2024-07-21 11:38:56.532759] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:15:27.157 [2024-07-21 11:38:56.532809] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:27.157 EAL: No free 2048 kB hugepages reported on node 1 00:15:27.414 [2024-07-21 11:38:56.614934] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:27.414 [2024-07-21 11:38:56.650702] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:27.414 [2024-07-21 11:38:56.650821] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:27.414 [2024-07-21 11:38:56.650831] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:27.414 [2024-07-21 11:38:56.650842] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:27.414 [2024-07-21 11:38:56.650949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:27.414 [2024-07-21 11:38:56.650979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:27.414 [2024-07-21 11:38:56.650980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:27.978 11:38:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:27.978 11:38:57 -- common/autotest_common.sh@852 -- # return 0 00:15:27.978 11:38:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:27.978 11:38:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:27.978 11:38:57 -- common/autotest_common.sh@10 -- # set +x 00:15:27.978 11:38:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:27.978 11:38:57 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:15:27.978 11:38:57 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:28.236 [2024-07-21 11:38:57.552933] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xbedcd0/0xbf21c0) succeed. 00:15:28.236 [2024-07-21 11:38:57.562870] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xbef220/0xc33850) succeed. 
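Same bring-up for the hotplug test: nvmf_tgt restarts as pid 2306543 on mask 0xE, and the "No free 2048 kB hugepages reported on node 1" line is informational — DPDK found no 2 MiB pages reserved on that NUMA node, but the successful IB device creation right after shows the allocations went through. To see what each node actually has reserved (standard sysfs/procfs paths):

    # Per-NUMA-node 2 MiB hugepage reservations, then the global summary.
    grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
    grep Huge /proc/meminfo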
00:15:28.494 11:38:57 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:28.494 11:38:57 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:28.750 [2024-07-21 11:38:58.015674] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:28.750 11:38:58 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:15:29.008 11:38:58 -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:15:29.008 Malloc0 00:15:29.008 11:38:58 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:29.265 Delay0 00:15:29.265 11:38:58 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:29.523 11:38:58 -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:15:29.523 NULL1 00:15:29.523 11:38:58 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:29.780 11:38:59 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2306926 00:15:29.780 11:38:59 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:15:29.780 11:38:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2306926 00:15:29.780 11:38:59 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:29.780 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.153 Read completed with error (sct=0, sc=11) 00:15:31.153 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:31.153 11:39:00 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:31.153 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:31.153 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:31.153 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:31.153 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:31.153 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:31.153 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:31.153 11:39:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:15:31.153 11:39:00 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:15:31.153 true 00:15:31.409 11:39:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2306926 00:15:31.409 11:39:00 -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:32.338 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:32.338 11:39:01 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:32.338 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:32.338 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:32.338 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:32.338 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:32.338 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:32.338 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:32.338 11:39:01 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:15:32.338 11:39:01 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:15:32.338 true 00:15:32.593 11:39:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2306926 00:15:32.593 11:39:01 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:33.539 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:33.539 11:39:02 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:33.539 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:33.539 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:33.539 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:33.539 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:33.539 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:33.539 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:33.539 11:39:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:15:33.539 11:39:02 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:15:33.539 true 00:15:33.539 11:39:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2306926 00:15:33.539 11:39:02 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:34.473 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:34.473 11:39:03 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:34.473 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:34.473 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:34.473 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:34.473 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:34.741 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:34.741 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:34.741 11:39:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:15:34.741 11:39:03 -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:15:34.741 true 00:15:35.014 11:39:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2306926 00:15:35.014 11:39:04 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:35.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:35.578 11:39:04 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:35.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:35.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:35.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:35.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:35.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:35.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:35.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:35.835 11:39:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:15:35.835 11:39:05 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:15:36.092 true 00:15:36.092 11:39:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2306926 00:15:36.092 11:39:05 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:37.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:37.024 11:39:06 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:37.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:37.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:37.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:37.024 11:39:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:15:37.024 11:39:06 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:15:37.281 true 00:15:37.281 11:39:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2306926 00:15:37.281 11:39:06 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:38.213 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:38.213 11:39:07 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:38.213 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:38.213 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:38.213 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:38.213 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:38.213 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:38.213 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:38.213 11:39:07 -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1007 00:15:38.213 11:39:07 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:15:38.470 true 00:15:38.470 11:39:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2306926 00:15:38.470 11:39:07 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:39.402 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:39.402 11:39:08 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:39.402 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:39.402 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:39.402 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:39.402 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:39.402 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:39.402 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:39.402 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:39.402 11:39:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:15:39.402 11:39:08 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:15:39.659 true 00:15:39.659 11:39:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2306926 00:15:39.659 11:39:08 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:40.612 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:40.612 11:39:09 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:40.612 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:40.612 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:40.612 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:40.612 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:40.612 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:40.612 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:40.612 11:39:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:15:40.612 11:39:09 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:15:40.870 true 00:15:40.870 11:39:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2306926 00:15:40.870 11:39:10 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:41.801 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.801 11:39:10 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:41.801 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.801 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.801 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:15:41.801 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.801 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.801 11:39:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:15:41.801 11:39:11 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:15:42.059 true 00:15:42.059 11:39:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2306926 00:15:42.059 11:39:11 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:42.990 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.990 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.990 11:39:12 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:42.990 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.990 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.990 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.990 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.990 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.990 11:39:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:15:42.990 11:39:12 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:15:43.249 true 00:15:43.249 11:39:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2306926 00:15:43.249 11:39:12 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:44.179 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:44.179 11:39:13 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:44.179 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:44.179 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:44.179 11:39:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:15:44.179 11:39:13 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:15:44.435 true 00:15:44.435 11:39:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2306926 00:15:44.435 11:39:13 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:45.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.365 11:39:14 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:45.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:15:45.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.365 11:39:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:15:45.365 11:39:14 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:15:45.622 true 00:15:45.622 11:39:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2306926 00:15:45.622 11:39:14 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:46.554 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:46.554 11:39:15 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:46.554 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:46.554 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:46.554 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:46.554 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:46.554 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:46.554 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:46.554 11:39:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:15:46.554 11:39:15 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:15:46.810 true 00:15:46.810 11:39:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2306926 00:15:46.810 11:39:16 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:47.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:47.739 11:39:16 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:47.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:47.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:47.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:47.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:47.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:47.739 11:39:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:15:47.739 11:39:17 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:15:47.996 true 00:15:47.996 11:39:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2306926 00:15:47.996 11:39:17 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:48.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.940 11:39:18 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:48.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.940 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.940 11:39:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:15:48.940 11:39:18 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:15:49.212 true 00:15:49.212 11:39:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2306926 00:15:49.212 11:39:18 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:50.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.144 11:39:19 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:50.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.144 11:39:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:15:50.144 11:39:19 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:15:50.144 true 00:15:50.400 11:39:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2306926 00:15:50.400 11:39:19 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:50.966 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.223 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.223 11:39:20 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:51.223 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.223 11:39:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:15:51.223 11:39:20 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:15:51.479 true 00:15:51.479 11:39:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2306926 00:15:51.479 11:39:20 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:52.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.412 11:39:21 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:52.412 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.412 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.412 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.412 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.412 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.412 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.412 11:39:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:15:52.412 11:39:21 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:15:52.669 true 00:15:52.669 11:39:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2306926 00:15:52.669 11:39:21 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:53.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:53.601 11:39:22 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:53.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:53.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:53.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:53.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:53.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:53.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:53.601 11:39:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:15:53.601 11:39:22 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:15:53.858 true 00:15:53.858 11:39:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2306926 00:15:53.858 11:39:23 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:54.786 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.786 11:39:23 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:54.786 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.786 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.786 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.786 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.786 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.786 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.786 11:39:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:15:54.786 11:39:24 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:15:55.044 true 00:15:55.044 11:39:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2306926 00:15:55.044 11:39:24 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:55.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:55.974 11:39:25 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:55.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:55.974 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:15:55.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:55.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:55.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:55.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:55.974 11:39:25 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:15:55.974 11:39:25 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:15:56.230 true 00:15:56.230 11:39:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2306926 00:15:56.230 11:39:25 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:57.160 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.160 11:39:26 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:57.160 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.160 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.160 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.160 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.160 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.160 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.160 11:39:26 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:15:57.160 11:39:26 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:15:57.417 true 00:15:57.417 11:39:26 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2306926 00:15:57.417 11:39:26 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:58.349 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:58.349 11:39:27 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:58.349 11:39:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:15:58.349 11:39:27 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:15:58.606 true 00:15:58.606 11:39:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2306926 00:15:58.606 11:39:27 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:58.862 11:39:28 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:58.862 11:39:28 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:15:58.862 11:39:28 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:15:59.120 true 00:15:59.120 11:39:28 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2306926 00:15:59.120 11:39:28 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:16:00.492 11:39:29 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:00.492 11:39:29 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:16:00.492 11:39:29 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:16:00.492 true 00:16:00.492 11:39:29 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2306926 00:16:00.492 11:39:29 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:00.750 11:39:30 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:01.008 11:39:30 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:16:01.008 11:39:30 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:16:01.008 true 00:16:01.265 11:39:30 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2306926 00:16:01.265 11:39:30 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:01.265 11:39:30 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:01.523 11:39:30 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:16:01.523 11:39:30 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:16:01.781 true 00:16:01.781 11:39:30 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2306926 00:16:01.781 11:39:30 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:01.781 11:39:31 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:02.038 11:39:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:16:02.038 11:39:31 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:16:02.296 true 00:16:02.296 11:39:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2306926 00:16:02.296 11:39:31 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:02.296 Initializing NVMe Controllers 00:16:02.296 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:16:02.296 Controller IO queue size 128, less than required. 00:16:02.296 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:02.296 Controller IO queue size 128, less than required. 00:16:02.296 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:02.296 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:02.296 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:02.296 Initialization complete. Launching workers. 
00:16:02.296 ======================================================== 00:16:02.296 Latency(us) 00:16:02.296 Device Information : IOPS MiB/s Average min max 00:16:02.296 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5156.07 2.52 22078.72 792.59 1125422.68 00:16:02.296 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 35639.47 17.40 3591.39 2169.31 283202.83 00:16:02.296 ======================================================== 00:16:02.296 Total : 40795.53 19.92 5927.97 792.59 1125422.68 00:16:02.296 00:16:02.296 11:39:31 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:02.554 11:39:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:16:02.554 11:39:31 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:16:02.813 true 00:16:02.813 11:39:32 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2306926 00:16:02.813 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2306926) - No such process 00:16:02.813 11:39:32 -- target/ns_hotplug_stress.sh@53 -- # wait 2306926 00:16:02.813 11:39:32 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:02.813 11:39:32 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:03.115 11:39:32 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:16:03.115 11:39:32 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:16:03.115 11:39:32 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:16:03.115 11:39:32 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:03.115 11:39:32 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:16:03.115 null0 00:16:03.387 11:39:32 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:03.387 11:39:32 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:03.387 11:39:32 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:16:03.388 null1 00:16:03.388 11:39:32 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:03.388 11:39:32 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:03.388 11:39:32 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:16:03.644 null2 00:16:03.644 11:39:32 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:03.644 11:39:32 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:03.644 11:39:32 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:16:03.644 null3 00:16:03.644 11:39:33 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:03.644 11:39:33 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:03.644 11:39:33 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:16:03.901 null4 00:16:03.901 11:39:33 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:03.901 11:39:33 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 
00:16:03.901 11:39:33 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:16:04.159 null5 00:16:04.159 11:39:33 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:04.159 11:39:33 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:04.159 11:39:33 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:16:04.159 null6 00:16:04.159 11:39:33 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:04.159 11:39:33 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:04.159 11:39:33 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:16:04.416 null7 00:16:04.416 11:39:33 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:04.416 11:39:33 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@66 -- # wait 2313570 2313572 2313575 2313578 2313581 2313584 2313587 2313589 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:04.417 11:39:33 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:04.674 11:39:33 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:04.674 11:39:33 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:04.674 11:39:33 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:04.674 11:39:33 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:04.674 11:39:33 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:04.674 11:39:33 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:04.674 11:39:33 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:04.674 11:39:33 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:04.932 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:04.932 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:04.932 11:39:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:04.932 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:04.932 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:04.932 11:39:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:04.932 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:04.932 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:04.932 11:39:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:04.932 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:16:04.932 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:04.932 11:39:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:04.932 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:04.932 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:04.932 11:39:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:04.932 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:04.932 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:04.932 11:39:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:04.932 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:04.932 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:04.932 11:39:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:04.932 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:04.932 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:04.932 11:39:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:04.932 11:39:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:04.932 11:39:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:04.932 11:39:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:04.932 11:39:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:04.932 11:39:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:04.932 11:39:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:04.932 11:39:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:04.932 11:39:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:05.189 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.189 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.189 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.189 11:39:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:05.189 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.189 11:39:34 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:05.189 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.190 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.190 11:39:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:05.190 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.190 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.190 11:39:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:05.190 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.190 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.190 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.190 11:39:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:05.190 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.190 11:39:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:05.190 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.190 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.190 11:39:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:05.190 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.190 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.190 11:39:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:05.448 11:39:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:05.448 11:39:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:05.448 11:39:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:05.448 11:39:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:05.448 11:39:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:05.448 11:39:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:05.448 11:39:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:05.448 11:39:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:05.448 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:16:05.448 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.448 11:39:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:05.448 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.448 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.448 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.448 11:39:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:05.448 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.448 11:39:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:05.448 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.448 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.448 11:39:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:05.448 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.448 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.448 11:39:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:05.810 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.810 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.810 11:39:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:05.810 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.810 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.810 11:39:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:05.810 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.810 11:39:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.810 11:39:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:05.810 11:39:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:05.810 11:39:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:05.810 11:39:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:05.810 11:39:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:05.810 11:39:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:05.810 11:39:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:05.810 11:39:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:05.810 11:39:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:05.810 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.810 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.810 11:39:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:05.810 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.810 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.810 11:39:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:05.810 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.810 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.810 11:39:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:05.810 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.810 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.810 11:39:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:06.066 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.066 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.066 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.066 11:39:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:06.066 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.066 11:39:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:06.066 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.066 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.066 11:39:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:06.066 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.066 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.066 11:39:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:06.066 11:39:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:06.066 11:39:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:06.066 11:39:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 
00:16:06.066 11:39:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:06.066 11:39:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:06.066 11:39:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:06.066 11:39:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:06.066 11:39:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:06.323 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.323 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.323 11:39:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:06.323 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.323 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.323 11:39:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:06.323 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.323 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.323 11:39:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:06.323 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.323 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.323 11:39:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:06.323 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.323 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.323 11:39:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:06.323 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.323 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.323 11:39:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:06.323 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.323 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.323 11:39:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:06.323 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.323 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.323 11:39:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:06.579 11:39:35 -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:06.579 11:39:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:06.579 11:39:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:06.579 11:39:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:06.580 11:39:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:06.580 11:39:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:06.580 11:39:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:06.580 11:39:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:06.580 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.580 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.580 11:39:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:06.580 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.580 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.580 11:39:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:06.580 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.580 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.580 11:39:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:06.580 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.580 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.580 11:39:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:06.580 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.580 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.580 11:39:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:06.580 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.580 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.580 11:39:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:06.580 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.580 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.580 11:39:35 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:06.580 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.580 11:39:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.580 11:39:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:06.837 11:39:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:06.837 11:39:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:06.837 11:39:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:06.837 11:39:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:06.837 11:39:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:06.837 11:39:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:06.837 11:39:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:06.837 11:39:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:07.093 11:39:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.093 11:39:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.094 11:39:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:07.094 11:39:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.094 11:39:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.094 11:39:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:07.094 11:39:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.094 11:39:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.094 11:39:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:07.094 11:39:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.094 11:39:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.094 11:39:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:07.094 11:39:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.094 11:39:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.094 11:39:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:07.094 11:39:36 -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:16:07.094 11:39:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.094 11:39:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:07.094 11:39:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.094 11:39:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.094 11:39:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:07.094 11:39:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.094 11:39:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.094 11:39:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:07.094 11:39:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:07.094 11:39:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:07.094 11:39:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:07.094 11:39:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:07.094 11:39:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:07.094 11:39:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:07.351 11:39:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:07.351 11:39:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:07.351 11:39:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.351 11:39:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.351 11:39:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:07.351 11:39:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.351 11:39:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.351 11:39:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:07.351 11:39:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.351 11:39:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.351 11:39:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:07.351 11:39:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.351 11:39:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.351 11:39:36 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:07.351 11:39:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.351 11:39:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.351 11:39:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:07.351 11:39:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.351 11:39:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.351 11:39:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:07.351 11:39:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.351 11:39:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.351 11:39:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:07.351 11:39:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.351 11:39:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.351 11:39:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:07.609 11:39:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:07.609 11:39:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:07.609 11:39:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:07.609 11:39:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:07.609 11:39:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:07.609 11:39:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:07.609 11:39:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:07.609 11:39:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:07.866 11:39:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.866 11:39:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.866 11:39:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:07.866 11:39:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.866 11:39:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.866 11:39:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:07.866 11:39:37 -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:16:07.866 11:39:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.866 11:39:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.866 11:39:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.866 11:39:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:07.866 11:39:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:07.866 11:39:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.866 11:39:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.866 11:39:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.866 11:39:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:07.866 11:39:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.866 11:39:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:07.866 11:39:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.866 11:39:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.866 11:39:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.866 11:39:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:07.866 11:39:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.866 11:39:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:07.866 11:39:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:07.866 11:39:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:07.866 11:39:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:07.866 11:39:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:07.866 11:39:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:07.866 11:39:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:07.866 11:39:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:07.866 11:39:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:08.124 11:39:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.124 11:39:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.124 11:39:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.124 11:39:37 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.124 11:39:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.124 11:39:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.124 11:39:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.124 11:39:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.124 11:39:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.124 11:39:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.124 11:39:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.124 11:39:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.124 11:39:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.124 11:39:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.124 11:39:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.124 11:39:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.124 11:39:37 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:08.124 11:39:37 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:16:08.124 11:39:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:08.124 11:39:37 -- nvmf/common.sh@116 -- # sync 00:16:08.124 11:39:37 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:16:08.124 11:39:37 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:16:08.124 11:39:37 -- nvmf/common.sh@119 -- # set +e 00:16:08.124 11:39:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:08.124 11:39:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:16:08.124 rmmod nvme_rdma 00:16:08.124 rmmod nvme_fabrics 00:16:08.124 11:39:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:08.124 11:39:37 -- nvmf/common.sh@123 -- # set -e 00:16:08.124 11:39:37 -- nvmf/common.sh@124 -- # return 0 00:16:08.124 11:39:37 -- nvmf/common.sh@477 -- # '[' -n 2306543 ']' 00:16:08.124 11:39:37 -- nvmf/common.sh@478 -- # killprocess 2306543 00:16:08.124 11:39:37 -- common/autotest_common.sh@926 -- # '[' -z 2306543 ']' 00:16:08.124 11:39:37 -- common/autotest_common.sh@930 -- # kill -0 2306543 00:16:08.124 11:39:37 -- common/autotest_common.sh@931 -- # uname 00:16:08.124 11:39:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:08.124 11:39:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2306543 00:16:08.124 11:39:37 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:08.124 11:39:37 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:08.124 11:39:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2306543' 00:16:08.124 killing process with pid 2306543 00:16:08.124 11:39:37 -- common/autotest_common.sh@945 -- # kill 2306543 00:16:08.124 11:39:37 -- common/autotest_common.sh@950 -- # wait 2306543 00:16:08.382 11:39:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:08.382 11:39:37 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:16:08.382 00:16:08.382 real 0m49.602s 00:16:08.382 user 3m15.060s 00:16:08.382 sys 0m15.472s 00:16:08.382 11:39:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:08.382 11:39:37 -- common/autotest_common.sh@10 -- # set +x 00:16:08.382 ************************************ 00:16:08.382 END TEST nvmf_ns_hotplug_stress 00:16:08.382 ************************************ 00:16:08.639 11:39:37 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:16:08.639 11:39:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 
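The trace above is the body of the ns_hotplug_stress test: script line @16 drives a ten-pass counter, @17 attaches the eight null bdevs (null0..null7) as namespaces 1-8 of nqn.2016-06.io.spdk:cnode1, and @18 detaches them again; after the final pass the trap is cleared and nvmftestfini unloads nvme-rdma/nvme-fabrics (the rmmod lines) and kills target pid 2306543. A minimal bash sketch of one such cycle, reconstructed from the logged commands only; the backgrounding and the wait calls are assumptions inferred from the shuffled ordering of the add/remove lines, not copied from the real script:

rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

for ((i = 0; i < 10; ++i)); do    # the "(( ++i )) / (( i < 10 ))" pairs in the trace
  for n in $(seq 1 8); do
    # namespace N is backed by null bdev null(N-1); issuing the RPCs in
    # parallel would explain why the log shows them completing out of order
    "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" &
  done
  wait
  for n in $(seq 1 8); do
    "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$n" &
  done
  wait
done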
00:16:08.639 11:39:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:08.639 11:39:37 -- common/autotest_common.sh@10 -- # set +x 00:16:08.639 ************************************ 00:16:08.639 START TEST nvmf_connect_stress 00:16:08.639 ************************************ 00:16:08.639 11:39:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:16:08.639 * Looking for test storage... 00:16:08.639 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:08.639 11:39:37 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:08.639 11:39:37 -- nvmf/common.sh@7 -- # uname -s 00:16:08.639 11:39:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:08.639 11:39:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:08.639 11:39:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:08.639 11:39:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:08.640 11:39:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:08.640 11:39:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:08.640 11:39:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:08.640 11:39:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:08.640 11:39:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:08.640 11:39:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:08.640 11:39:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:08.640 11:39:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:08.640 11:39:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:08.640 11:39:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:08.640 11:39:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:08.640 11:39:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:08.640 11:39:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:08.640 11:39:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:08.640 11:39:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:08.640 11:39:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.640 11:39:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.640 11:39:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.640 11:39:37 -- paths/export.sh@5 -- # export PATH 00:16:08.640 11:39:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.640 11:39:37 -- nvmf/common.sh@46 -- # : 0 00:16:08.640 11:39:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:08.640 11:39:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:08.640 11:39:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:08.640 11:39:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:08.640 11:39:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:08.640 11:39:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:08.640 11:39:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:08.640 11:39:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:08.640 11:39:37 -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:08.640 11:39:37 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:16:08.640 11:39:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:08.640 11:39:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:08.640 11:39:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:08.640 11:39:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:08.640 11:39:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.640 11:39:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:08.640 11:39:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.640 11:39:37 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:08.640 11:39:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:08.640 11:39:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:08.640 11:39:37 -- common/autotest_common.sh@10 -- # set +x 00:16:16.743 11:39:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:16.743 11:39:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:16.743 11:39:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:16.743 11:39:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:16.743 11:39:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:16.743 11:39:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:16.743 11:39:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:16.743 11:39:45 -- nvmf/common.sh@294 -- # net_devs=() 00:16:16.743 11:39:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:16.743 11:39:45 -- nvmf/common.sh@295 -- # e810=() 00:16:16.743 11:39:45 -- nvmf/common.sh@295 -- # local -ga e810 00:16:16.743 11:39:45 -- nvmf/common.sh@296 -- # x722=() 
00:16:16.743 11:39:45 -- nvmf/common.sh@296 -- # local -ga x722 00:16:16.743 11:39:45 -- nvmf/common.sh@297 -- # mlx=() 00:16:16.743 11:39:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:16.743 11:39:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:16.743 11:39:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:16.743 11:39:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:16.743 11:39:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:16.743 11:39:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:16.743 11:39:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:16.743 11:39:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:16.743 11:39:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:16.743 11:39:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:16.743 11:39:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:16.743 11:39:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:16.743 11:39:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:16.743 11:39:45 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:16:16.743 11:39:45 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:16:16.743 11:39:45 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:16:16.743 11:39:45 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:16:16.743 11:39:45 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:16:16.743 11:39:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:16.743 11:39:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:16.743 11:39:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:16.743 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:16.743 11:39:45 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:16.743 11:39:45 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:16.743 11:39:45 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:16.743 11:39:45 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:16.743 11:39:45 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:16.743 11:39:45 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:16.743 11:39:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:16.743 11:39:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:16.743 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:16.743 11:39:45 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:16.743 11:39:45 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:16.743 11:39:45 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:16.743 11:39:45 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:16.743 11:39:45 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:16.743 11:39:45 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:16.743 11:39:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:16.743 11:39:45 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:16:16.743 11:39:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:16.743 11:39:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:16.743 11:39:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:16.743 11:39:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:16.743 11:39:45 -- nvmf/common.sh@388 -- # 
echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:16:16.743 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:16.743 11:39:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:16.743 11:39:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:16.743 11:39:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:16.743 11:39:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:16.743 11:39:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:16.743 11:39:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:16:16.743 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:16.743 11:39:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:16.743 11:39:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:16.743 11:39:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:16.743 11:39:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:16.743 11:39:45 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:16:16.743 11:39:45 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:16:16.743 11:39:45 -- nvmf/common.sh@408 -- # rdma_device_init 00:16:16.743 11:39:45 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:16:16.743 11:39:45 -- nvmf/common.sh@57 -- # uname 00:16:16.743 11:39:45 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:16:16.743 11:39:45 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:16:16.743 11:39:45 -- nvmf/common.sh@62 -- # modprobe ib_core 00:16:16.743 11:39:45 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:16:16.743 11:39:45 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:16:16.743 11:39:45 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:16:16.743 11:39:45 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:16:16.743 11:39:45 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:16:16.743 11:39:46 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:16:16.743 11:39:46 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:16.743 11:39:46 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:16:16.743 11:39:46 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:16.743 11:39:46 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:16.743 11:39:46 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:16.743 11:39:46 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:16.743 11:39:46 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:16.743 11:39:46 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:16.743 11:39:46 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:16.743 11:39:46 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:16.743 11:39:46 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:16.743 11:39:46 -- nvmf/common.sh@104 -- # continue 2 00:16:16.743 11:39:46 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:16.743 11:39:46 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:16.743 11:39:46 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:16.743 11:39:46 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:16.743 11:39:46 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:16.743 11:39:46 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:16.743 11:39:46 -- nvmf/common.sh@104 -- # continue 2 00:16:16.743 11:39:46 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:16.743 11:39:46 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:16:16.743 11:39:46 -- nvmf/common.sh@111 -- # 
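The block above is nvmf/common.sh classifying NICs by PCI vendor/device ID (this rig matches two Mellanox ConnectX-4 Lx ports, 0x15b3:0x1015, at 0000:d9:00.0/1), mapping each to its kernel net device, and then loading the RDMA kernel stack. A condensed sketch of the same bring-up; the real code consults a pre-built pci_bus_cache, so the direct /sys walk here is an assumption:

mellanox=0x15b3
declare -a net_devs
for dev in /sys/bus/pci/devices/*; do
  vendor=$(< "$dev/vendor") device=$(< "$dev/device")
  [[ "$vendor:$device" == "$mellanox:0x1015" ]] || continue  # ConnectX-4 Lx, the ID matched twice above
  pci_net_devs=("$dev/net/"*)                                # the @382 glob over sysfs
  net_devs+=("${pci_net_devs[@]##*/}")
  echo "Found net devices under ${dev##*/}: ${pci_net_devs[*]##*/}"
done
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
  modprobe "$mod"                                            # load_ib_rdma_modules, in the order traced
done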
interface=mlx_0_0 00:16:16.743 11:39:46 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:16.743 11:39:46 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:16.743 11:39:46 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:16.743 11:39:46 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:16:16.743 11:39:46 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:16:16.743 11:39:46 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:16:16.743 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:16.743 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:16:16.743 altname enp217s0f0np0 00:16:16.743 altname ens818f0np0 00:16:16.743 inet 192.168.100.8/24 scope global mlx_0_0 00:16:16.743 valid_lft forever preferred_lft forever 00:16:16.743 11:39:46 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:16.743 11:39:46 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:16:16.743 11:39:46 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:16.743 11:39:46 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:16.743 11:39:46 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:16.743 11:39:46 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:16.743 11:39:46 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:16:16.743 11:39:46 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:16:16.743 11:39:46 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:16:16.743 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:16.743 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:16:16.743 altname enp217s0f1np1 00:16:16.743 altname ens818f1np1 00:16:16.743 inet 192.168.100.9/24 scope global mlx_0_1 00:16:16.743 valid_lft forever preferred_lft forever 00:16:16.743 11:39:46 -- nvmf/common.sh@410 -- # return 0 00:16:16.743 11:39:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:16.743 11:39:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:16.743 11:39:46 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:16:16.743 11:39:46 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:16:16.743 11:39:46 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:16:16.743 11:39:46 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:16.743 11:39:46 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:16.743 11:39:46 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:16.743 11:39:46 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:16.743 11:39:46 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:16.743 11:39:46 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:16.743 11:39:46 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:16.743 11:39:46 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:16.743 11:39:46 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:16.743 11:39:46 -- nvmf/common.sh@104 -- # continue 2 00:16:16.743 11:39:46 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:16.743 11:39:46 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:16.743 11:39:46 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:16.743 11:39:46 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:16.743 11:39:46 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:16.743 11:39:46 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:16.743 11:39:46 -- nvmf/common.sh@104 -- # continue 2 00:16:16.743 11:39:46 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:16.743 11:39:46 -- 
nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:16:16.743 11:39:46 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:16.743 11:39:46 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:16.743 11:39:46 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:16.743 11:39:46 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:16.743 11:39:46 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:16.743 11:39:46 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:16:16.743 11:39:46 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:16.743 11:39:46 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:16.743 11:39:46 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:16.743 11:39:46 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:16.743 11:39:46 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:16:16.743 192.168.100.9' 00:16:16.743 11:39:46 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:16:16.743 192.168.100.9' 00:16:16.743 11:39:46 -- nvmf/common.sh@445 -- # head -n 1 00:16:16.743 11:39:46 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:16.743 11:39:46 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:16.743 192.168.100.9' 00:16:16.743 11:39:46 -- nvmf/common.sh@446 -- # tail -n +2 00:16:16.743 11:39:46 -- nvmf/common.sh@446 -- # head -n 1 00:16:17.000 11:39:46 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:17.000 11:39:46 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:16:17.000 11:39:46 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:17.000 11:39:46 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:16:17.000 11:39:46 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:16:17.000 11:39:46 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:16:17.000 11:39:46 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:17.000 11:39:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:17.000 11:39:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:17.000 11:39:46 -- common/autotest_common.sh@10 -- # set +x 00:16:17.000 11:39:46 -- nvmf/common.sh@469 -- # nvmfpid=2318491 00:16:17.000 11:39:46 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:17.000 11:39:46 -- nvmf/common.sh@470 -- # waitforlisten 2318491 00:16:17.000 11:39:46 -- common/autotest_common.sh@819 -- # '[' -z 2318491 ']' 00:16:17.000 11:39:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.000 11:39:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:17.000 11:39:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.000 11:39:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:17.000 11:39:46 -- common/autotest_common.sh@10 -- # set +x 00:16:17.000 [2024-07-21 11:39:46.248808] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
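allocate_nic_ips and get_available_rdma_ips, as traced, harvest one IPv4 address per RDMA interface with an ip/awk/cut pipeline, then split the resulting list into first and second target IPs with head/tail. The helper below is copied from the traced pipeline; hardcoding the two mlx interfaces (instead of the real get_rdma_if_list enumeration) is a simplification:

get_ip_address() {    # nvmf/common.sh@111-112 as traced
  local interface=$1
  ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST=$(for nic in mlx_0_0 mlx_0_1; do get_ip_address "$nic"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9 in this run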
00:16:17.000 [2024-07-21 11:39:46.248858] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.000 EAL: No free 2048 kB hugepages reported on node 1 00:16:17.000 [2024-07-21 11:39:46.333257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:17.000 [2024-07-21 11:39:46.370474] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:17.000 [2024-07-21 11:39:46.370585] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:17.000 [2024-07-21 11:39:46.370595] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:17.000 [2024-07-21 11:39:46.370605] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:17.000 [2024-07-21 11:39:46.370707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:17.000 [2024-07-21 11:39:46.370737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:17.000 [2024-07-21 11:39:46.370739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.930 11:39:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:17.930 11:39:47 -- common/autotest_common.sh@852 -- # return 0 00:16:17.930 11:39:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:17.930 11:39:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:17.930 11:39:47 -- common/autotest_common.sh@10 -- # set +x 00:16:17.930 11:39:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:17.930 11:39:47 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:17.930 11:39:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.930 11:39:47 -- common/autotest_common.sh@10 -- # set +x 00:16:17.930 [2024-07-21 11:39:47.116630] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x11bbcd0/0x11c01c0) succeed. 00:16:17.930 [2024-07-21 11:39:47.126839] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x11bd220/0x1201850) succeed. 
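nvmfappstart, traced above, launches the target (pid 2318491 in this run), waits for its RPC socket, and, once the three reactors are up, creates the RDMA transport, which registers both mlx5 IB devices. A sketch of that launch sequence; waitforlisten is the helper named in the log, and its bare invocation here is a simplification of the real retry logic:

nvmf_tgt=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

"$nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &    # flags exactly as logged: shm id 0, full tracepoints, cores 1-3
nvmfpid=$!
waitforlisten "$nvmfpid"               # blocks until /var/tmp/spdk.sock accepts RPCs
"$rpc_py" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192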
00:16:17.930 11:39:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.930 11:39:47 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:17.930 11:39:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.930 11:39:47 -- common/autotest_common.sh@10 -- # set +x 00:16:17.930 11:39:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.930 11:39:47 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:17.930 11:39:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.930 11:39:47 -- common/autotest_common.sh@10 -- # set +x 00:16:17.930 [2024-07-21 11:39:47.251288] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:17.930 11:39:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.930 11:39:47 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:17.930 11:39:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.930 11:39:47 -- common/autotest_common.sh@10 -- # set +x 00:16:17.930 NULL1 00:16:17.930 11:39:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.930 11:39:47 -- target/connect_stress.sh@21 -- # PERF_PID=2318606 00:16:17.930 11:39:47 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:17.930 11:39:47 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:17.930 11:39:47 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:17.930 11:39:47 -- target/connect_stress.sh@27 -- # seq 1 20 00:16:17.930 11:39:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:17.930 11:39:47 -- target/connect_stress.sh@28 -- # cat 00:16:17.930 11:39:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:17.930 11:39:47 -- target/connect_stress.sh@28 -- # cat 00:16:17.930 11:39:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:17.930 11:39:47 -- target/connect_stress.sh@28 -- # cat 00:16:17.930 11:39:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:17.930 11:39:47 -- target/connect_stress.sh@28 -- # cat 00:16:17.930 11:39:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:17.930 11:39:47 -- target/connect_stress.sh@28 -- # cat 00:16:17.930 EAL: No free 2048 kB hugepages reported on node 1 00:16:17.930 11:39:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:17.930 11:39:47 -- target/connect_stress.sh@28 -- # cat 00:16:17.930 11:39:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:17.930 11:39:47 -- target/connect_stress.sh@28 -- # cat 00:16:17.930 11:39:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:17.930 11:39:47 -- target/connect_stress.sh@28 -- # cat 00:16:17.930 11:39:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:17.930 11:39:47 -- target/connect_stress.sh@28 -- # cat 00:16:17.930 11:39:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:17.930 11:39:47 -- target/connect_stress.sh@28 -- # cat 00:16:17.930 11:39:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:17.930 11:39:47 -- target/connect_stress.sh@28 -- # cat 
00:16:17.930 11:39:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:17.930 11:39:47 -- target/connect_stress.sh@28 -- # cat
[the @27/@28 for-loop/cat trace pair repeats once per remaining stress worker being queued; identical pairs elided]
00:16:18.187 11:39:47 -- target/connect_stress.sh@34 -- # kill -0 2318606
00:16:18.187 11:39:47 -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:18.187 11:39:47 -- common/autotest_common.sh@551 -- # xtrace_disable
00:16:18.187 11:39:47 -- common/autotest_common.sh@10 -- # set +x
[this liveness check of pid 2318606 (kill -0 plus one rpc_cmd round trip) repeats at roughly 0.3 s intervals from 00:16:18.187 (11:39:47) through 00:16:28.323 (11:39:57) while the stress run is in flight; the identical iterations are elided]
00:16:28.323 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:16:28.580 11:39:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:16:28.580 11:39:57 -- target/connect_stress.sh@34 -- # kill -0 2318606
00:16:28.580 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2318606) - No such process
00:16:28.580 11:39:57 -- target/connect_stress.sh@38 -- # wait 2318606
00:16:28.580 11:39:57 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:16:28.580 11:39:57 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:16:28.580 11:39:57 -- target/connect_stress.sh@43 -- # nvmftestfini
00:16:28.580 11:39:57 -- nvmf/common.sh@476 -- # nvmfcleanup
00:16:28.580 11:39:57 -- nvmf/common.sh@116 -- # sync
00:16:28.580 11:39:57 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']'
00:16:28.580 11:39:57 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']'
00:16:28.580 11:39:57 -- nvmf/common.sh@119 -- # set +e
00:16:28.580 11:39:57 -- nvmf/common.sh@120 -- # for i in {1..20}
00:16:28.580 11:39:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma
00:16:28.581 rmmod nvme_rdma
00:16:28.581 rmmod nvme_fabrics
00:16:28.581 11:39:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:16:28.581 11:39:57 -- nvmf/common.sh@123 -- # set -e
00:16:28.581 11:39:57 -- nvmf/common.sh@124 -- # return 0
00:16:28.581 11:39:57 -- nvmf/common.sh@477 -- # '[' -n 2318491 ']'
00:16:28.581 11:39:57 -- nvmf/common.sh@478 -- # killprocess 2318491
00:16:28.581 11:39:57 -- common/autotest_common.sh@926 -- # '[' -z 2318491 ']'
00:16:28.581 11:39:57 -- common/autotest_common.sh@930 -- # kill -0 2318491 00:16:28.581 11:39:57 -- common/autotest_common.sh@931 -- # uname 00:16:28.581 11:39:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:28.581 11:39:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2318491 00:16:28.581 11:39:57 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:28.581 11:39:57 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:28.581 11:39:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2318491' 00:16:28.581 killing process with pid 2318491 00:16:28.581 11:39:57 -- common/autotest_common.sh@945 -- # kill 2318491 00:16:28.581 11:39:57 -- common/autotest_common.sh@950 -- # wait 2318491 00:16:28.838 11:39:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:28.838 11:39:58 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:16:28.838 00:16:28.838 real 0m20.314s 00:16:28.838 user 0m42.806s 00:16:28.838 sys 0m8.917s 00:16:28.838 11:39:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:28.838 11:39:58 -- common/autotest_common.sh@10 -- # set +x 00:16:28.838 ************************************ 00:16:28.838 END TEST nvmf_connect_stress 00:16:28.838 ************************************ 00:16:28.838 11:39:58 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:16:28.838 11:39:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:28.838 11:39:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:28.838 11:39:58 -- common/autotest_common.sh@10 -- # set +x 00:16:28.838 ************************************ 00:16:28.838 START TEST nvmf_fused_ordering 00:16:28.838 ************************************ 00:16:28.838 11:39:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:16:29.096 * Looking for test storage... 
00:16:29.096 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:16:29.096 11:39:58 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:16:29.096 11:39:58 -- nvmf/common.sh@7 -- # uname -s
00:16:29.096 11:39:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:29.096 11:39:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:29.096 11:39:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:29.096 11:39:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:29.096 11:39:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:29.096 11:39:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:29.096 11:39:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:29.096 11:39:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:29.096 11:39:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:29.096 11:39:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:29.096 11:39:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:16:29.096 11:39:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:16:29.096 11:39:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:29.096 11:39:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:29.096 11:39:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:16:29.096 11:39:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:16:29.096 11:39:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:29.096 11:39:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:29.096 11:39:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:29.096 11:39:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same three toolchain prefixes repeated from earlier sourcing; elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
[paths/export.sh@3 through @6 rotate the same prefixes to the front, export PATH, and echo the result; four near-identical PATH dumps elided]
00:16:29.097 11:39:58 -- nvmf/common.sh@46 -- # : 0
00:16:29.097 11:39:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:16:29.097 11:39:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:16:29.097 11:39:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:16:29.097 11:39:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:16:29.097 11:39:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:16:29.097 11:39:58 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:16:29.097 11:39:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:16:29.097 11:39:58 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:16:29.097 11:39:58 -- target/fused_ordering.sh@12 -- # nvmftestinit
00:16:29.097 11:39:58 -- nvmf/common.sh@429 -- # '[' -z rdma ']'
00:16:29.097 11:39:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:16:29.097 11:39:58 -- nvmf/common.sh@436 -- # prepare_net_devs
00:16:29.097 11:39:58 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:16:29.097 11:39:58 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:16:29.097 11:39:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:29.097 11:39:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:29.097 11:39:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:29.097 11:39:58 -- nvmf/common.sh@402 -- # [[ phy != virt ]]
00:16:29.097 11:39:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs
00:16:29.097 11:39:58 -- nvmf/common.sh@284 -- # xtrace_disable
00:16:29.097 11:39:58 -- common/autotest_common.sh@10 -- # set +x
00:16:37.204 11:40:06 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci
00:16:37.204 11:40:06 -- nvmf/common.sh@290 -- # pci_devs=()
00:16:37.204 11:40:06 -- nvmf/common.sh@290 -- # local -a pci_devs
00:16:37.204 11:40:06 -- nvmf/common.sh@291 -- # pci_net_devs=()
00:16:37.204 11:40:06 -- nvmf/common.sh@291 -- # local -a pci_net_devs
00:16:37.204 11:40:06 -- nvmf/common.sh@292 -- # pci_drivers=()
00:16:37.204 11:40:06 -- nvmf/common.sh@292 -- # local -A pci_drivers
00:16:37.204 11:40:06 -- nvmf/common.sh@294 -- # net_devs=()
00:16:37.204 11:40:06 -- nvmf/common.sh@294 -- # local -ga net_devs
00:16:37.204 11:40:06 -- nvmf/common.sh@295 -- # e810=()
00:16:37.204 11:40:06 -- nvmf/common.sh@295 -- # local -ga e810
00:16:37.204 11:40:06 -- nvmf/common.sh@296 -- # x722=()
00:16:37.204 11:40:06 -- nvmf/common.sh@296 -- # local -ga x722 00:16:37.204 11:40:06 -- nvmf/common.sh@297 -- # mlx=() 00:16:37.204 11:40:06 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:37.204 11:40:06 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:37.204 11:40:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:37.204 11:40:06 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:37.204 11:40:06 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:37.204 11:40:06 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:37.204 11:40:06 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:37.204 11:40:06 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:37.204 11:40:06 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:37.204 11:40:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:37.204 11:40:06 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:37.204 11:40:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:37.204 11:40:06 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:37.204 11:40:06 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:16:37.204 11:40:06 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:16:37.204 11:40:06 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:16:37.204 11:40:06 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:16:37.205 11:40:06 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:16:37.205 11:40:06 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:37.205 11:40:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:37.205 11:40:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:37.205 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:37.205 11:40:06 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:37.205 11:40:06 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:37.205 11:40:06 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:37.205 11:40:06 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:37.205 11:40:06 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:37.205 11:40:06 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:37.205 11:40:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:37.205 11:40:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:37.205 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:37.205 11:40:06 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:37.205 11:40:06 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:37.205 11:40:06 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:37.205 11:40:06 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:37.205 11:40:06 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:37.205 11:40:06 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:37.205 11:40:06 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:37.205 11:40:06 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:16:37.205 11:40:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:37.205 11:40:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:37.205 11:40:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:37.205 11:40:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:37.205 11:40:06 -- nvmf/common.sh@388 -- # 
echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:16:37.205 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:37.205 11:40:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:37.205 11:40:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:37.205 11:40:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:37.205 11:40:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:37.205 11:40:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:37.205 11:40:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:16:37.205 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:37.205 11:40:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:37.205 11:40:06 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:37.205 11:40:06 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:37.205 11:40:06 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:37.205 11:40:06 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:16:37.205 11:40:06 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:16:37.205 11:40:06 -- nvmf/common.sh@408 -- # rdma_device_init 00:16:37.205 11:40:06 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:16:37.205 11:40:06 -- nvmf/common.sh@57 -- # uname 00:16:37.205 11:40:06 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:16:37.205 11:40:06 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:16:37.205 11:40:06 -- nvmf/common.sh@62 -- # modprobe ib_core 00:16:37.205 11:40:06 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:16:37.205 11:40:06 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:16:37.205 11:40:06 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:16:37.205 11:40:06 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:16:37.205 11:40:06 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:16:37.205 11:40:06 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:16:37.205 11:40:06 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:37.205 11:40:06 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:16:37.205 11:40:06 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:37.205 11:40:06 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:37.205 11:40:06 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:37.205 11:40:06 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:37.205 11:40:06 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:37.205 11:40:06 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:37.205 11:40:06 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:37.205 11:40:06 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:37.205 11:40:06 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:37.205 11:40:06 -- nvmf/common.sh@104 -- # continue 2 00:16:37.205 11:40:06 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:37.205 11:40:06 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:37.205 11:40:06 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:37.205 11:40:06 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:37.205 11:40:06 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:37.205 11:40:06 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:37.205 11:40:06 -- nvmf/common.sh@104 -- # continue 2 00:16:37.205 11:40:06 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:37.205 11:40:06 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:16:37.205 11:40:06 -- nvmf/common.sh@111 -- # 
interface=mlx_0_0 00:16:37.205 11:40:06 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:37.205 11:40:06 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:37.205 11:40:06 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:37.205 11:40:06 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:16:37.205 11:40:06 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:16:37.205 11:40:06 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:16:37.205 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:37.205 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:16:37.205 altname enp217s0f0np0 00:16:37.205 altname ens818f0np0 00:16:37.205 inet 192.168.100.8/24 scope global mlx_0_0 00:16:37.205 valid_lft forever preferred_lft forever 00:16:37.205 11:40:06 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:37.205 11:40:06 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:16:37.205 11:40:06 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:37.205 11:40:06 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:37.205 11:40:06 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:37.205 11:40:06 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:37.205 11:40:06 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:16:37.205 11:40:06 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:16:37.205 11:40:06 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:16:37.205 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:37.205 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:16:37.205 altname enp217s0f1np1 00:16:37.205 altname ens818f1np1 00:16:37.205 inet 192.168.100.9/24 scope global mlx_0_1 00:16:37.205 valid_lft forever preferred_lft forever 00:16:37.205 11:40:06 -- nvmf/common.sh@410 -- # return 0 00:16:37.205 11:40:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:37.205 11:40:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:37.205 11:40:06 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:16:37.205 11:40:06 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:16:37.205 11:40:06 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:16:37.205 11:40:06 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:37.205 11:40:06 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:37.205 11:40:06 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:37.205 11:40:06 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:37.205 11:40:06 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:37.205 11:40:06 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:37.205 11:40:06 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:37.205 11:40:06 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:37.205 11:40:06 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:37.205 11:40:06 -- nvmf/common.sh@104 -- # continue 2 00:16:37.205 11:40:06 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:37.205 11:40:06 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:37.205 11:40:06 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:37.205 11:40:06 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:37.205 11:40:06 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:37.205 11:40:06 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:37.205 11:40:06 -- nvmf/common.sh@104 -- # continue 2 00:16:37.205 11:40:06 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:37.205 11:40:06 -- 
nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:16:37.205 11:40:06 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:37.205 11:40:06 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:37.205 11:40:06 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:37.205 11:40:06 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:37.205 11:40:06 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:37.205 11:40:06 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:16:37.205 11:40:06 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:37.205 11:40:06 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:37.205 11:40:06 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:37.205 11:40:06 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:37.205 11:40:06 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:16:37.205 192.168.100.9' 00:16:37.205 11:40:06 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:16:37.205 192.168.100.9' 00:16:37.205 11:40:06 -- nvmf/common.sh@445 -- # head -n 1 00:16:37.205 11:40:06 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:37.205 11:40:06 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:37.205 192.168.100.9' 00:16:37.205 11:40:06 -- nvmf/common.sh@446 -- # tail -n +2 00:16:37.205 11:40:06 -- nvmf/common.sh@446 -- # head -n 1 00:16:37.205 11:40:06 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:37.205 11:40:06 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:16:37.205 11:40:06 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:37.205 11:40:06 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:16:37.205 11:40:06 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:16:37.205 11:40:06 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:16:37.205 11:40:06 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:16:37.205 11:40:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:37.205 11:40:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:37.205 11:40:06 -- common/autotest_common.sh@10 -- # set +x 00:16:37.205 11:40:06 -- nvmf/common.sh@469 -- # nvmfpid=2324620 00:16:37.205 11:40:06 -- nvmf/common.sh@470 -- # waitforlisten 2324620 00:16:37.205 11:40:06 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:37.205 11:40:06 -- common/autotest_common.sh@819 -- # '[' -z 2324620 ']' 00:16:37.205 11:40:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.205 11:40:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:37.205 11:40:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:37.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:37.205 11:40:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:37.205 11:40:06 -- common/autotest_common.sh@10 -- # set +x 00:16:37.462 [2024-07-21 11:40:06.663765] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
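[The get_ip_address trace above is a three-stage pipeline: `ip -o -4 addr show <interface>` prints one line per IPv4 address, awk picks the fourth field (the CIDR form, e.g. 192.168.100.8/24), and cut strips the prefix length. A minimal standalone sketch of the same helper, assuming only iproute2's one-line (-o) output format rather than quoting the harness's exact code:

    get_ip_address() {
        local interface=$1
        # field 4 of `ip -o -4 addr show` is the address in CIDR notation
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # prints 192.168.100.8 on this rig
    get_ip_address mlx_0_1   # prints 192.168.100.9

Note that both interfaces report state DOWN in the `ip addr show` output above; the addresses are still assigned, and that is enough for nvmftestinit to assemble RDMA_IP_LIST and pick the first and second target IPs.]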
00:16:37.462 [2024-07-21 11:40:06.663825] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:37.462 EAL: No free 2048 kB hugepages reported on node 1 00:16:37.462 [2024-07-21 11:40:06.751750] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.462 [2024-07-21 11:40:06.789701] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:37.462 [2024-07-21 11:40:06.789808] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:37.462 [2024-07-21 11:40:06.789822] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:37.462 [2024-07-21 11:40:06.789831] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:37.462 [2024-07-21 11:40:06.789857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:38.392 11:40:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:38.392 11:40:07 -- common/autotest_common.sh@852 -- # return 0 00:16:38.392 11:40:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:38.392 11:40:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:38.392 11:40:07 -- common/autotest_common.sh@10 -- # set +x 00:16:38.392 11:40:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:38.392 11:40:07 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:38.392 11:40:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:38.392 11:40:07 -- common/autotest_common.sh@10 -- # set +x 00:16:38.392 [2024-07-21 11:40:07.521616] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x14d3620/0x14d7b10) succeed. 00:16:38.392 [2024-07-21 11:40:07.530618] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x14d4b20/0x15191a0) succeed. 
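[The app_setup_trace notices above give the recipe for inspecting the 0xFFFF tracepoint mask that nvmfappstart enabled with `-e 0xFFFF`; both commands below are taken from those notices themselves, with `-i 0` matching the target's shared-memory instance id:

    spdk_trace -s nvmf -i 0        # snapshot the running target's tracepoints
    cp /dev/shm/nvmf_trace.0 .     # or keep the trace file for offline analysis/debug

The trace_register_description *ERROR* just above (name RDMA_REQ_RDY_TO_COMPL_PEND too long) is a name-length complaint during trace registration; the target still comes up, as the "Reactor started on core 1" notice shows.]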
00:16:38.392 11:40:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:38.392 11:40:07 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:38.392 11:40:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:38.392 11:40:07 -- common/autotest_common.sh@10 -- # set +x 00:16:38.392 11:40:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:38.392 11:40:07 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:38.392 11:40:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:38.392 11:40:07 -- common/autotest_common.sh@10 -- # set +x 00:16:38.392 [2024-07-21 11:40:07.595211] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:38.392 11:40:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:38.392 11:40:07 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:38.392 11:40:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:38.392 11:40:07 -- common/autotest_common.sh@10 -- # set +x 00:16:38.392 NULL1 00:16:38.392 11:40:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:38.392 11:40:07 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:16:38.392 11:40:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:38.392 11:40:07 -- common/autotest_common.sh@10 -- # set +x 00:16:38.392 11:40:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:38.393 11:40:07 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:38.393 11:40:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:38.393 11:40:07 -- common/autotest_common.sh@10 -- # set +x 00:16:38.393 11:40:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:38.393 11:40:07 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:38.393 [2024-07-21 11:40:07.649788] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
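[Unrolled, the rpc_cmd sequence above is the entire target-side setup for this test: an RDMA transport, one subsystem allowing any host (-a) and capped at 10 namespaces (-m 10), a listener on the first discovered IP, and a 1000 MB null bdev attached as namespace 1 (hence the "size: 1GB" the tool reports below). A sketch of the same bring-up as direct scripts/rpc.py calls from the repo root, assuming the harness's rpc_cmd wrapper resolves to the stock rpc.py; all flags are verbatim from the trace:

    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512    # 1000 MB backing size, 512 B blocks
    ./scripts/rpc.py bdev_wait_for_examine
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1]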
00:16:38.393 [2024-07-21 11:40:07.649831] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2324720 ]
00:16:38.393 EAL: No free 2048 kB hugepages reported on node 1
00:16:38.650 Attached to nqn.2016-06.io.spdk:cnode1
00:16:38.650 Namespace ID: 1 size: 1GB
00:16:38.650 fused_ordering(0)
[fused_ordering(1) through fused_ordering(848) continue in unbroken sequence, timestamped between 00:16:38.650 and 00:16:38.910; the repetitive counter lines are elided]
00:16:38.910 fused_ordering(849)
00:16:38.910 fused_ordering(850) 00:16:38.910 fused_ordering(851) 00:16:38.910 fused_ordering(852) 00:16:38.910 fused_ordering(853) 00:16:38.910 fused_ordering(854) 00:16:38.910 fused_ordering(855) 00:16:38.910 fused_ordering(856) 00:16:38.910 fused_ordering(857) 00:16:38.910 fused_ordering(858) 00:16:38.910 fused_ordering(859) 00:16:38.910 fused_ordering(860) 00:16:38.910 fused_ordering(861) 00:16:38.910 fused_ordering(862) 00:16:38.910 fused_ordering(863) 00:16:38.910 fused_ordering(864) 00:16:38.910 fused_ordering(865) 00:16:38.910 fused_ordering(866) 00:16:38.910 fused_ordering(867) 00:16:38.910 fused_ordering(868) 00:16:38.910 fused_ordering(869) 00:16:38.910 fused_ordering(870) 00:16:38.910 fused_ordering(871) 00:16:38.910 fused_ordering(872) 00:16:38.910 fused_ordering(873) 00:16:38.910 fused_ordering(874) 00:16:38.910 fused_ordering(875) 00:16:38.910 fused_ordering(876) 00:16:38.910 fused_ordering(877) 00:16:38.910 fused_ordering(878) 00:16:38.910 fused_ordering(879) 00:16:38.910 fused_ordering(880) 00:16:38.910 fused_ordering(881) 00:16:38.910 fused_ordering(882) 00:16:38.910 fused_ordering(883) 00:16:38.910 fused_ordering(884) 00:16:38.910 fused_ordering(885) 00:16:38.910 fused_ordering(886) 00:16:38.910 fused_ordering(887) 00:16:38.910 fused_ordering(888) 00:16:38.910 fused_ordering(889) 00:16:38.910 fused_ordering(890) 00:16:38.910 fused_ordering(891) 00:16:38.910 fused_ordering(892) 00:16:38.910 fused_ordering(893) 00:16:38.910 fused_ordering(894) 00:16:38.910 fused_ordering(895) 00:16:38.910 fused_ordering(896) 00:16:38.911 fused_ordering(897) 00:16:38.911 fused_ordering(898) 00:16:38.911 fused_ordering(899) 00:16:38.911 fused_ordering(900) 00:16:38.911 fused_ordering(901) 00:16:38.911 fused_ordering(902) 00:16:38.911 fused_ordering(903) 00:16:38.911 fused_ordering(904) 00:16:38.911 fused_ordering(905) 00:16:38.911 fused_ordering(906) 00:16:38.911 fused_ordering(907) 00:16:38.911 fused_ordering(908) 00:16:38.911 fused_ordering(909) 00:16:38.911 fused_ordering(910) 00:16:38.911 fused_ordering(911) 00:16:38.911 fused_ordering(912) 00:16:38.911 fused_ordering(913) 00:16:38.911 fused_ordering(914) 00:16:38.911 fused_ordering(915) 00:16:38.911 fused_ordering(916) 00:16:38.911 fused_ordering(917) 00:16:38.911 fused_ordering(918) 00:16:38.911 fused_ordering(919) 00:16:38.911 fused_ordering(920) 00:16:38.911 fused_ordering(921) 00:16:38.911 fused_ordering(922) 00:16:38.911 fused_ordering(923) 00:16:38.911 fused_ordering(924) 00:16:38.911 fused_ordering(925) 00:16:38.911 fused_ordering(926) 00:16:38.911 fused_ordering(927) 00:16:38.911 fused_ordering(928) 00:16:38.911 fused_ordering(929) 00:16:38.911 fused_ordering(930) 00:16:38.911 fused_ordering(931) 00:16:38.911 fused_ordering(932) 00:16:38.911 fused_ordering(933) 00:16:38.911 fused_ordering(934) 00:16:38.911 fused_ordering(935) 00:16:38.911 fused_ordering(936) 00:16:38.911 fused_ordering(937) 00:16:38.911 fused_ordering(938) 00:16:38.911 fused_ordering(939) 00:16:38.911 fused_ordering(940) 00:16:38.911 fused_ordering(941) 00:16:38.911 fused_ordering(942) 00:16:38.911 fused_ordering(943) 00:16:38.911 fused_ordering(944) 00:16:38.911 fused_ordering(945) 00:16:38.911 fused_ordering(946) 00:16:38.911 fused_ordering(947) 00:16:38.911 fused_ordering(948) 00:16:38.911 fused_ordering(949) 00:16:38.911 fused_ordering(950) 00:16:38.911 fused_ordering(951) 00:16:38.911 fused_ordering(952) 00:16:38.911 fused_ordering(953) 00:16:38.911 fused_ordering(954) 00:16:38.911 fused_ordering(955) 00:16:38.911 fused_ordering(956) 00:16:38.911 
fused_ordering(957) 00:16:38.911 fused_ordering(958) 00:16:38.911 fused_ordering(959) 00:16:38.911 fused_ordering(960) 00:16:38.911 fused_ordering(961) 00:16:38.911 fused_ordering(962) 00:16:38.911 fused_ordering(963) 00:16:38.911 fused_ordering(964) 00:16:38.911 fused_ordering(965) 00:16:38.911 fused_ordering(966) 00:16:38.911 fused_ordering(967) 00:16:38.911 fused_ordering(968) 00:16:38.911 fused_ordering(969) 00:16:38.911 fused_ordering(970) 00:16:38.911 fused_ordering(971) 00:16:38.911 fused_ordering(972) 00:16:38.911 fused_ordering(973) 00:16:38.911 fused_ordering(974) 00:16:38.911 fused_ordering(975) 00:16:38.911 fused_ordering(976) 00:16:38.911 fused_ordering(977) 00:16:38.911 fused_ordering(978) 00:16:38.911 fused_ordering(979) 00:16:38.911 fused_ordering(980) 00:16:38.911 fused_ordering(981) 00:16:38.911 fused_ordering(982) 00:16:38.911 fused_ordering(983) 00:16:38.911 fused_ordering(984) 00:16:38.911 fused_ordering(985) 00:16:38.911 fused_ordering(986) 00:16:38.911 fused_ordering(987) 00:16:38.911 fused_ordering(988) 00:16:38.911 fused_ordering(989) 00:16:38.911 fused_ordering(990) 00:16:38.911 fused_ordering(991) 00:16:38.911 fused_ordering(992) 00:16:38.911 fused_ordering(993) 00:16:38.911 fused_ordering(994) 00:16:38.911 fused_ordering(995) 00:16:38.911 fused_ordering(996) 00:16:38.911 fused_ordering(997) 00:16:38.911 fused_ordering(998) 00:16:38.911 fused_ordering(999) 00:16:38.911 fused_ordering(1000) 00:16:38.911 fused_ordering(1001) 00:16:38.911 fused_ordering(1002) 00:16:38.911 fused_ordering(1003) 00:16:38.911 fused_ordering(1004) 00:16:38.911 fused_ordering(1005) 00:16:38.911 fused_ordering(1006) 00:16:38.911 fused_ordering(1007) 00:16:38.911 fused_ordering(1008) 00:16:38.911 fused_ordering(1009) 00:16:38.911 fused_ordering(1010) 00:16:38.911 fused_ordering(1011) 00:16:38.911 fused_ordering(1012) 00:16:38.911 fused_ordering(1013) 00:16:38.911 fused_ordering(1014) 00:16:38.911 fused_ordering(1015) 00:16:38.911 fused_ordering(1016) 00:16:38.911 fused_ordering(1017) 00:16:38.911 fused_ordering(1018) 00:16:38.911 fused_ordering(1019) 00:16:38.911 fused_ordering(1020) 00:16:38.911 fused_ordering(1021) 00:16:38.911 fused_ordering(1022) 00:16:38.911 fused_ordering(1023) 00:16:38.911 11:40:08 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:16:38.911 11:40:08 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:16:38.911 11:40:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:38.911 11:40:08 -- nvmf/common.sh@116 -- # sync 00:16:39.170 11:40:08 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:16:39.171 11:40:08 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:16:39.171 11:40:08 -- nvmf/common.sh@119 -- # set +e 00:16:39.171 11:40:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:39.171 11:40:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:16:39.171 rmmod nvme_rdma 00:16:39.171 rmmod nvme_fabrics 00:16:39.171 11:40:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:39.171 11:40:08 -- nvmf/common.sh@123 -- # set -e 00:16:39.171 11:40:08 -- nvmf/common.sh@124 -- # return 0 00:16:39.171 11:40:08 -- nvmf/common.sh@477 -- # '[' -n 2324620 ']' 00:16:39.171 11:40:08 -- nvmf/common.sh@478 -- # killprocess 2324620 00:16:39.171 11:40:08 -- common/autotest_common.sh@926 -- # '[' -z 2324620 ']' 00:16:39.171 11:40:08 -- common/autotest_common.sh@930 -- # kill -0 2324620 00:16:39.171 11:40:08 -- common/autotest_common.sh@931 -- # uname 00:16:39.171 11:40:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:39.171 11:40:08 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2324620 00:16:39.171 11:40:08 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:39.171 11:40:08 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:39.171 11:40:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2324620' 00:16:39.171 killing process with pid 2324620 00:16:39.171 11:40:08 -- common/autotest_common.sh@945 -- # kill 2324620 00:16:39.171 11:40:08 -- common/autotest_common.sh@950 -- # wait 2324620 00:16:39.430 11:40:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:39.430 11:40:08 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:16:39.430 00:16:39.430 real 0m10.458s 00:16:39.430 user 0m5.063s 00:16:39.430 sys 0m6.782s 00:16:39.430 11:40:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:39.430 11:40:08 -- common/autotest_common.sh@10 -- # set +x 00:16:39.430 ************************************ 00:16:39.430 END TEST nvmf_fused_ordering 00:16:39.430 ************************************ 00:16:39.430 11:40:08 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:16:39.430 11:40:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:39.430 11:40:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:39.430 11:40:08 -- common/autotest_common.sh@10 -- # set +x 00:16:39.430 ************************************ 00:16:39.430 START TEST nvmf_delete_subsystem 00:16:39.430 ************************************ 00:16:39.430 11:40:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:16:39.430 * Looking for test storage... 
00:16:39.430 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:39.430 11:40:08 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:39.430 11:40:08 -- nvmf/common.sh@7 -- # uname -s 00:16:39.430 11:40:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:39.430 11:40:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:39.430 11:40:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:39.430 11:40:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:39.430 11:40:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:39.430 11:40:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:39.430 11:40:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:39.430 11:40:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:39.430 11:40:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:39.430 11:40:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:39.430 11:40:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:39.430 11:40:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:39.430 11:40:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:39.430 11:40:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:39.430 11:40:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:39.430 11:40:08 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:39.430 11:40:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:39.430 11:40:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:39.430 11:40:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:39.430 11:40:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.430 11:40:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.430 11:40:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.430 11:40:08 -- paths/export.sh@5 -- # export PATH 00:16:39.430 11:40:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.430 11:40:08 -- nvmf/common.sh@46 -- # : 0 00:16:39.430 11:40:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:39.430 11:40:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:39.430 11:40:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:39.430 11:40:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:39.430 11:40:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:39.430 11:40:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:39.430 11:40:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:39.430 11:40:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:39.430 11:40:08 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:16:39.430 11:40:08 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:16:39.430 11:40:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:39.430 11:40:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:39.430 11:40:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:39.430 11:40:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:39.430 11:40:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.430 11:40:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:39.430 11:40:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.430 11:40:08 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:39.430 11:40:08 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:39.430 11:40:08 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:39.430 11:40:08 -- common/autotest_common.sh@10 -- # set +x 00:16:47.533 11:40:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:47.533 11:40:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:47.533 11:40:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:47.533 11:40:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:47.533 11:40:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:47.533 11:40:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:47.533 11:40:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:47.533 11:40:16 -- nvmf/common.sh@294 -- # net_devs=() 00:16:47.533 11:40:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:47.533 11:40:16 -- nvmf/common.sh@295 -- # e810=() 00:16:47.533 11:40:16 -- nvmf/common.sh@295 -- # local -ga e810 00:16:47.533 11:40:16 -- nvmf/common.sh@296 -- # 
x722=() 00:16:47.533 11:40:16 -- nvmf/common.sh@296 -- # local -ga x722 00:16:47.533 11:40:16 -- nvmf/common.sh@297 -- # mlx=() 00:16:47.533 11:40:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:47.533 11:40:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:47.533 11:40:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:47.533 11:40:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:47.533 11:40:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:47.533 11:40:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:47.533 11:40:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:47.533 11:40:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:47.533 11:40:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:47.533 11:40:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:47.533 11:40:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:47.533 11:40:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:47.533 11:40:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:47.533 11:40:16 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:16:47.533 11:40:16 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:16:47.533 11:40:16 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:16:47.533 11:40:16 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:16:47.533 11:40:16 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:16:47.533 11:40:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:47.533 11:40:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:47.533 11:40:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:47.533 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:47.533 11:40:16 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:47.533 11:40:16 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:47.533 11:40:16 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:47.533 11:40:16 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:47.533 11:40:16 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:47.533 11:40:16 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:47.533 11:40:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:47.533 11:40:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:47.533 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:47.533 11:40:16 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:47.533 11:40:16 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:47.533 11:40:16 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:47.533 11:40:16 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:47.533 11:40:16 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:47.533 11:40:16 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:47.533 11:40:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:47.533 11:40:16 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:16:47.533 11:40:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:47.533 11:40:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:47.533 11:40:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:47.533 11:40:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:47.533 11:40:16 -- 
nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:16:47.533 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:47.533 11:40:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:47.533 11:40:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:47.533 11:40:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:47.533 11:40:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:47.533 11:40:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:47.533 11:40:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:16:47.533 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:47.533 11:40:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:47.533 11:40:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:47.533 11:40:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:47.533 11:40:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:47.533 11:40:16 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:16:47.533 11:40:16 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:16:47.533 11:40:16 -- nvmf/common.sh@408 -- # rdma_device_init 00:16:47.533 11:40:16 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:16:47.533 11:40:16 -- nvmf/common.sh@57 -- # uname 00:16:47.533 11:40:16 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:16:47.533 11:40:16 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:16:47.533 11:40:16 -- nvmf/common.sh@62 -- # modprobe ib_core 00:16:47.533 11:40:16 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:16:47.533 11:40:16 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:16:47.533 11:40:16 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:16:47.533 11:40:16 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:16:47.533 11:40:16 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:16:47.790 11:40:16 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:16:47.790 11:40:16 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:47.790 11:40:16 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:16:47.790 11:40:16 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:47.790 11:40:16 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:47.790 11:40:16 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:47.790 11:40:16 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:47.790 11:40:16 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:47.790 11:40:16 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:47.790 11:40:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:47.790 11:40:16 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:47.790 11:40:16 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:47.790 11:40:16 -- nvmf/common.sh@104 -- # continue 2 00:16:47.790 11:40:16 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:47.790 11:40:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:47.790 11:40:16 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:47.790 11:40:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:47.790 11:40:16 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:47.790 11:40:16 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:47.790 11:40:16 -- nvmf/common.sh@104 -- # continue 2 00:16:47.790 11:40:16 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:47.790 11:40:16 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:16:47.790 11:40:16 -- 
nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:47.790 11:40:16 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:47.790 11:40:16 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:47.790 11:40:17 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:47.790 11:40:17 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:16:47.790 11:40:17 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:16:47.790 11:40:17 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:16:47.790 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:47.790 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:16:47.790 altname enp217s0f0np0 00:16:47.790 altname ens818f0np0 00:16:47.790 inet 192.168.100.8/24 scope global mlx_0_0 00:16:47.790 valid_lft forever preferred_lft forever 00:16:47.790 11:40:17 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:47.790 11:40:17 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:16:47.790 11:40:17 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:47.790 11:40:17 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:47.790 11:40:17 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:47.790 11:40:17 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:47.790 11:40:17 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:16:47.790 11:40:17 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:16:47.790 11:40:17 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:16:47.790 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:47.790 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:16:47.790 altname enp217s0f1np1 00:16:47.790 altname ens818f1np1 00:16:47.790 inet 192.168.100.9/24 scope global mlx_0_1 00:16:47.790 valid_lft forever preferred_lft forever 00:16:47.790 11:40:17 -- nvmf/common.sh@410 -- # return 0 00:16:47.790 11:40:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:47.790 11:40:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:47.790 11:40:17 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:16:47.790 11:40:17 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:16:47.790 11:40:17 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:16:47.790 11:40:17 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:47.790 11:40:17 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:47.790 11:40:17 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:47.790 11:40:17 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:47.790 11:40:17 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:47.790 11:40:17 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:47.790 11:40:17 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:47.790 11:40:17 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:47.790 11:40:17 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:47.790 11:40:17 -- nvmf/common.sh@104 -- # continue 2 00:16:47.790 11:40:17 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:47.790 11:40:17 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:47.790 11:40:17 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:47.790 11:40:17 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:47.790 11:40:17 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:47.790 11:40:17 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:47.790 11:40:17 -- nvmf/common.sh@104 -- # continue 2 00:16:47.790 11:40:17 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:47.790 
11:40:17 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:16:47.790 11:40:17 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:47.790 11:40:17 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:47.790 11:40:17 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:47.790 11:40:17 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:47.790 11:40:17 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:47.790 11:40:17 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:16:47.790 11:40:17 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:47.790 11:40:17 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:47.790 11:40:17 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:47.790 11:40:17 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:47.790 11:40:17 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:16:47.790 192.168.100.9' 00:16:47.790 11:40:17 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:16:47.790 192.168.100.9' 00:16:47.790 11:40:17 -- nvmf/common.sh@445 -- # head -n 1 00:16:47.790 11:40:17 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:47.790 11:40:17 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:47.790 192.168.100.9' 00:16:47.790 11:40:17 -- nvmf/common.sh@446 -- # tail -n +2 00:16:47.790 11:40:17 -- nvmf/common.sh@446 -- # head -n 1 00:16:47.790 11:40:17 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:47.790 11:40:17 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:16:47.790 11:40:17 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:47.790 11:40:17 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:16:47.790 11:40:17 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:16:47.790 11:40:17 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:16:47.790 11:40:17 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:16:47.790 11:40:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:47.790 11:40:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:47.790 11:40:17 -- common/autotest_common.sh@10 -- # set +x 00:16:47.790 11:40:17 -- nvmf/common.sh@469 -- # nvmfpid=2328915 00:16:47.790 11:40:17 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:47.790 11:40:17 -- nvmf/common.sh@470 -- # waitforlisten 2328915 00:16:47.790 11:40:17 -- common/autotest_common.sh@819 -- # '[' -z 2328915 ']' 00:16:47.790 11:40:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.790 11:40:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:47.790 11:40:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.791 11:40:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:47.791 11:40:17 -- common/autotest_common.sh@10 -- # set +x 00:16:47.791 [2024-07-21 11:40:17.206412] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:16:47.791 [2024-07-21 11:40:17.206471] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.047 EAL: No free 2048 kB hugepages reported on node 1 00:16:48.047 [2024-07-21 11:40:17.292642] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:48.047 [2024-07-21 11:40:17.329376] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:48.047 [2024-07-21 11:40:17.329493] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:48.047 [2024-07-21 11:40:17.329505] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:48.047 [2024-07-21 11:40:17.329515] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:48.047 [2024-07-21 11:40:17.329566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.047 [2024-07-21 11:40:17.329568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.610 11:40:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:48.610 11:40:18 -- common/autotest_common.sh@852 -- # return 0 00:16:48.610 11:40:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:48.610 11:40:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:48.610 11:40:18 -- common/autotest_common.sh@10 -- # set +x 00:16:48.867 11:40:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:48.867 11:40:18 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:48.867 11:40:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:48.867 11:40:18 -- common/autotest_common.sh@10 -- # set +x 00:16:48.867 [2024-07-21 11:40:18.081361] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xccde80/0xcd2370) succeed. 00:16:48.867 [2024-07-21 11:40:18.090431] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xccf380/0xd13a00) succeed. 
00:16:48.867 11:40:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:48.867 11:40:18 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:48.867 11:40:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:48.867 11:40:18 -- common/autotest_common.sh@10 -- # set +x 00:16:48.867 11:40:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:48.867 11:40:18 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:48.867 11:40:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:48.867 11:40:18 -- common/autotest_common.sh@10 -- # set +x 00:16:48.867 [2024-07-21 11:40:18.174425] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:48.867 11:40:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:48.867 11:40:18 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:48.867 11:40:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:48.867 11:40:18 -- common/autotest_common.sh@10 -- # set +x 00:16:48.867 NULL1 00:16:48.867 11:40:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:48.867 11:40:18 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:48.867 11:40:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:48.867 11:40:18 -- common/autotest_common.sh@10 -- # set +x 00:16:48.867 Delay0 00:16:48.867 11:40:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:48.867 11:40:18 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:48.867 11:40:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:48.867 11:40:18 -- common/autotest_common.sh@10 -- # set +x 00:16:48.867 11:40:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:48.867 11:40:18 -- target/delete_subsystem.sh@28 -- # perf_pid=2329193 00:16:48.867 11:40:18 -- target/delete_subsystem.sh@30 -- # sleep 2 00:16:48.867 11:40:18 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:16:48.867 EAL: No free 2048 kB hugepages reported on node 1 00:16:48.867 [2024-07-21 11:40:18.277182] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:16:51.387 11:40:20 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:51.387 11:40:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:51.387 11:40:20 -- common/autotest_common.sh@10 -- # set +x 00:16:51.951 NVMe io qpair process completion error 00:16:51.951 NVMe io qpair process completion error 00:16:51.951 NVMe io qpair process completion error 00:16:51.951 NVMe io qpair process completion error 00:16:51.951 NVMe io qpair process completion error 00:16:51.951 NVMe io qpair process completion error 00:16:51.951 11:40:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:51.951 11:40:21 -- target/delete_subsystem.sh@34 -- # delay=0 00:16:51.951 11:40:21 -- target/delete_subsystem.sh@35 -- # kill -0 2329193 00:16:51.951 11:40:21 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:16:52.515 11:40:21 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:16:52.515 11:40:21 -- target/delete_subsystem.sh@35 -- # kill -0 2329193 00:16:52.515 11:40:21 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:16:53.079 Read completed with error (sct=0, sc=8) 00:16:53.079 starting I/O failed: -6 00:16:53.079 Write completed with error (sct=0, sc=8) 00:16:53.079 starting I/O failed: -6 00:16:53.079 Read completed with error (sct=0, sc=8) 00:16:53.079 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Write completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Write completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Write completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Write completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Write completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with 
error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Write completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Write completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Write completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Write completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Write completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Write completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 Write completed with error (sct=0, sc=8) 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 Write completed with error (sct=0, sc=8) 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 Write completed with error (sct=0, sc=8) 00:16:53.080 Write completed with error (sct=0, sc=8) 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 Write completed with error (sct=0, sc=8) 00:16:53.080 Write completed with error (sct=0, sc=8) 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 Read completed with error 
(sct=0, sc=8) 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 Write completed with error (sct=0, sc=8) 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 Write completed with error (sct=0, sc=8) 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 Write completed with error (sct=0, sc=8) 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 Write completed with error (sct=0, sc=8) 00:16:53.080 Write completed with error (sct=0, sc=8) 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 Write completed with error (sct=0, sc=8) 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 Write completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Write completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Write completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Write completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Write completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Write completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Write completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Write completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Write completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 
starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Write completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Read completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.080 Write completed with error (sct=0, sc=8) 00:16:53.080 starting I/O failed: -6 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 starting I/O failed: -6 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 starting I/O failed: -6 00:16:53.081 Write completed with error (sct=0, sc=8) 00:16:53.081 starting I/O failed: -6 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Write completed with error (sct=0, sc=8) 00:16:53.081 Write completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Write completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Write completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Write completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Write completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Write completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Write completed with error (sct=0, sc=8) 00:16:53.081 Write completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Write completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Write completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Write completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Write completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Write completed with error 
(sct=0, sc=8) 00:16:53.081 Write completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Write completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Write completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Write completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Write completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Write completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Write completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Write completed with error (sct=0, sc=8) 00:16:53.081 Write completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Write completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Write completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Write completed with error (sct=0, sc=8) 00:16:53.081 Write completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Write completed with error (sct=0, sc=8) 00:16:53.081 Write completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Write completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Write completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Write completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read 
completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Write completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Write completed with error (sct=0, sc=8) 00:16:53.081 Write completed with error (sct=0, sc=8) 00:16:53.081 Read completed with error (sct=0, sc=8) 00:16:53.081 Write completed with error (sct=0, sc=8) 00:16:53.081 11:40:22 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:16:53.081 11:40:22 -- target/delete_subsystem.sh@35 -- # kill -0 2329193 00:16:53.081 11:40:22 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:16:53.081 [2024-07-21 11:40:22.375514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:53.081 [2024-07-21 11:40:22.375558] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:53.081 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:16:53.081 Initializing NVMe Controllers 00:16:53.081 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:16:53.081 Controller IO queue size 128, less than required. 00:16:53.081 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:53.081 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:16:53.081 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:16:53.081 Initialization complete. Launching workers. 00:16:53.081 ======================================================== 00:16:53.081 Latency(us) 00:16:53.081 Device Information : IOPS MiB/s Average min max 00:16:53.081 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.45 0.04 1594202.63 1000115.04 2978419.34 00:16:53.081 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.45 0.04 1595838.79 1000718.06 2979417.25 00:16:53.081 ======================================================== 00:16:53.081 Total : 160.91 0.08 1595020.71 1000115.04 2979417.25 00:16:53.081 00:16:53.645 11:40:22 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:16:53.645 11:40:22 -- target/delete_subsystem.sh@35 -- # kill -0 2329193 00:16:53.645 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2329193) - No such process 00:16:53.645 11:40:22 -- target/delete_subsystem.sh@45 -- # NOT wait 2329193 00:16:53.645 11:40:22 -- common/autotest_common.sh@640 -- # local es=0 00:16:53.645 11:40:22 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 2329193 00:16:53.645 11:40:22 -- common/autotest_common.sh@628 -- # local arg=wait 00:16:53.645 11:40:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:53.645 11:40:22 -- common/autotest_common.sh@632 -- # type -t wait 00:16:53.645 11:40:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:53.645 11:40:22 -- common/autotest_common.sh@643 -- # wait 2329193 00:16:53.645 11:40:22 -- common/autotest_common.sh@643 -- # es=1 00:16:53.645 11:40:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:53.645 11:40:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:53.645 11:40:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:53.645 
11:40:22 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:53.645 11:40:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:53.645 11:40:22 -- common/autotest_common.sh@10 -- # set +x 00:16:53.645 11:40:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:53.645 11:40:22 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:53.645 11:40:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:53.645 11:40:22 -- common/autotest_common.sh@10 -- # set +x 00:16:53.645 [2024-07-21 11:40:22.897042] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:53.645 11:40:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:53.645 11:40:22 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:53.645 11:40:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:53.645 11:40:22 -- common/autotest_common.sh@10 -- # set +x 00:16:53.645 11:40:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:53.645 11:40:22 -- target/delete_subsystem.sh@54 -- # perf_pid=2330012 00:16:53.645 11:40:22 -- target/delete_subsystem.sh@56 -- # delay=0 00:16:53.645 11:40:22 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:16:53.645 11:40:22 -- target/delete_subsystem.sh@57 -- # kill -0 2330012 00:16:53.645 11:40:22 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:53.645 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.645 [2024-07-21 11:40:22.982511] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:16:54.209 11:40:23 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:54.209 11:40:23 -- target/delete_subsystem.sh@57 -- # kill -0 2330012 00:16:54.209 11:40:23 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:54.773 11:40:23 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:54.773 11:40:23 -- target/delete_subsystem.sh@57 -- # kill -0 2330012 00:16:54.773 11:40:23 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:55.029 11:40:24 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:55.029 11:40:24 -- target/delete_subsystem.sh@57 -- # kill -0 2330012 00:16:55.030 11:40:24 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:55.591 11:40:24 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:55.591 11:40:24 -- target/delete_subsystem.sh@57 -- # kill -0 2330012 00:16:55.591 11:40:24 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:56.154 11:40:25 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:56.154 11:40:25 -- target/delete_subsystem.sh@57 -- # kill -0 2330012 00:16:56.154 11:40:25 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:56.716 11:40:25 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:56.716 11:40:25 -- target/delete_subsystem.sh@57 -- # kill -0 2330012 00:16:56.716 11:40:25 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:57.280 11:40:26 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:57.280 11:40:26 -- target/delete_subsystem.sh@57 -- # kill -0 2330012 00:16:57.280 11:40:26 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:57.537 11:40:26 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:57.537 11:40:26 -- target/delete_subsystem.sh@57 -- # kill -0 2330012 00:16:57.537 11:40:26 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:58.101 11:40:27 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:58.101 11:40:27 -- target/delete_subsystem.sh@57 -- # kill -0 2330012 00:16:58.101 11:40:27 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:58.663 11:40:27 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:58.663 11:40:27 -- target/delete_subsystem.sh@57 -- # kill -0 2330012 00:16:58.663 11:40:27 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:59.255 11:40:28 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:59.255 11:40:28 -- target/delete_subsystem.sh@57 -- # kill -0 2330012 00:16:59.255 11:40:28 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:59.836 11:40:28 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:59.836 11:40:28 -- target/delete_subsystem.sh@57 -- # kill -0 2330012 00:16:59.836 11:40:28 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:00.093 11:40:29 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:00.093 11:40:29 -- target/delete_subsystem.sh@57 -- # kill -0 2330012 00:17:00.093 11:40:29 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:00.656 11:40:29 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:00.656 11:40:29 -- target/delete_subsystem.sh@57 -- # kill -0 2330012 00:17:00.656 11:40:29 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:00.913 Initializing NVMe Controllers 00:17:00.913 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:17:00.913 Controller IO queue size 128, less than required. 00:17:00.913 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:17:00.913 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:17:00.913 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:17:00.913 Initialization complete. Launching workers. 00:17:00.913 ======================================================== 00:17:00.913 Latency(us) 00:17:00.913 Device Information : IOPS MiB/s Average min max 00:17:00.913 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001186.42 1000056.66 1003874.81 00:17:00.913 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002328.45 1000060.70 1005642.60 00:17:00.913 ======================================================== 00:17:00.913 Total : 256.00 0.12 1001757.44 1000056.66 1005642.60 00:17:00.913 00:17:01.169 11:40:30 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:01.169 11:40:30 -- target/delete_subsystem.sh@57 -- # kill -0 2330012 00:17:01.169 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2330012) - No such process 00:17:01.169 11:40:30 -- target/delete_subsystem.sh@67 -- # wait 2330012 00:17:01.169 11:40:30 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:17:01.169 11:40:30 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:17:01.169 11:40:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:01.169 11:40:30 -- nvmf/common.sh@116 -- # sync 00:17:01.169 11:40:30 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:01.169 11:40:30 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:01.169 11:40:30 -- nvmf/common.sh@119 -- # set +e 00:17:01.169 11:40:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:01.169 11:40:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:01.169 rmmod nvme_rdma 00:17:01.169 rmmod nvme_fabrics 00:17:01.169 11:40:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:01.169 11:40:30 -- nvmf/common.sh@123 -- # set -e 00:17:01.169 11:40:30 -- nvmf/common.sh@124 -- # return 0 00:17:01.169 11:40:30 -- nvmf/common.sh@477 -- # '[' -n 2328915 ']' 00:17:01.169 11:40:30 -- nvmf/common.sh@478 -- # killprocess 2328915 00:17:01.169 11:40:30 -- common/autotest_common.sh@926 -- # '[' -z 2328915 ']' 00:17:01.169 11:40:30 -- common/autotest_common.sh@930 -- # kill -0 2328915 00:17:01.169 11:40:30 -- common/autotest_common.sh@931 -- # uname 00:17:01.169 11:40:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:01.169 11:40:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2328915 00:17:01.425 11:40:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:01.425 11:40:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:01.425 11:40:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2328915' 00:17:01.425 killing process with pid 2328915 00:17:01.425 11:40:30 -- common/autotest_common.sh@945 -- # kill 2328915 00:17:01.425 11:40:30 -- common/autotest_common.sh@950 -- # wait 2328915 00:17:01.425 11:40:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:01.425 11:40:30 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:01.425 00:17:01.425 real 0m22.132s 00:17:01.425 user 0m50.438s 00:17:01.425 sys 0m7.701s 00:17:01.425 11:40:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:01.425 11:40:30 -- common/autotest_common.sh@10 -- # set +x 00:17:01.425 ************************************ 00:17:01.425 END TEST nvmf_delete_subsystem 00:17:01.425 
************************************ 00:17:01.683 11:40:30 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:17:01.683 11:40:30 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:17:01.683 11:40:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:01.683 11:40:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:01.683 11:40:30 -- common/autotest_common.sh@10 -- # set +x 00:17:01.683 ************************************ 00:17:01.683 START TEST nvmf_nvme_cli 00:17:01.683 ************************************ 00:17:01.683 11:40:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:17:01.683 * Looking for test storage... 00:17:01.683 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:01.683 11:40:30 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:01.683 11:40:30 -- nvmf/common.sh@7 -- # uname -s 00:17:01.683 11:40:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:01.683 11:40:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:01.683 11:40:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:01.683 11:40:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:01.683 11:40:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:01.683 11:40:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:01.683 11:40:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:01.683 11:40:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:01.683 11:40:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:01.683 11:40:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:01.683 11:40:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:01.683 11:40:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:01.683 11:40:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:01.683 11:40:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:01.683 11:40:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:01.683 11:40:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:01.683 11:40:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:01.683 11:40:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:01.683 11:40:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:01.683 11:40:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.683 11:40:30 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.683 11:40:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.683 11:40:30 -- paths/export.sh@5 -- # export PATH 00:17:01.683 11:40:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.683 11:40:30 -- nvmf/common.sh@46 -- # : 0 00:17:01.683 11:40:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:01.683 11:40:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:01.683 11:40:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:01.683 11:40:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:01.683 11:40:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:01.683 11:40:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:01.683 11:40:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:01.683 11:40:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:01.683 11:40:30 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:01.683 11:40:30 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:01.683 11:40:30 -- target/nvme_cli.sh@14 -- # devs=() 00:17:01.683 11:40:30 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:01.683 11:40:30 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:01.683 11:40:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:01.683 11:40:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:01.683 11:40:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:01.683 11:40:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:01.683 11:40:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.683 11:40:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:01.683 11:40:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.683 11:40:31 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:01.683 11:40:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:01.683 11:40:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:01.683 11:40:31 -- common/autotest_common.sh@10 -- # set +x 00:17:11.647 11:40:39 -- 
nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:11.647 11:40:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:11.647 11:40:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:11.647 11:40:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:11.647 11:40:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:11.647 11:40:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:11.647 11:40:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:11.647 11:40:39 -- nvmf/common.sh@294 -- # net_devs=() 00:17:11.647 11:40:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:11.647 11:40:39 -- nvmf/common.sh@295 -- # e810=() 00:17:11.647 11:40:39 -- nvmf/common.sh@295 -- # local -ga e810 00:17:11.647 11:40:39 -- nvmf/common.sh@296 -- # x722=() 00:17:11.647 11:40:39 -- nvmf/common.sh@296 -- # local -ga x722 00:17:11.647 11:40:39 -- nvmf/common.sh@297 -- # mlx=() 00:17:11.647 11:40:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:11.647 11:40:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:11.647 11:40:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:11.647 11:40:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:11.647 11:40:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:11.647 11:40:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:11.647 11:40:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:11.647 11:40:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:11.647 11:40:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:11.647 11:40:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:11.647 11:40:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:11.647 11:40:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:11.647 11:40:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:11.647 11:40:39 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:11.647 11:40:39 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:11.647 11:40:39 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:11.647 11:40:39 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:17:11.647 11:40:39 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:11.647 11:40:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:11.647 11:40:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:11.647 11:40:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:11.647 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:11.647 11:40:39 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:11.647 11:40:39 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:11.647 11:40:39 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:11.647 11:40:39 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:11.647 11:40:39 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:11.647 11:40:39 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:11.647 11:40:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:11.647 11:40:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:11.647 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:11.647 11:40:39 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:11.647 11:40:39 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:11.647 11:40:39 -- nvmf/common.sh@349 
-- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:11.647 11:40:39 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:11.647 11:40:39 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:11.647 11:40:39 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:11.647 11:40:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:11.647 11:40:39 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:11.647 11:40:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:11.647 11:40:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:11.647 11:40:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:11.647 11:40:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:11.648 11:40:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:11.648 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:11.648 11:40:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:11.648 11:40:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:11.648 11:40:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:11.648 11:40:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:11.648 11:40:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:11.648 11:40:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:11.648 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:11.648 11:40:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:11.648 11:40:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:11.648 11:40:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:11.648 11:40:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:11.648 11:40:39 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:11.648 11:40:39 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:11.648 11:40:39 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:11.648 11:40:39 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:11.648 11:40:39 -- nvmf/common.sh@57 -- # uname 00:17:11.648 11:40:39 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:11.648 11:40:39 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:11.648 11:40:39 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:11.648 11:40:39 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:11.648 11:40:39 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:11.648 11:40:39 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:11.648 11:40:39 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:11.648 11:40:39 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:11.648 11:40:39 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:11.648 11:40:39 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:11.648 11:40:39 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:11.648 11:40:39 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:11.648 11:40:39 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:11.648 11:40:39 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:11.648 11:40:39 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:11.648 11:40:39 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:11.648 11:40:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:11.648 11:40:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:11.648 11:40:39 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:11.648 11:40:39 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:11.648 11:40:39 -- nvmf/common.sh@104 -- # 
continue 2 00:17:11.648 11:40:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:11.648 11:40:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:11.648 11:40:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:11.648 11:40:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:11.648 11:40:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:11.648 11:40:39 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:11.648 11:40:39 -- nvmf/common.sh@104 -- # continue 2 00:17:11.648 11:40:39 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:11.648 11:40:39 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:11.648 11:40:39 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:11.648 11:40:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:11.648 11:40:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:11.648 11:40:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:11.648 11:40:39 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:11.648 11:40:39 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:11.648 11:40:39 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:11.648 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:11.648 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:11.648 altname enp217s0f0np0 00:17:11.648 altname ens818f0np0 00:17:11.648 inet 192.168.100.8/24 scope global mlx_0_0 00:17:11.648 valid_lft forever preferred_lft forever 00:17:11.648 11:40:39 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:11.648 11:40:39 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:11.648 11:40:39 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:11.648 11:40:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:11.648 11:40:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:11.648 11:40:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:11.648 11:40:39 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:11.648 11:40:39 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:11.648 11:40:39 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:17:11.648 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:11.648 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:11.648 altname enp217s0f1np1 00:17:11.648 altname ens818f1np1 00:17:11.648 inet 192.168.100.9/24 scope global mlx_0_1 00:17:11.648 valid_lft forever preferred_lft forever 00:17:11.648 11:40:39 -- nvmf/common.sh@410 -- # return 0 00:17:11.648 11:40:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:11.648 11:40:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:11.648 11:40:39 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:11.648 11:40:39 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:11.648 11:40:39 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:11.648 11:40:39 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:11.648 11:40:39 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:11.648 11:40:39 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:11.648 11:40:39 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:11.648 11:40:39 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:11.648 11:40:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:11.648 11:40:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:11.648 11:40:39 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:11.648 11:40:39 -- 
nvmf/common.sh@103 -- # echo mlx_0_0 00:17:11.648 11:40:39 -- nvmf/common.sh@104 -- # continue 2 00:17:11.648 11:40:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:11.648 11:40:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:11.648 11:40:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:11.648 11:40:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:11.648 11:40:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:11.648 11:40:39 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:11.648 11:40:39 -- nvmf/common.sh@104 -- # continue 2 00:17:11.648 11:40:39 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:11.648 11:40:39 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:11.648 11:40:39 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:11.648 11:40:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:11.648 11:40:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:11.648 11:40:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:11.648 11:40:39 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:11.648 11:40:39 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:11.648 11:40:39 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:11.648 11:40:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:11.648 11:40:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:11.648 11:40:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:11.648 11:40:39 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:11.648 192.168.100.9' 00:17:11.648 11:40:39 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:11.648 192.168.100.9' 00:17:11.648 11:40:39 -- nvmf/common.sh@445 -- # head -n 1 00:17:11.648 11:40:39 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:11.648 11:40:39 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:11.648 192.168.100.9' 00:17:11.648 11:40:39 -- nvmf/common.sh@446 -- # tail -n +2 00:17:11.648 11:40:39 -- nvmf/common.sh@446 -- # head -n 1 00:17:11.648 11:40:39 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:11.648 11:40:39 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:11.648 11:40:39 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:11.648 11:40:39 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:11.648 11:40:39 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:11.648 11:40:39 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:11.648 11:40:39 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:11.648 11:40:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:11.648 11:40:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:11.648 11:40:39 -- common/autotest_common.sh@10 -- # set +x 00:17:11.648 11:40:39 -- nvmf/common.sh@469 -- # nvmfpid=2335460 00:17:11.648 11:40:39 -- nvmf/common.sh@470 -- # waitforlisten 2335460 00:17:11.648 11:40:39 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:11.648 11:40:39 -- common/autotest_common.sh@819 -- # '[' -z 2335460 ']' 00:17:11.648 11:40:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.648 11:40:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:11.648 11:40:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:11.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:11.648 11:40:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:11.648 11:40:39 -- common/autotest_common.sh@10 -- # set +x 00:17:11.648 [2024-07-21 11:40:39.617312] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:11.648 [2024-07-21 11:40:39.617360] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:11.648 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.648 [2024-07-21 11:40:39.701391] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:11.648 [2024-07-21 11:40:39.740709] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:11.648 [2024-07-21 11:40:39.740815] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:11.648 [2024-07-21 11:40:39.740825] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:11.648 [2024-07-21 11:40:39.740834] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:11.648 [2024-07-21 11:40:39.740874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.648 [2024-07-21 11:40:39.740987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:11.648 [2024-07-21 11:40:39.741015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:11.648 [2024-07-21 11:40:39.741017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.648 11:40:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:11.648 11:40:40 -- common/autotest_common.sh@852 -- # return 0 00:17:11.648 11:40:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:11.648 11:40:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:11.648 11:40:40 -- common/autotest_common.sh@10 -- # set +x 00:17:11.648 11:40:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:11.648 11:40:40 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:11.648 11:40:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:11.648 11:40:40 -- common/autotest_common.sh@10 -- # set +x 00:17:11.648 [2024-07-21 11:40:40.494209] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x17fa4b0/0x17fe9a0) succeed. 00:17:11.648 [2024-07-21 11:40:40.504644] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x17fbaa0/0x1840030) succeed. 
00:17:11.648 11:40:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:11.649 11:40:40 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:11.649 11:40:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:11.649 11:40:40 -- common/autotest_common.sh@10 -- # set +x 00:17:11.649 Malloc0 00:17:11.649 11:40:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:11.649 11:40:40 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:11.649 11:40:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:11.649 11:40:40 -- common/autotest_common.sh@10 -- # set +x 00:17:11.649 Malloc1 00:17:11.649 11:40:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:11.649 11:40:40 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:11.649 11:40:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:11.649 11:40:40 -- common/autotest_common.sh@10 -- # set +x 00:17:11.649 11:40:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:11.649 11:40:40 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:11.649 11:40:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:11.649 11:40:40 -- common/autotest_common.sh@10 -- # set +x 00:17:11.649 11:40:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:11.649 11:40:40 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:11.649 11:40:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:11.649 11:40:40 -- common/autotest_common.sh@10 -- # set +x 00:17:11.649 11:40:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:11.649 11:40:40 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:11.649 11:40:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:11.649 11:40:40 -- common/autotest_common.sh@10 -- # set +x 00:17:11.649 [2024-07-21 11:40:40.703552] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:11.649 11:40:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:11.649 11:40:40 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:17:11.649 11:40:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:11.649 11:40:40 -- common/autotest_common.sh@10 -- # set +x 00:17:11.649 11:40:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:11.649 11:40:40 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:17:11.649 00:17:11.649 Discovery Log Number of Records 2, Generation counter 2 00:17:11.649 =====Discovery Log Entry 0====== 00:17:11.649 trtype: rdma 00:17:11.649 adrfam: ipv4 00:17:11.649 subtype: current discovery subsystem 00:17:11.649 treq: not required 00:17:11.649 portid: 0 00:17:11.649 trsvcid: 4420 00:17:11.649 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:11.649 traddr: 192.168.100.8 00:17:11.649 eflags: explicit discovery connections, duplicate discovery information 00:17:11.649 rdma_prtype: not specified 00:17:11.649 rdma_qptype: connected 00:17:11.649 rdma_cms: rdma-cm 00:17:11.649 rdma_pkey: 0x0000 00:17:11.649 =====Discovery Log Entry 1====== 00:17:11.649 trtype: rdma 
00:17:11.649 adrfam: ipv4 00:17:11.649 subtype: nvme subsystem 00:17:11.649 treq: not required 00:17:11.649 portid: 0 00:17:11.649 trsvcid: 4420 00:17:11.649 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:11.649 traddr: 192.168.100.8 00:17:11.649 eflags: none 00:17:11.649 rdma_prtype: not specified 00:17:11.649 rdma_qptype: connected 00:17:11.649 rdma_cms: rdma-cm 00:17:11.649 rdma_pkey: 0x0000 00:17:11.649 11:40:40 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:11.649 11:40:40 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:11.649 11:40:40 -- nvmf/common.sh@510 -- # local dev _ 00:17:11.649 11:40:40 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:11.649 11:40:40 -- nvmf/common.sh@509 -- # nvme list 00:17:11.649 11:40:40 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:17:11.649 11:40:40 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:11.649 11:40:40 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:17:11.649 11:40:40 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:11.649 11:40:40 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:11.649 11:40:40 -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:12.579 11:40:41 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:12.579 11:40:41 -- common/autotest_common.sh@1177 -- # local i=0 00:17:12.579 11:40:41 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:17:12.579 11:40:41 -- common/autotest_common.sh@1179 -- # [[ -n 2 ]] 00:17:12.579 11:40:41 -- common/autotest_common.sh@1180 -- # nvme_device_counter=2 00:17:12.579 11:40:41 -- common/autotest_common.sh@1184 -- # sleep 2 00:17:14.503 11:40:43 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:17:14.503 11:40:43 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:17:14.503 11:40:43 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:17:14.503 11:40:43 -- common/autotest_common.sh@1186 -- # nvme_devices=2 00:17:14.503 11:40:43 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:17:14.503 11:40:43 -- common/autotest_common.sh@1187 -- # return 0 00:17:14.503 11:40:43 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:14.503 11:40:43 -- nvmf/common.sh@510 -- # local dev _ 00:17:14.503 11:40:43 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:14.503 11:40:43 -- nvmf/common.sh@509 -- # nvme list 00:17:14.503 11:40:43 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:17:14.503 11:40:43 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:14.503 11:40:43 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:17:14.503 11:40:43 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:14.503 11:40:43 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:14.503 11:40:43 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:17:14.503 11:40:43 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:14.503 11:40:43 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:14.503 11:40:43 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:17:14.503 11:40:43 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:14.503 11:40:43 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:17:14.503 /dev/nvme0n1 ]] 00:17:14.503 11:40:43 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:14.503 11:40:43 -- target/nvme_cli.sh@59 -- # get_nvme_devs 
00:17:14.503 11:40:43 -- nvmf/common.sh@510 -- # local dev _ 00:17:14.503 11:40:43 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:14.503 11:40:43 -- nvmf/common.sh@509 -- # nvme list 00:17:14.503 11:40:43 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:17:14.503 11:40:43 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:14.503 11:40:43 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:17:14.503 11:40:43 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:14.503 11:40:43 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:14.503 11:40:43 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:17:14.503 11:40:43 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:14.503 11:40:43 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:14.503 11:40:43 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:17:14.503 11:40:43 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:14.503 11:40:43 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:14.503 11:40:43 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:15.434 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:15.434 11:40:44 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:15.434 11:40:44 -- common/autotest_common.sh@1198 -- # local i=0 00:17:15.434 11:40:44 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:17:15.434 11:40:44 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:15.434 11:40:44 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:17:15.434 11:40:44 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:15.691 11:40:44 -- common/autotest_common.sh@1210 -- # return 0 00:17:15.691 11:40:44 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:15.691 11:40:44 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:15.691 11:40:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:15.691 11:40:44 -- common/autotest_common.sh@10 -- # set +x 00:17:15.691 11:40:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:15.691 11:40:44 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:15.691 11:40:44 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:15.691 11:40:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:15.691 11:40:44 -- nvmf/common.sh@116 -- # sync 00:17:15.691 11:40:44 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:15.691 11:40:44 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:15.691 11:40:44 -- nvmf/common.sh@119 -- # set +e 00:17:15.691 11:40:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:15.691 11:40:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:15.691 rmmod nvme_rdma 00:17:15.691 rmmod nvme_fabrics 00:17:15.691 11:40:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:15.691 11:40:44 -- nvmf/common.sh@123 -- # set -e 00:17:15.691 11:40:44 -- nvmf/common.sh@124 -- # return 0 00:17:15.691 11:40:44 -- nvmf/common.sh@477 -- # '[' -n 2335460 ']' 00:17:15.691 11:40:44 -- nvmf/common.sh@478 -- # killprocess 2335460 00:17:15.691 11:40:44 -- common/autotest_common.sh@926 -- # '[' -z 2335460 ']' 00:17:15.691 11:40:44 -- common/autotest_common.sh@930 -- # kill -0 2335460 00:17:15.691 11:40:44 -- common/autotest_common.sh@931 -- # uname 00:17:15.691 11:40:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:15.691 11:40:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2335460 00:17:15.691 11:40:44 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:15.691 11:40:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:15.691 11:40:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2335460' 00:17:15.691 killing process with pid 2335460 00:17:15.691 11:40:44 -- common/autotest_common.sh@945 -- # kill 2335460 00:17:15.691 11:40:44 -- common/autotest_common.sh@950 -- # wait 2335460 00:17:15.949 11:40:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:15.949 11:40:45 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:15.949 00:17:15.949 real 0m14.393s 00:17:15.949 user 0m24.287s 00:17:15.949 sys 0m7.165s 00:17:15.949 11:40:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:15.949 11:40:45 -- common/autotest_common.sh@10 -- # set +x 00:17:15.949 ************************************ 00:17:15.949 END TEST nvmf_nvme_cli 00:17:15.949 ************************************ 00:17:15.949 11:40:45 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:17:15.949 11:40:45 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:17:15.949 11:40:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:15.949 11:40:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:15.949 11:40:45 -- common/autotest_common.sh@10 -- # set +x 00:17:15.949 ************************************ 00:17:15.949 START TEST nvmf_host_management 00:17:15.949 ************************************ 00:17:15.949 11:40:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:17:16.206 * Looking for test storage... 00:17:16.206 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:16.206 11:40:45 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:16.206 11:40:45 -- nvmf/common.sh@7 -- # uname -s 00:17:16.206 11:40:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:16.206 11:40:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:16.206 11:40:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:16.206 11:40:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:16.206 11:40:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:16.206 11:40:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:16.206 11:40:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:16.206 11:40:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:16.206 11:40:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:16.206 11:40:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:16.206 11:40:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:16.206 11:40:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:16.206 11:40:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:16.206 11:40:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:16.206 11:40:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:16.206 11:40:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:16.206 11:40:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:16.206 11:40:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:16.206 11:40:45 -- 
scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:16.207 11:40:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.207 11:40:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.207 11:40:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.207 11:40:45 -- paths/export.sh@5 -- # export PATH 00:17:16.207 11:40:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.207 11:40:45 -- nvmf/common.sh@46 -- # : 0 00:17:16.207 11:40:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:16.207 11:40:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:16.207 11:40:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:16.207 11:40:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:16.207 11:40:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:16.207 11:40:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:16.207 11:40:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:16.207 11:40:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:16.207 11:40:45 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:16.207 11:40:45 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:16.207 11:40:45 -- target/host_management.sh@104 -- # nvmftestinit 00:17:16.207 11:40:45 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:16.207 11:40:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:16.207 11:40:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:16.207 11:40:45 -- nvmf/common.sh@398 -- # local -g 
is_hw=no 00:17:16.207 11:40:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:16.207 11:40:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.207 11:40:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:16.207 11:40:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.207 11:40:45 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:16.207 11:40:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:16.207 11:40:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:16.207 11:40:45 -- common/autotest_common.sh@10 -- # set +x 00:17:24.309 11:40:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:24.309 11:40:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:24.309 11:40:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:24.309 11:40:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:24.309 11:40:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:24.309 11:40:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:24.309 11:40:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:24.309 11:40:53 -- nvmf/common.sh@294 -- # net_devs=() 00:17:24.309 11:40:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:24.309 11:40:53 -- nvmf/common.sh@295 -- # e810=() 00:17:24.309 11:40:53 -- nvmf/common.sh@295 -- # local -ga e810 00:17:24.309 11:40:53 -- nvmf/common.sh@296 -- # x722=() 00:17:24.309 11:40:53 -- nvmf/common.sh@296 -- # local -ga x722 00:17:24.309 11:40:53 -- nvmf/common.sh@297 -- # mlx=() 00:17:24.309 11:40:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:24.309 11:40:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:24.309 11:40:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:24.309 11:40:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:24.309 11:40:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:24.309 11:40:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:24.309 11:40:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:24.309 11:40:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:24.309 11:40:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:24.309 11:40:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:24.309 11:40:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:24.309 11:40:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:24.309 11:40:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:24.309 11:40:53 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:24.309 11:40:53 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:24.309 11:40:53 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:24.309 11:40:53 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:17:24.309 11:40:53 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:24.309 11:40:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:24.309 11:40:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:24.309 11:40:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:24.309 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:24.309 11:40:53 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:24.309 11:40:53 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:24.309 11:40:53 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 
00:17:24.309 11:40:53 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:24.309 11:40:53 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:24.309 11:40:53 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:24.309 11:40:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:24.309 11:40:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:24.309 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:24.309 11:40:53 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:24.309 11:40:53 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:24.309 11:40:53 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:24.309 11:40:53 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:24.309 11:40:53 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:24.309 11:40:53 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:24.309 11:40:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:24.309 11:40:53 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:24.309 11:40:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:24.309 11:40:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:24.309 11:40:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:24.309 11:40:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:24.309 11:40:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:24.309 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:24.309 11:40:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:24.309 11:40:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:24.309 11:40:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:24.309 11:40:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:24.309 11:40:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:24.309 11:40:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:24.309 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:24.309 11:40:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:24.309 11:40:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:24.309 11:40:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:24.309 11:40:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:24.309 11:40:53 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:24.309 11:40:53 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:24.309 11:40:53 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:24.309 11:40:53 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:24.309 11:40:53 -- nvmf/common.sh@57 -- # uname 00:17:24.309 11:40:53 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:24.309 11:40:53 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:24.309 11:40:53 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:24.309 11:40:53 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:24.309 11:40:53 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:24.309 11:40:53 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:24.309 11:40:53 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:24.309 11:40:53 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:24.309 11:40:53 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:24.309 11:40:53 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:24.309 11:40:53 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:24.309 11:40:53 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:24.309 11:40:53 -- 
nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:24.309 11:40:53 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:24.309 11:40:53 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:24.309 11:40:53 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:24.309 11:40:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:24.309 11:40:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:24.309 11:40:53 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:24.309 11:40:53 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:24.309 11:40:53 -- nvmf/common.sh@104 -- # continue 2 00:17:24.309 11:40:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:24.309 11:40:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:24.309 11:40:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:24.309 11:40:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:24.309 11:40:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:24.309 11:40:53 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:24.309 11:40:53 -- nvmf/common.sh@104 -- # continue 2 00:17:24.309 11:40:53 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:24.309 11:40:53 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:24.309 11:40:53 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:24.309 11:40:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:24.309 11:40:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:24.309 11:40:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:24.309 11:40:53 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:24.309 11:40:53 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:24.309 11:40:53 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:24.309 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:24.309 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:24.309 altname enp217s0f0np0 00:17:24.309 altname ens818f0np0 00:17:24.309 inet 192.168.100.8/24 scope global mlx_0_0 00:17:24.309 valid_lft forever preferred_lft forever 00:17:24.309 11:40:53 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:24.309 11:40:53 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:24.309 11:40:53 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:24.309 11:40:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:24.309 11:40:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:24.309 11:40:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:24.309 11:40:53 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:24.309 11:40:53 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:24.309 11:40:53 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:17:24.309 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:24.309 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:24.309 altname enp217s0f1np1 00:17:24.309 altname ens818f1np1 00:17:24.309 inet 192.168.100.9/24 scope global mlx_0_1 00:17:24.309 valid_lft forever preferred_lft forever 00:17:24.309 11:40:53 -- nvmf/common.sh@410 -- # return 0 00:17:24.309 11:40:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:24.309 11:40:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:24.309 11:40:53 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:24.309 11:40:53 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:24.309 11:40:53 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:24.309 11:40:53 -- 
nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:24.309 11:40:53 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:24.309 11:40:53 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:24.309 11:40:53 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:24.309 11:40:53 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:24.309 11:40:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:24.309 11:40:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:24.309 11:40:53 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:24.309 11:40:53 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:24.309 11:40:53 -- nvmf/common.sh@104 -- # continue 2 00:17:24.309 11:40:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:24.309 11:40:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:24.309 11:40:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:24.309 11:40:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:24.309 11:40:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:24.309 11:40:53 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:24.309 11:40:53 -- nvmf/common.sh@104 -- # continue 2 00:17:24.309 11:40:53 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:24.309 11:40:53 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:24.309 11:40:53 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:24.309 11:40:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:24.309 11:40:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:24.309 11:40:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:24.309 11:40:53 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:24.309 11:40:53 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:24.309 11:40:53 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:24.309 11:40:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:24.309 11:40:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:24.309 11:40:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:24.309 11:40:53 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:24.309 192.168.100.9' 00:17:24.309 11:40:53 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:24.309 192.168.100.9' 00:17:24.309 11:40:53 -- nvmf/common.sh@445 -- # head -n 1 00:17:24.309 11:40:53 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:24.309 11:40:53 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:24.309 192.168.100.9' 00:17:24.309 11:40:53 -- nvmf/common.sh@446 -- # tail -n +2 00:17:24.309 11:40:53 -- nvmf/common.sh@446 -- # head -n 1 00:17:24.309 11:40:53 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:24.309 11:40:53 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:24.309 11:40:53 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:24.309 11:40:53 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:24.309 11:40:53 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:24.309 11:40:53 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:24.309 11:40:53 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:17:24.309 11:40:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:24.309 11:40:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:24.309 11:40:53 -- common/autotest_common.sh@10 -- # set +x 00:17:24.309 
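
Address discovery bottoms out in two one-liners that the trace repeats per interface: get_ip_address parses `ip -o -4 addr show`, and the first/second target IPs are peeled off RDMA_IP_LIST with head/tail. Condensed directly from the xtrace above:

    ip=$(ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1)        # 192.168.100.8
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9
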
************************************ 00:17:24.309 START TEST nvmf_host_management 00:17:24.309 ************************************ 00:17:24.309 11:40:53 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:17:24.309 11:40:53 -- target/host_management.sh@69 -- # starttarget 00:17:24.309 11:40:53 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:17:24.309 11:40:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:24.309 11:40:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:24.309 11:40:53 -- common/autotest_common.sh@10 -- # set +x 00:17:24.309 11:40:53 -- nvmf/common.sh@469 -- # nvmfpid=2340407 00:17:24.309 11:40:53 -- nvmf/common.sh@470 -- # waitforlisten 2340407 00:17:24.309 11:40:53 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:24.309 11:40:53 -- common/autotest_common.sh@819 -- # '[' -z 2340407 ']' 00:17:24.309 11:40:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.309 11:40:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:24.309 11:40:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.309 11:40:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:24.309 11:40:53 -- common/autotest_common.sh@10 -- # set +x 00:17:24.309 [2024-07-21 11:40:53.546412] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:24.309 [2024-07-21 11:40:53.546469] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:24.309 EAL: No free 2048 kB hugepages reported on node 1 00:17:24.309 [2024-07-21 11:40:53.634059] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:24.309 [2024-07-21 11:40:53.672499] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:24.309 [2024-07-21 11:40:53.672611] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:24.309 [2024-07-21 11:40:53.672622] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:24.309 [2024-07-21 11:40:53.672636] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
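
nvmfappstart hands nvmf_tgt the core mask 0x1E (binary 11110, i.e. cores 1-4), which matches the four reactor threads that come up next and leaves core 0 free for the bdevperf initiator later in the test. waitforlisten then blocks until the target answers on /var/tmp/spdk.sock; a minimal sketch of that wait, assuming scripts/rpc.py is on PATH (the repo helper additionally checks the PID and caps its retries):

    # poll the RPC socket until the target is up (sketch, not the verbatim waitforlisten)
    until scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
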
00:17:24.309 [2024-07-21 11:40:53.672681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:24.309 [2024-07-21 11:40:53.672764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:24.309 [2024-07-21 11:40:53.672855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:24.309 [2024-07-21 11:40:53.672856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:25.280 11:40:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:25.280 11:40:54 -- common/autotest_common.sh@852 -- # return 0 00:17:25.280 11:40:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:25.280 11:40:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:25.280 11:40:54 -- common/autotest_common.sh@10 -- # set +x 00:17:25.280 11:40:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:25.280 11:40:54 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:25.280 11:40:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:25.280 11:40:54 -- common/autotest_common.sh@10 -- # set +x 00:17:25.280 [2024-07-21 11:40:54.416070] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xae27a0/0xae6c90) succeed. 00:17:25.280 [2024-07-21 11:40:54.426221] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xae3d90/0xb28320) succeed. 00:17:25.280 11:40:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:25.280 11:40:54 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:17:25.280 11:40:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:25.280 11:40:54 -- common/autotest_common.sh@10 -- # set +x 00:17:25.280 11:40:54 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:25.280 11:40:54 -- target/host_management.sh@23 -- # cat 00:17:25.280 11:40:54 -- target/host_management.sh@30 -- # rpc_cmd 00:17:25.280 11:40:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:25.280 11:40:54 -- common/autotest_common.sh@10 -- # set +x 00:17:25.280 Malloc0 00:17:25.280 [2024-07-21 11:40:54.604976] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:25.280 11:40:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:25.280 11:40:54 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:17:25.280 11:40:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:25.280 11:40:54 -- common/autotest_common.sh@10 -- # set +x 00:17:25.280 11:40:54 -- target/host_management.sh@73 -- # perfpid=2340663 00:17:25.280 11:40:54 -- target/host_management.sh@74 -- # waitforlisten 2340663 /var/tmp/bdevperf.sock 00:17:25.280 11:40:54 -- common/autotest_common.sh@819 -- # '[' -z 2340663 ']' 00:17:25.280 11:40:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:25.280 11:40:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:25.280 11:40:54 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:25.280 11:40:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:25.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
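
The target side is assembled over RPC: an rdma transport with an 8192-byte I/O unit and 1024 shared buffers, a 64 MiB / 512 B-block Malloc bdev (MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE from the top of host_management.sh), and a listener on 192.168.100.8:4420. The generated rpcs.txt itself is not echoed in the trace, so the following is only a plausible reconstruction using current SPDK RPC names, not the file's verbatim contents:

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420

The add_host line is inferred from the remove_host/add_host toggling later in the test; the serial number is the NVMF_SERIAL default visible where common.sh is re-sourced further down.
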
00:17:25.280 11:40:54 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:17:25.280 11:40:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:25.280 11:40:54 -- nvmf/common.sh@520 -- # config=() 00:17:25.280 11:40:54 -- common/autotest_common.sh@10 -- # set +x 00:17:25.280 11:40:54 -- nvmf/common.sh@520 -- # local subsystem config 00:17:25.280 11:40:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:25.280 11:40:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:25.280 { 00:17:25.280 "params": { 00:17:25.280 "name": "Nvme$subsystem", 00:17:25.280 "trtype": "$TEST_TRANSPORT", 00:17:25.280 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:25.280 "adrfam": "ipv4", 00:17:25.280 "trsvcid": "$NVMF_PORT", 00:17:25.280 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:25.280 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:25.280 "hdgst": ${hdgst:-false}, 00:17:25.280 "ddgst": ${ddgst:-false} 00:17:25.280 }, 00:17:25.280 "method": "bdev_nvme_attach_controller" 00:17:25.280 } 00:17:25.280 EOF 00:17:25.280 )") 00:17:25.280 11:40:54 -- nvmf/common.sh@542 -- # cat 00:17:25.280 11:40:54 -- nvmf/common.sh@544 -- # jq . 00:17:25.280 11:40:54 -- nvmf/common.sh@545 -- # IFS=, 00:17:25.280 11:40:54 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:25.280 "params": { 00:17:25.280 "name": "Nvme0", 00:17:25.280 "trtype": "rdma", 00:17:25.280 "traddr": "192.168.100.8", 00:17:25.280 "adrfam": "ipv4", 00:17:25.280 "trsvcid": "4420", 00:17:25.280 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:25.280 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:25.280 "hdgst": false, 00:17:25.280 "ddgst": false 00:17:25.280 }, 00:17:25.280 "method": "bdev_nvme_attach_controller" 00:17:25.280 }' 00:17:25.537 [2024-07-21 11:40:54.704454] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:25.537 [2024-07-21 11:40:54.704504] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2340663 ] 00:17:25.537 EAL: No free 2048 kB hugepages reported on node 1 00:17:25.537 [2024-07-21 11:40:54.792660] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.537 [2024-07-21 11:40:54.829197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.794 Running I/O for 10 seconds... 
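
bdevperf does not attach its controller over RPC here; it takes a whole JSON config on a pipe. gen_nvmf_target_json renders the bdev_nvme_attach_controller params block printed above (note hostnqn nqn.2016-06.io.spdk:host0, which is exactly what the target will later deauthorize), and the harness feeds it in via process substitution, as the later "Killed" line shows verbatim:

    build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10

The printed fragment is wrapped in the usual {"subsystems":[{"subsystem":"bdev","config":[...]}]} envelope before reaching --json; that envelope is implied by the helper rather than shown in the trace.
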
00:17:26.359 11:40:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:26.359 11:40:55 -- common/autotest_common.sh@852 -- # return 0 00:17:26.359 11:40:55 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:26.359 11:40:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:26.359 11:40:55 -- common/autotest_common.sh@10 -- # set +x 00:17:26.359 11:40:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:26.359 11:40:55 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:26.359 11:40:55 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:17:26.359 11:40:55 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:26.359 11:40:55 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:17:26.359 11:40:55 -- target/host_management.sh@52 -- # local ret=1 00:17:26.359 11:40:55 -- target/host_management.sh@53 -- # local i 00:17:26.359 11:40:55 -- target/host_management.sh@54 -- # (( i = 10 )) 00:17:26.359 11:40:55 -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:26.359 11:40:55 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:26.359 11:40:55 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:26.359 11:40:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:26.359 11:40:55 -- common/autotest_common.sh@10 -- # set +x 00:17:26.359 11:40:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:26.359 11:40:55 -- target/host_management.sh@55 -- # read_io_count=3037 00:17:26.359 11:40:55 -- target/host_management.sh@58 -- # '[' 3037 -ge 100 ']' 00:17:26.359 11:40:55 -- target/host_management.sh@59 -- # ret=0 00:17:26.359 11:40:55 -- target/host_management.sh@60 -- # break 00:17:26.359 11:40:55 -- target/host_management.sh@64 -- # return 0 00:17:26.359 11:40:55 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:26.359 11:40:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:26.359 11:40:55 -- common/autotest_common.sh@10 -- # set +x 00:17:26.359 11:40:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:26.359 11:40:55 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:26.359 11:40:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:26.359 11:40:55 -- common/autotest_common.sh@10 -- # set +x 00:17:26.359 11:40:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:26.359 11:40:55 -- target/host_management.sh@87 -- # sleep 1 00:17:27.292 [2024-07-21 11:40:56.595563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001381a300 len:0x10000 key:0x182400 00:17:27.292 [2024-07-21 11:40:56.595600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.292 [2024-07-21 11:40:56.595618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e3fa00 len:0x10000 key:0x182500 00:17:27.292 [2024-07-21 11:40:56.595632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.292 
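
waitforio is the synchronization point: it polls bdevperf's RPC socket until Nvme0n1 has completed at least 100 reads, proving I/O is genuinely in flight, and only then does the test remove host0 from cnode0's allowed hosts and immediately re-add it, forcing the target to tear down the live connection. The loop traced above reduces to this sketch (assuming rpc.py and jq on PATH; here it broke out on the first probe with read_io_count=3037):

    i=10 ret=1
    while (( i != 0 )); do
        n=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
            | jq -r '.bdevs[0].num_read_ops')
        [ "$n" -ge 100 ] && { ret=0; break; }
        sleep 1
        (( i-- ))
    done
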
[2024-07-21 11:40:56.595644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190cfe80 len:0x10000 key:0x182600 00:17:27.292 [2024-07-21 11:40:56.595657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.292 [2024-07-21 11:40:56.595668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa780 len:0x10000 key:0x182400 00:17:27.292 [2024-07-21 11:40:56.595678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.292 [2024-07-21 11:40:56.595689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190dff00 len:0x10000 key:0x182600 00:17:27.292 [2024-07-21 11:40:56.595698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.292 [2024-07-21 11:40:56.595709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a4b040 len:0x10000 key:0x182000 00:17:27.292 [2024-07-21 11:40:56.595718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.292 [2024-07-21 11:40:56.595729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192c1e00 len:0x10000 key:0x182700 00:17:27.292 [2024-07-21 11:40:56.595738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.292 [2024-07-21 11:40:56.595749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e9fd00 len:0x10000 key:0x182500 00:17:27.292 [2024-07-21 11:40:56.595758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.292 [2024-07-21 11:40:56.595769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190eff80 len:0x10000 key:0x182600 00:17:27.292 [2024-07-21 11:40:56.595778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.292 [2024-07-21 11:40:56.595789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001901f900 len:0x10000 key:0x182600 00:17:27.292 [2024-07-21 11:40:56.595798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.292 [2024-07-21 11:40:56.595808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001903fa00 len:0x10000 key:0x182600 00:17:27.292 [2024-07-21 11:40:56.595818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.292 [2024-07-21 11:40:56.595828] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ea980 len:0x10000 key:0x182400 00:17:27.292 [2024-07-21 11:40:56.595837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.292 [2024-07-21 11:40:56.595848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001908fc80 len:0x10000 key:0x182600 00:17:27.292 [2024-07-21 11:40:56.595857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.292 [2024-07-21 11:40:56.595868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192a1d00 len:0x10000 key:0x182700 00:17:27.292 [2024-07-21 11:40:56.595879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.292 [2024-07-21 11:40:56.595890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca880 len:0x10000 key:0x182400 00:17:27.292 [2024-07-21 11:40:56.595899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.292 [2024-07-21 11:40:56.595910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019211880 len:0x10000 key:0x182700 00:17:27.292 [2024-07-21 11:40:56.595919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.292 [2024-07-21 11:40:56.595930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001904fa80 len:0x10000 key:0x182600 00:17:27.292 [2024-07-21 11:40:56.595939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.292 [2024-07-21 11:40:56.595950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a1aec0 len:0x10000 key:0x182000 00:17:27.292 [2024-07-21 11:40:56.595959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.292 [2024-07-21 11:40:56.595970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001380a280 len:0x10000 key:0x182400 00:17:27.292 [2024-07-21 11:40:56.595979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.292 [2024-07-21 11:40:56.595990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001387a600 len:0x10000 key:0x182400 00:17:27.292 [2024-07-21 11:40:56.595999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.292 [2024-07-21 11:40:56.596010] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e7fc00 len:0x10000 key:0x182500 00:17:27.292 [2024-07-21 11:40:56.596020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.292 [2024-07-21 11:40:56.596030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e0f880 len:0x10000 key:0x182500 00:17:27.292 [2024-07-21 11:40:56.596039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.292 [2024-07-21 11:40:56.596049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019221900 len:0x10000 key:0x182700 00:17:27.292 [2024-07-21 11:40:56.596059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.292 [2024-07-21 11:40:56.596069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019201800 len:0x10000 key:0x182700 00:17:27.292 [2024-07-21 11:40:56.596078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.292 [2024-07-21 11:40:56.596089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eeff80 len:0x10000 key:0x182500 00:17:27.293 [2024-07-21 11:40:56.596100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.293 [2024-07-21 11:40:56.596110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001388a680 len:0x10000 key:0x182400 00:17:27.293 [2024-07-21 11:40:56.596119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.293 [2024-07-21 11:40:56.596130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192d1e80 len:0x10000 key:0x182700 00:17:27.293 [2024-07-21 11:40:56.596139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.293 [2024-07-21 11:40:56.596150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019281c00 len:0x10000 key:0x182700 00:17:27.293 [2024-07-21 11:40:56.596159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.293 [2024-07-21 11:40:56.596172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c336000 len:0x10000 key:0x182300 00:17:27.293 [2024-07-21 11:40:56.596181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.293 [2024-07-21 11:40:56.596191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:31 nsid:1 lba:20992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c357000 len:0x10000 key:0x182300 00:17:27.293 [2024-07-21 11:40:56.596201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.293 [2024-07-21 11:40:56.596212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c378000 len:0x10000 key:0x182300 00:17:27.293 [2024-07-21 11:40:56.596221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.293 [2024-07-21 11:40:56.596231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c399000 len:0x10000 key:0x182300 00:17:27.293 [2024-07-21 11:40:56.596240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.293 [2024-07-21 11:40:56.596251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3ba000 len:0x10000 key:0x182300 00:17:27.293 [2024-07-21 11:40:56.596260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.293 [2024-07-21 11:40:56.596271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3db000 len:0x10000 key:0x182300 00:17:27.293 [2024-07-21 11:40:56.596280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.293 [2024-07-21 11:40:56.596292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3fc000 len:0x10000 key:0x182300 00:17:27.293 [2024-07-21 11:40:56.596301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.293 [2024-07-21 11:40:56.596312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bded000 len:0x10000 key:0x182300 00:17:27.293 [2024-07-21 11:40:56.596321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.293 [2024-07-21 11:40:56.596334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf58000 len:0x10000 key:0x182300 00:17:27.293 [2024-07-21 11:40:56.596344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.293 [2024-07-21 11:40:56.596354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf37000 len:0x10000 key:0x182300 00:17:27.293 [2024-07-21 11:40:56.596363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.293 [2024-07-21 11:40:56.596374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25216 len:128 
SGL KEYED DATA BLOCK ADDRESS 0x20000bf16000 len:0x10000 key:0x182300 00:17:27.293 [2024-07-21 11:40:56.596384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.293 [2024-07-21 11:40:56.596395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bef5000 len:0x10000 key:0x182300 00:17:27.293 [2024-07-21 11:40:56.596404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.293 [2024-07-21 11:40:56.596415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bed4000 len:0x10000 key:0x182300 00:17:27.293 [2024-07-21 11:40:56.596425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.293 [2024-07-21 11:40:56.596435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000beb3000 len:0x10000 key:0x182300 00:17:27.293 [2024-07-21 11:40:56.596445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.293 [2024-07-21 11:40:56.596456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be92000 len:0x10000 key:0x182300 00:17:27.293 [2024-07-21 11:40:56.596465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.293 [2024-07-21 11:40:56.596475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be71000 len:0x10000 key:0x182300 00:17:27.293 [2024-07-21 11:40:56.596484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.293 [2024-07-21 11:40:56.596496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000baf6000 len:0x10000 key:0x182300 00:17:27.293 [2024-07-21 11:40:56.596505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.293 [2024-07-21 11:40:56.596516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001905fb00 len:0x10000 key:0x182600 00:17:27.293 [2024-07-21 11:40:56.596525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.293 [2024-07-21 11:40:56.596536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019291c80 len:0x10000 key:0x182700 00:17:27.293 [2024-07-21 11:40:56.596545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.293 [2024-07-21 11:40:56.596556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192b1d80 
len:0x10000 key:0x182700 00:17:27.293 [2024-07-21 11:40:56.596568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.293 [2024-07-21 11:40:56.596578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ecfe80 len:0x10000 key:0x182500 00:17:27.293 [2024-07-21 11:40:56.596587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.293 [2024-07-21 11:40:56.596598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019231980 len:0x10000 key:0x182700 00:17:27.293 [2024-07-21 11:40:56.596607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.293 [2024-07-21 11:40:56.596618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001907fc00 len:0x10000 key:0x182600 00:17:27.293 [2024-07-21 11:40:56.596630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.293 [2024-07-21 11:40:56.596642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190bfe00 len:0x10000 key:0x182600 00:17:27.293 [2024-07-21 11:40:56.596652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.293 [2024-07-21 11:40:56.596663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e1f900 len:0x10000 key:0x182500 00:17:27.293 [2024-07-21 11:40:56.596672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.293 [2024-07-21 11:40:56.596683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019251a80 len:0x10000 key:0x182700 00:17:27.293 [2024-07-21 11:40:56.596692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.293 [2024-07-21 11:40:56.596703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001909fd00 len:0x10000 key:0x182600 00:17:27.293 [2024-07-21 11:40:56.596712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.293 [2024-07-21 11:40:56.596722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019241a00 len:0x10000 key:0x182700 00:17:27.293 [2024-07-21 11:40:56.596731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.293 [2024-07-21 11:40:56.596742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019271b80 len:0x10000 key:0x182700 00:17:27.293 
[2024-07-21 11:40:56.596751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.293 [2024-07-21 11:40:56.596762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e5fb00 len:0x10000 key:0x182500 00:17:27.293 [2024-07-21 11:40:56.596771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.293 [2024-07-21 11:40:56.596782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019261b00 len:0x10000 key:0x182700 00:17:27.293 [2024-07-21 11:40:56.596792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.293 [2024-07-21 11:40:56.596803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001389a700 len:0x10000 key:0x182400 00:17:27.293 [2024-07-21 11:40:56.596812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.293 [2024-07-21 11:40:56.596823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e2f980 len:0x10000 key:0x182500 00:17:27.293 [2024-07-21 11:40:56.596832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.293 [2024-07-21 11:40:56.596843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e4fa80 len:0x10000 key:0x182500 00:17:27.293 [2024-07-21 11:40:56.596851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.293 [2024-07-21 11:40:56.596862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e6fb80 len:0x10000 key:0x182500 00:17:27.293 [2024-07-21 11:40:56.596871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.293 [2024-07-21 11:40:56.596881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a2af40 len:0x10000 key:0x182000 00:17:27.294 [2024-07-21 11:40:56.596892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c3b2000 sqhd:5310 p:0 m:0 dnr:0 00:17:27.294 [2024-07-21 11:40:56.598715] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192015c0 was disconnected and freed. reset controller. 
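
The NOTICE flood above is the designed outcome, not a malfunction: pulling host0's authorization makes the target delete the submission queue, so every queued READ/WRITE completes with generic status 00/08, which is NVMe's "Command Aborted due to SQ Deletion" (the 00/08 pair in each completion is status-code-type/status-code), after which bdev_nvme frees the disconnected qpair and schedules a controller reset. When triaging such a capture it helps to tally the aborts by opcode, for example (bdevperf.log is a hypothetical name for a saved copy of this output):

    grep -o 'NOTICE\*: \(READ\|WRITE\) sqid' bdevperf.log | sort | uniq -c
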
00:17:27.294 [2024-07-21 11:40:56.599589] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:17:27.294 task offset: 27008 on job bdev=Nvme0n1 fails
00:17:27.294
00:17:27.294 Latency(us)
00:17:27.294 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:27.294 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:27.294 Job: Nvme0n1 ended in about 1.60 seconds with error
00:17:27.294 Verification LBA range: start 0x0 length 0x400
00:17:27.294 Nvme0n1 : 1.60 2052.80 128.30 40.02 0.00 30386.56 3303.01 1020054.73
00:17:27.294 ===================================================================================================================
00:17:27.294 Total : 2052.80 128.30 40.02 0.00 30386.56 3303.01 1020054.73
00:17:27.294 [2024-07-21 11:40:56.601255] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:17:27.294 11:40:56 -- target/host_management.sh@91 -- # kill -9 2340663
00:17:27.294 11:40:56 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:17:27.294 11:40:56 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:17:27.294 11:40:56 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:17:27.294 11:40:56 -- nvmf/common.sh@520 -- # config=()
00:17:27.294 11:40:56 -- nvmf/common.sh@520 -- # local subsystem config
00:17:27.294 11:40:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:17:27.294 11:40:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:17:27.294 {
00:17:27.294   "params": {
00:17:27.294     "name": "Nvme$subsystem",
00:17:27.294     "trtype": "$TEST_TRANSPORT",
00:17:27.294     "traddr": "$NVMF_FIRST_TARGET_IP",
00:17:27.294     "adrfam": "ipv4",
00:17:27.294     "trsvcid": "$NVMF_PORT",
00:17:27.294     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:17:27.294     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:17:27.294     "hdgst": ${hdgst:-false},
00:17:27.294     "ddgst": ${ddgst:-false}
00:17:27.294   },
00:17:27.294   "method": "bdev_nvme_attach_controller"
00:17:27.294 }
00:17:27.294 EOF
00:17:27.294 )")
00:17:27.294 11:40:56 -- nvmf/common.sh@542 -- # cat
00:17:27.294 11:40:56 -- nvmf/common.sh@544 -- # jq .
00:17:27.294 11:40:56 -- nvmf/common.sh@545 -- # IFS=,
00:17:27.294 11:40:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{
00:17:27.294   "params": {
00:17:27.294     "name": "Nvme0",
00:17:27.294     "trtype": "rdma",
00:17:27.294     "traddr": "192.168.100.8",
00:17:27.294     "adrfam": "ipv4",
00:17:27.294     "trsvcid": "4420",
00:17:27.294     "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:17:27.294     "hostnqn": "nqn.2016-06.io.spdk:host0",
00:17:27.294     "hdgst": false,
00:17:27.294     "ddgst": false
00:17:27.294   },
00:17:27.294   "method": "bdev_nvme_attach_controller"
00:17:27.294 }'
00:17:27.294 [2024-07-21 11:40:56.658982] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:17:27.294 [2024-07-21 11:40:56.659036] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2340985 ]
00:17:27.550 EAL: No free 2048 kB hugepages reported on node 1
00:17:27.550 [2024-07-21 11:40:56.748258] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:27.550 [2024-07-21 11:40:56.785159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:27.550 Running I/O for 1 seconds...
00:17:28.920
00:17:28.920 Latency(us)
00:17:28.920 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:28.920 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:28.920 Verification LBA range: start 0x0 length 0x400
00:17:28.920 Nvme0n1 : 1.01 5570.54 348.16 0.00 0.00 11314.89 593.10 24641.54
00:17:28.920 ===================================================================================================================
00:17:28.920 Total : 5570.54 348.16 0.00 0.00 11314.89 593.10 24641.54
00:17:28.920 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 2340663 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}"
00:17:28.920 11:40:58 -- target/host_management.sh@101 -- # stoptarget
00:17:28.920 11:40:58 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:17:28.920 11:40:58 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:17:28.920 11:40:58 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:17:28.920 11:40:58 -- target/host_management.sh@40 -- # nvmftestfini
00:17:28.920 11:40:58 -- nvmf/common.sh@476 -- # nvmfcleanup
00:17:28.920 11:40:58 -- nvmf/common.sh@116 -- # sync
00:17:28.920 11:40:58 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']'
00:17:28.920 11:40:58 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']'
00:17:28.920 11:40:58 -- nvmf/common.sh@119 -- # set +e
00:17:28.920 11:40:58 -- nvmf/common.sh@120 -- # for i in {1..20}
00:17:28.920 11:40:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
00:17:28.920 11:40:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:17:28.920 11:40:58 -- nvmf/common.sh@123 -- # set -e
00:17:28.920 11:40:58 -- nvmf/common.sh@124 -- # return 0
00:17:28.920 11:40:58 -- nvmf/common.sh@477 -- # '[' -n 2340407 ']'
00:17:28.920 11:40:58 -- nvmf/common.sh@478 -- # killprocess 2340407
00:17:28.920 11:40:58 -- common/autotest_common.sh@926 -- # '[' -z 2340407 ']'
00:17:28.920 11:40:58 -- common/autotest_common.sh@930 -- # kill -0 2340407
00:17:28.920 11:40:58 -- common/autotest_common.sh@931 -- # uname
00:17:28.920 11:40:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:17:28.920 11:40:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2340407
00:17:28.920 11:40:58 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:17:28.920 11:40:58 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:17:28.920 11:40:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2340407'
killing process with pid 2340407
00:17:28.920 11:40:58 -- common/autotest_common.sh@945 -- # kill 2340407
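
Teardown runs in reverse: nvmfcleanup unloads nvme-rdma and nvme-fabrics (the bare "rmmod" lines are modprobe -r's verbose output), then killprocess stops the target. The helper's shape is fully visible in the xtrace: check the PID is alive, make sure the command name is not the sudo wrapper, signal, then reap. As a condensed sketch (not the verbatim function from autotest_common.sh):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] && kill -0 "$pid" || return 1        # pid set and process alive
        if [ "$(ps --no-headers -o comm= "$pid")" != sudo ]; then
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid"                                        # reap and propagate exit status
    }
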
11:40:58 -- common/autotest_common.sh@950 -- # wait 2340407 00:17:29.178 [2024-07-21 11:40:58.510571] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:17:29.178 11:40:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:29.178 11:40:58 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:29.178 00:17:29.178 real 0m5.040s 00:17:29.178 user 0m22.522s 00:17:29.178 sys 0m1.112s 00:17:29.178 11:40:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:29.178 11:40:58 -- common/autotest_common.sh@10 -- # set +x 00:17:29.178 ************************************ 00:17:29.178 END TEST nvmf_host_management 00:17:29.178 ************************************ 00:17:29.178 11:40:58 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:29.178 00:17:29.178 real 0m13.279s 00:17:29.178 user 0m24.756s 00:17:29.178 sys 0m7.337s 00:17:29.178 11:40:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:29.178 11:40:58 -- common/autotest_common.sh@10 -- # set +x 00:17:29.178 ************************************ 00:17:29.178 END TEST nvmf_host_management 00:17:29.178 ************************************ 00:17:29.437 11:40:58 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:17:29.437 11:40:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:29.437 11:40:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:29.437 11:40:58 -- common/autotest_common.sh@10 -- # set +x 00:17:29.437 ************************************ 00:17:29.437 START TEST nvmf_lvol 00:17:29.437 ************************************ 00:17:29.437 11:40:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:17:29.437 * Looking for test storage... 
00:17:29.437 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:29.437 11:40:58 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:29.437 11:40:58 -- nvmf/common.sh@7 -- # uname -s 00:17:29.437 11:40:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:29.437 11:40:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:29.437 11:40:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:29.437 11:40:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:29.437 11:40:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:29.437 11:40:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:29.437 11:40:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:29.437 11:40:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:29.437 11:40:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:29.437 11:40:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:29.437 11:40:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:29.437 11:40:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:29.437 11:40:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:29.437 11:40:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:29.437 11:40:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:29.437 11:40:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:29.437 11:40:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:29.437 11:40:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:29.437 11:40:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:29.437 11:40:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.437 11:40:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.437 11:40:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.437 11:40:58 -- paths/export.sh@5 -- # export PATH 00:17:29.437 11:40:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.437 11:40:58 -- nvmf/common.sh@46 -- # : 0 00:17:29.437 11:40:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:29.437 11:40:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:29.437 11:40:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:29.437 11:40:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:29.437 11:40:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:29.437 11:40:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:29.437 11:40:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:29.437 11:40:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:29.437 11:40:58 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:29.437 11:40:58 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:29.437 11:40:58 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:17:29.437 11:40:58 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:17:29.437 11:40:58 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:29.437 11:40:58 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:17:29.437 11:40:58 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:29.437 11:40:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:29.437 11:40:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:29.437 11:40:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:29.437 11:40:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:29.437 11:40:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:29.437 11:40:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:29.437 11:40:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:29.437 11:40:58 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:29.437 11:40:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:29.437 11:40:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:29.437 11:40:58 -- common/autotest_common.sh@10 -- # set +x 00:17:37.549 11:41:06 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:37.549 11:41:06 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:37.549 11:41:06 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:37.549 11:41:06 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:37.549 11:41:06 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:37.549 11:41:06 -- 
nvmf/common.sh@292 -- # pci_drivers=() 00:17:37.549 11:41:06 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:37.549 11:41:06 -- nvmf/common.sh@294 -- # net_devs=() 00:17:37.549 11:41:06 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:37.549 11:41:06 -- nvmf/common.sh@295 -- # e810=() 00:17:37.549 11:41:06 -- nvmf/common.sh@295 -- # local -ga e810 00:17:37.549 11:41:06 -- nvmf/common.sh@296 -- # x722=() 00:17:37.549 11:41:06 -- nvmf/common.sh@296 -- # local -ga x722 00:17:37.549 11:41:06 -- nvmf/common.sh@297 -- # mlx=() 00:17:37.549 11:41:06 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:37.549 11:41:06 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:37.549 11:41:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:37.549 11:41:06 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:37.549 11:41:06 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:37.549 11:41:06 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:37.549 11:41:06 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:37.549 11:41:06 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:37.549 11:41:06 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:37.549 11:41:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:37.549 11:41:06 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:37.549 11:41:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:37.549 11:41:06 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:37.549 11:41:06 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:37.549 11:41:06 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:37.549 11:41:06 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:37.549 11:41:06 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:17:37.549 11:41:06 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:37.549 11:41:06 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:37.549 11:41:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:37.549 11:41:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:37.549 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:37.549 11:41:06 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:37.549 11:41:06 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:37.549 11:41:06 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:37.549 11:41:06 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:37.549 11:41:06 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:37.549 11:41:06 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:37.549 11:41:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:37.549 11:41:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:37.549 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:37.549 11:41:06 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:37.549 11:41:06 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:37.549 11:41:06 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:37.549 11:41:06 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:37.549 11:41:06 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:37.549 11:41:06 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:37.549 11:41:06 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:37.549 11:41:06 -- 
nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:37.549 11:41:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:37.549 11:41:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.549 11:41:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:37.549 11:41:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.549 11:41:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:37.549 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:37.549 11:41:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.549 11:41:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:37.549 11:41:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.549 11:41:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:37.549 11:41:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.549 11:41:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:37.549 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:37.550 11:41:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.550 11:41:06 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:37.550 11:41:06 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:37.550 11:41:06 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:37.550 11:41:06 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:37.550 11:41:06 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:37.550 11:41:06 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:37.550 11:41:06 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:37.550 11:41:06 -- nvmf/common.sh@57 -- # uname 00:17:37.550 11:41:06 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:37.550 11:41:06 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:37.550 11:41:06 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:37.550 11:41:06 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:37.550 11:41:06 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:37.550 11:41:06 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:37.550 11:41:06 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:37.550 11:41:06 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:37.550 11:41:06 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:37.550 11:41:06 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:37.550 11:41:06 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:37.550 11:41:06 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:37.550 11:41:06 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:37.550 11:41:06 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:37.550 11:41:06 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:37.550 11:41:06 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:37.550 11:41:06 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:37.550 11:41:06 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:37.550 11:41:06 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:37.550 11:41:06 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:37.550 11:41:06 -- nvmf/common.sh@104 -- # continue 2 00:17:37.550 11:41:06 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:37.550 11:41:06 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:37.550 11:41:06 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:37.550 11:41:06 -- nvmf/common.sh@101 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:17:37.550 11:41:06 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:37.550 11:41:06 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:37.550 11:41:06 -- nvmf/common.sh@104 -- # continue 2 00:17:37.550 11:41:06 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:37.550 11:41:06 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:37.550 11:41:06 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:37.550 11:41:06 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:37.550 11:41:06 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:37.550 11:41:06 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:37.550 11:41:06 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:37.550 11:41:06 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:37.550 11:41:06 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:37.550 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:37.550 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:37.550 altname enp217s0f0np0 00:17:37.550 altname ens818f0np0 00:17:37.550 inet 192.168.100.8/24 scope global mlx_0_0 00:17:37.550 valid_lft forever preferred_lft forever 00:17:37.550 11:41:06 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:37.550 11:41:06 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:37.550 11:41:06 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:37.550 11:41:06 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:37.550 11:41:06 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:37.550 11:41:06 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:37.550 11:41:06 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:37.550 11:41:06 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:37.550 11:41:06 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:17:37.550 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:37.550 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:37.550 altname enp217s0f1np1 00:17:37.550 altname ens818f1np1 00:17:37.550 inet 192.168.100.9/24 scope global mlx_0_1 00:17:37.550 valid_lft forever preferred_lft forever 00:17:37.550 11:41:06 -- nvmf/common.sh@410 -- # return 0 00:17:37.550 11:41:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:37.550 11:41:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:37.550 11:41:06 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:37.550 11:41:06 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:37.550 11:41:06 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:37.550 11:41:06 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:37.550 11:41:06 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:37.808 11:41:06 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:37.808 11:41:06 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:37.808 11:41:06 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:37.808 11:41:06 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:37.808 11:41:06 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:37.808 11:41:06 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:37.808 11:41:06 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:37.808 11:41:06 -- nvmf/common.sh@104 -- # continue 2 00:17:37.808 11:41:06 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:37.808 11:41:06 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:37.808 11:41:06 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\0 ]] 00:17:37.808 11:41:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:37.808 11:41:07 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:37.808 11:41:07 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:37.808 11:41:07 -- nvmf/common.sh@104 -- # continue 2 00:17:37.808 11:41:07 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:37.808 11:41:07 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:37.808 11:41:07 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:37.808 11:41:07 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:37.809 11:41:07 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:37.809 11:41:07 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:37.809 11:41:07 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:37.809 11:41:07 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:37.809 11:41:07 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:37.809 11:41:07 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:37.809 11:41:07 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:37.809 11:41:07 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:37.809 11:41:07 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:37.809 192.168.100.9' 00:17:37.809 11:41:07 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:37.809 192.168.100.9' 00:17:37.809 11:41:07 -- nvmf/common.sh@445 -- # head -n 1 00:17:37.809 11:41:07 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:37.809 11:41:07 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:37.809 192.168.100.9' 00:17:37.809 11:41:07 -- nvmf/common.sh@446 -- # tail -n +2 00:17:37.809 11:41:07 -- nvmf/common.sh@446 -- # head -n 1 00:17:37.809 11:41:07 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:37.809 11:41:07 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:37.809 11:41:07 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:37.809 11:41:07 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:37.809 11:41:07 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:37.809 11:41:07 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:37.809 11:41:07 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:17:37.809 11:41:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:37.809 11:41:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:37.809 11:41:07 -- common/autotest_common.sh@10 -- # set +x 00:17:37.809 11:41:07 -- nvmf/common.sh@469 -- # nvmfpid=2345426 00:17:37.809 11:41:07 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:37.809 11:41:07 -- nvmf/common.sh@470 -- # waitforlisten 2345426 00:17:37.809 11:41:07 -- common/autotest_common.sh@819 -- # '[' -z 2345426 ']' 00:17:37.809 11:41:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.809 11:41:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:37.809 11:41:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.809 11:41:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:37.809 11:41:07 -- common/autotest_common.sh@10 -- # set +x 00:17:37.809 [2024-07-21 11:41:07.124876] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:17:37.809 [2024-07-21 11:41:07.124928] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.809 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.809 [2024-07-21 11:41:07.211229] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:38.067 [2024-07-21 11:41:07.249178] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:38.067 [2024-07-21 11:41:07.249309] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:38.067 [2024-07-21 11:41:07.249319] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:38.067 [2024-07-21 11:41:07.249329] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:38.067 [2024-07-21 11:41:07.249380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.067 [2024-07-21 11:41:07.249406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:38.067 [2024-07-21 11:41:07.249410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.632 11:41:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:38.632 11:41:07 -- common/autotest_common.sh@852 -- # return 0 00:17:38.632 11:41:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:38.632 11:41:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:38.632 11:41:07 -- common/autotest_common.sh@10 -- # set +x 00:17:38.632 11:41:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.632 11:41:07 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:38.889 [2024-07-21 11:41:08.135681] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa809d0/0xa84ec0) succeed. 00:17:38.889 [2024-07-21 11:41:08.145818] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa81f20/0xac6550) succeed. 
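For readers skimming the xtrace output, the nvmf_lvol steps that follow reduce to a short rpc.py sequence against the already-running target. This is a condensed sketch reconstructed from the commands visible in this trace, not the test script itself; the $lvs/$lvol/$snap/$clone variables are placeholders for the UUIDs each RPC prints back:

    rpc=scripts/rpc.py   # shorthand; the trace uses the absolute workspace path

    # The RDMA transport is already created at this point:
    #   nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512                       # -> Malloc0
    $rpc bdev_malloc_create 64 512                       # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)       # lvstore UUID
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)      # 20 MiB volume UUID

    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t rdma -a 192.168.100.8 -s 4420

    # Snapshot/clone exercise while spdk_nvme_perf writes over RDMA:
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30                     # grow live volume to 30 MiB
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"                      # decouple clone from snapshot

Teardown at the end of the test mirrors this in reverse, as the trace shows: nvmf_delete_subsystem, then bdev_lvol_delete, then bdev_lvol_delete_lvstore.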
00:17:38.889 11:41:08 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:39.146 11:41:08 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:17:39.146 11:41:08 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:39.403 11:41:08 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:17:39.403 11:41:08 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:39.403 11:41:08 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:17:39.661 11:41:09 -- target/nvmf_lvol.sh@29 -- # lvs=d9ede84f-652d-450d-8e50-2a1806a5dc5a 00:17:39.661 11:41:09 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d9ede84f-652d-450d-8e50-2a1806a5dc5a lvol 20 00:17:39.918 11:41:09 -- target/nvmf_lvol.sh@32 -- # lvol=a3629e21-797b-43b1-aceb-f345f92e5902 00:17:39.918 11:41:09 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:40.175 11:41:09 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a3629e21-797b-43b1-aceb-f345f92e5902 00:17:40.175 11:41:09 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:17:40.432 [2024-07-21 11:41:09.672466] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:40.432 11:41:09 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:17:40.688 11:41:09 -- target/nvmf_lvol.sh@42 -- # perf_pid=2345998 00:17:40.688 11:41:09 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:17:40.688 11:41:09 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:17:40.688 EAL: No free 2048 kB hugepages reported on node 1 00:17:41.619 11:41:10 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot a3629e21-797b-43b1-aceb-f345f92e5902 MY_SNAPSHOT 00:17:41.877 11:41:11 -- target/nvmf_lvol.sh@47 -- # snapshot=152df82c-6b0f-4ba9-99fd-4815543e56a1 00:17:41.877 11:41:11 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize a3629e21-797b-43b1-aceb-f345f92e5902 30 00:17:41.877 11:41:11 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 152df82c-6b0f-4ba9-99fd-4815543e56a1 MY_CLONE 00:17:42.134 11:41:11 -- target/nvmf_lvol.sh@49 -- # clone=de046b81-de19-4da8-aa64-519d865b70a7 00:17:42.134 11:41:11 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate de046b81-de19-4da8-aa64-519d865b70a7 00:17:42.391 11:41:11 -- target/nvmf_lvol.sh@53 -- # wait 2345998 00:17:52.364 Initializing NVMe Controllers 00:17:52.364 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: 
nqn.2016-06.io.spdk:cnode0 00:17:52.364 Controller IO queue size 128, less than required. 00:17:52.364 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:52.364 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:52.364 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:52.364 Initialization complete. Launching workers. 00:17:52.364 ======================================================== 00:17:52.364 Latency(us) 00:17:52.364 Device Information : IOPS MiB/s Average min max 00:17:52.364 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16910.40 66.06 7571.78 2312.93 43107.80 00:17:52.364 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16880.40 65.94 7584.53 2940.85 36633.76 00:17:52.364 ======================================================== 00:17:52.364 Total : 33790.80 132.00 7578.15 2312.93 43107.80 00:17:52.364 00:17:52.364 11:41:21 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:52.364 11:41:21 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a3629e21-797b-43b1-aceb-f345f92e5902 00:17:52.364 11:41:21 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d9ede84f-652d-450d-8e50-2a1806a5dc5a 00:17:52.638 11:41:21 -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:52.638 11:41:21 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:52.638 11:41:21 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:52.638 11:41:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:52.638 11:41:21 -- nvmf/common.sh@116 -- # sync 00:17:52.638 11:41:21 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:52.638 11:41:21 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:52.638 11:41:21 -- nvmf/common.sh@119 -- # set +e 00:17:52.638 11:41:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:52.638 11:41:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:52.638 rmmod nvme_rdma 00:17:52.638 rmmod nvme_fabrics 00:17:52.638 11:41:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:52.638 11:41:21 -- nvmf/common.sh@123 -- # set -e 00:17:52.638 11:41:21 -- nvmf/common.sh@124 -- # return 0 00:17:52.638 11:41:21 -- nvmf/common.sh@477 -- # '[' -n 2345426 ']' 00:17:52.638 11:41:21 -- nvmf/common.sh@478 -- # killprocess 2345426 00:17:52.638 11:41:21 -- common/autotest_common.sh@926 -- # '[' -z 2345426 ']' 00:17:52.638 11:41:21 -- common/autotest_common.sh@930 -- # kill -0 2345426 00:17:52.638 11:41:21 -- common/autotest_common.sh@931 -- # uname 00:17:52.638 11:41:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:52.638 11:41:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2345426 00:17:52.638 11:41:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:52.638 11:41:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:52.638 11:41:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2345426' 00:17:52.638 killing process with pid 2345426 00:17:52.638 11:41:21 -- common/autotest_common.sh@945 -- # kill 2345426 00:17:52.638 11:41:21 -- common/autotest_common.sh@950 -- # wait 2345426 00:17:52.896 11:41:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:52.896 11:41:22 -- 
nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:52.896 00:17:52.896 real 0m23.537s 00:17:52.896 user 1m11.201s 00:17:52.896 sys 0m7.639s 00:17:52.896 11:41:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:52.896 11:41:22 -- common/autotest_common.sh@10 -- # set +x 00:17:52.896 ************************************ 00:17:52.896 END TEST nvmf_lvol 00:17:52.896 ************************************ 00:17:52.896 11:41:22 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:17:52.896 11:41:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:52.896 11:41:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:52.896 11:41:22 -- common/autotest_common.sh@10 -- # set +x 00:17:52.896 ************************************ 00:17:52.896 START TEST nvmf_lvs_grow 00:17:52.896 ************************************ 00:17:52.896 11:41:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:17:52.896 * Looking for test storage... 00:17:53.154 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:53.154 11:41:22 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:53.154 11:41:22 -- nvmf/common.sh@7 -- # uname -s 00:17:53.154 11:41:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:53.154 11:41:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:53.154 11:41:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:53.154 11:41:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:53.154 11:41:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:53.154 11:41:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:53.154 11:41:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:53.154 11:41:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:53.154 11:41:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:53.154 11:41:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:53.154 11:41:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:53.154 11:41:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:53.154 11:41:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:53.154 11:41:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:53.154 11:41:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:53.154 11:41:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:53.154 11:41:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:53.154 11:41:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:53.154 11:41:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:53.154 11:41:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:17:53.154 11:41:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.154 11:41:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.154 11:41:22 -- paths/export.sh@5 -- # export PATH 00:17:53.154 11:41:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.154 11:41:22 -- nvmf/common.sh@46 -- # : 0 00:17:53.154 11:41:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:53.154 11:41:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:53.154 11:41:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:53.154 11:41:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:53.154 11:41:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:53.154 11:41:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:53.154 11:41:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:53.154 11:41:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:53.154 11:41:22 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:53.154 11:41:22 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:53.154 11:41:22 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:17:53.154 11:41:22 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:53.154 11:41:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:53.154 11:41:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:53.154 11:41:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:53.154 11:41:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:53.154 11:41:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.154 11:41:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:53.154 11:41:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.154 11:41:22 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:53.154 11:41:22 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:53.154 11:41:22 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:53.154 
11:41:22 -- common/autotest_common.sh@10 -- # set +x 00:18:01.259 11:41:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:01.259 11:41:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:01.259 11:41:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:01.259 11:41:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:01.259 11:41:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:01.259 11:41:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:01.259 11:41:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:01.259 11:41:30 -- nvmf/common.sh@294 -- # net_devs=() 00:18:01.259 11:41:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:01.259 11:41:30 -- nvmf/common.sh@295 -- # e810=() 00:18:01.259 11:41:30 -- nvmf/common.sh@295 -- # local -ga e810 00:18:01.259 11:41:30 -- nvmf/common.sh@296 -- # x722=() 00:18:01.259 11:41:30 -- nvmf/common.sh@296 -- # local -ga x722 00:18:01.259 11:41:30 -- nvmf/common.sh@297 -- # mlx=() 00:18:01.259 11:41:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:01.259 11:41:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:01.259 11:41:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:01.259 11:41:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:01.259 11:41:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:01.259 11:41:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:01.259 11:41:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:01.259 11:41:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:01.259 11:41:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:01.259 11:41:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:01.259 11:41:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:01.259 11:41:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:01.259 11:41:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:01.259 11:41:30 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:18:01.259 11:41:30 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:18:01.259 11:41:30 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:18:01.259 11:41:30 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:18:01.259 11:41:30 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:18:01.259 11:41:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:01.259 11:41:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:01.259 11:41:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:01.259 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:01.259 11:41:30 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:01.259 11:41:30 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:01.259 11:41:30 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:01.259 11:41:30 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:01.259 11:41:30 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:01.259 11:41:30 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:01.259 11:41:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:01.259 11:41:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:01.259 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:01.259 11:41:30 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:01.259 11:41:30 -- 
nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:01.259 11:41:30 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:01.259 11:41:30 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:01.259 11:41:30 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:01.259 11:41:30 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:01.259 11:41:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:01.259 11:41:30 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:18:01.259 11:41:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:01.259 11:41:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:01.259 11:41:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:01.259 11:41:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:01.259 11:41:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:01.259 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:01.259 11:41:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:01.259 11:41:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:01.259 11:41:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:01.259 11:41:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:01.259 11:41:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:01.259 11:41:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:01.259 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:01.259 11:41:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:01.259 11:41:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:01.259 11:41:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:01.259 11:41:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:01.259 11:41:30 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:18:01.259 11:41:30 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:18:01.259 11:41:30 -- nvmf/common.sh@408 -- # rdma_device_init 00:18:01.259 11:41:30 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:18:01.259 11:41:30 -- nvmf/common.sh@57 -- # uname 00:18:01.259 11:41:30 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:18:01.259 11:41:30 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:18:01.259 11:41:30 -- nvmf/common.sh@62 -- # modprobe ib_core 00:18:01.259 11:41:30 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:18:01.259 11:41:30 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:18:01.259 11:41:30 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:18:01.259 11:41:30 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:18:01.259 11:41:30 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:18:01.259 11:41:30 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:18:01.259 11:41:30 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:01.259 11:41:30 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:18:01.259 11:41:30 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:01.259 11:41:30 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:01.259 11:41:30 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:01.259 11:41:30 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:01.259 11:41:30 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:01.259 11:41:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:01.259 11:41:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:01.259 11:41:30 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:01.259 
11:41:30 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:01.259 11:41:30 -- nvmf/common.sh@104 -- # continue 2 00:18:01.259 11:41:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:01.259 11:41:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:01.259 11:41:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:01.259 11:41:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:01.259 11:41:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:01.259 11:41:30 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:01.259 11:41:30 -- nvmf/common.sh@104 -- # continue 2 00:18:01.259 11:41:30 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:01.259 11:41:30 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:18:01.259 11:41:30 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:01.259 11:41:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:01.259 11:41:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:01.259 11:41:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:01.259 11:41:30 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:18:01.259 11:41:30 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:18:01.259 11:41:30 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:18:01.259 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:01.259 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:01.259 altname enp217s0f0np0 00:18:01.259 altname ens818f0np0 00:18:01.259 inet 192.168.100.8/24 scope global mlx_0_0 00:18:01.259 valid_lft forever preferred_lft forever 00:18:01.259 11:41:30 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:01.259 11:41:30 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:18:01.259 11:41:30 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:01.259 11:41:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:01.259 11:41:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:01.259 11:41:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:01.259 11:41:30 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:18:01.259 11:41:30 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:18:01.260 11:41:30 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:18:01.260 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:01.260 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:01.260 altname enp217s0f1np1 00:18:01.260 altname ens818f1np1 00:18:01.260 inet 192.168.100.9/24 scope global mlx_0_1 00:18:01.260 valid_lft forever preferred_lft forever 00:18:01.260 11:41:30 -- nvmf/common.sh@410 -- # return 0 00:18:01.260 11:41:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:01.260 11:41:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:01.260 11:41:30 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:18:01.260 11:41:30 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:18:01.260 11:41:30 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:18:01.260 11:41:30 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:01.260 11:41:30 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:01.260 11:41:30 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:01.260 11:41:30 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:01.260 11:41:30 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:01.260 11:41:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:01.260 11:41:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:18:01.260 11:41:30 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:01.260 11:41:30 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:01.260 11:41:30 -- nvmf/common.sh@104 -- # continue 2 00:18:01.260 11:41:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:01.260 11:41:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:01.260 11:41:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:01.260 11:41:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:01.260 11:41:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:01.260 11:41:30 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:01.260 11:41:30 -- nvmf/common.sh@104 -- # continue 2 00:18:01.260 11:41:30 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:01.260 11:41:30 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:18:01.260 11:41:30 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:01.260 11:41:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:01.260 11:41:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:01.260 11:41:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:01.260 11:41:30 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:01.260 11:41:30 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:18:01.260 11:41:30 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:01.260 11:41:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:01.260 11:41:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:01.260 11:41:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:01.260 11:41:30 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:18:01.260 192.168.100.9' 00:18:01.260 11:41:30 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:18:01.260 192.168.100.9' 00:18:01.260 11:41:30 -- nvmf/common.sh@445 -- # head -n 1 00:18:01.260 11:41:30 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:01.260 11:41:30 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:01.260 192.168.100.9' 00:18:01.260 11:41:30 -- nvmf/common.sh@446 -- # tail -n +2 00:18:01.260 11:41:30 -- nvmf/common.sh@446 -- # head -n 1 00:18:01.260 11:41:30 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:01.260 11:41:30 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:18:01.260 11:41:30 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:01.260 11:41:30 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:18:01.260 11:41:30 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:18:01.260 11:41:30 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:18:01.260 11:41:30 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:18:01.260 11:41:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:01.260 11:41:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:01.260 11:41:30 -- common/autotest_common.sh@10 -- # set +x 00:18:01.260 11:41:30 -- nvmf/common.sh@469 -- # nvmfpid=2352080 00:18:01.260 11:41:30 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:01.260 11:41:30 -- nvmf/common.sh@470 -- # waitforlisten 2352080 00:18:01.260 11:41:30 -- common/autotest_common.sh@819 -- # '[' -z 2352080 ']' 00:18:01.260 11:41:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.260 11:41:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:01.260 11:41:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.260 11:41:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:01.260 11:41:30 -- common/autotest_common.sh@10 -- # set +x 00:18:01.260 [2024-07-21 11:41:30.556984] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:01.260 [2024-07-21 11:41:30.557036] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.260 EAL: No free 2048 kB hugepages reported on node 1 00:18:01.260 [2024-07-21 11:41:30.641254] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.260 [2024-07-21 11:41:30.678660] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:01.260 [2024-07-21 11:41:30.678774] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:01.260 [2024-07-21 11:41:30.678785] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:01.260 [2024-07-21 11:41:30.678794] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:01.260 [2024-07-21 11:41:30.678818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.191 11:41:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:02.191 11:41:31 -- common/autotest_common.sh@852 -- # return 0 00:18:02.191 11:41:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:02.191 11:41:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:02.191 11:41:31 -- common/autotest_common.sh@10 -- # set +x 00:18:02.191 11:41:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:02.191 11:41:31 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:02.191 [2024-07-21 11:41:31.565308] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c7f320/0x1c83810) succeed. 00:18:02.191 [2024-07-21 11:41:31.574379] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c80820/0x1cc4ea0) succeed. 
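The lvs_grow_clean run that starts next exercises growing a logical volume store on a file-backed AIO bdev. Stripped of the xtrace noise, the core of it is the sequence below (a sketch assembled from the RPCs in this trace, with the grow step moved next to the rescan for readability; $lvs stands for the lvstore UUID returned at creation):

    rpc=scripts/rpc.py                   # shorthand for the workspace rpc.py
    img=test/nvmf/target/aio_bdev        # backing file, as used in this run

    rm -f "$img" && truncate -s 200M "$img"       # 200 MiB backing file
    $rpc bdev_aio_create "$img" aio_bdev 4096     # AIO bdev with 4 KiB blocks
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49
    $rpc bdev_lvol_create -u "$lvs" lvol 150      # 150 MiB volume on the 200 MiB store

    truncate -s 400M "$img"                       # grow the backing file...
    $rpc bdev_aio_rescan aio_bdev                 # ...let the AIO bdev pick it up...
    $rpc bdev_lvol_grow_lvstore -u "$lvs"         # ...then grow the lvstore itself
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99

With 4 MiB clusters, 200 MiB leaves 49 data clusters after metadata and 400 MiB leaves 99, which is exactly what the (( data_clusters == 49 )) and (( data_clusters == 99 )) checks in the trace assert. Note that in the run below the count stays at 49 until bdev_lvol_grow_lvstore is issued mid-workload; the truncate and rescan alone do not change it.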
00:18:02.462 11:41:31 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:18:02.462 11:41:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:02.462 11:41:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:02.462 11:41:31 -- common/autotest_common.sh@10 -- # set +x 00:18:02.462 ************************************ 00:18:02.462 START TEST lvs_grow_clean 00:18:02.462 ************************************ 00:18:02.462 11:41:31 -- common/autotest_common.sh@1104 -- # lvs_grow 00:18:02.462 11:41:31 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:02.462 11:41:31 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:02.462 11:41:31 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:02.462 11:41:31 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:02.462 11:41:31 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:02.462 11:41:31 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:02.462 11:41:31 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:02.462 11:41:31 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:02.462 11:41:31 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:02.462 11:41:31 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:02.462 11:41:31 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:02.719 11:41:32 -- target/nvmf_lvs_grow.sh@28 -- # lvs=eb68fd86-beb3-4c35-bab8-731db00506ea 00:18:02.719 11:41:32 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb68fd86-beb3-4c35-bab8-731db00506ea 00:18:02.719 11:41:32 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:02.976 11:41:32 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:02.976 11:41:32 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:02.976 11:41:32 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u eb68fd86-beb3-4c35-bab8-731db00506ea lvol 150 00:18:02.976 11:41:32 -- target/nvmf_lvs_grow.sh@33 -- # lvol=491e2a9b-3c7d-4721-a2e4-a017106d4d4e 00:18:02.976 11:41:32 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:02.976 11:41:32 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:03.245 [2024-07-21 11:41:32.509245] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:03.245 [2024-07-21 11:41:32.509302] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:03.245 true 00:18:03.245 11:41:32 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb68fd86-beb3-4c35-bab8-731db00506ea 00:18:03.245 11:41:32 -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:18:03.503 11:41:32 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:03.503 11:41:32 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:03.503 11:41:32 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 491e2a9b-3c7d-4721-a2e4-a017106d4d4e 00:18:03.760 11:41:33 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:18:03.760 [2024-07-21 11:41:33.147317] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:03.760 11:41:33 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:18:04.017 11:41:33 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2352606 00:18:04.017 11:41:33 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:04.017 11:41:33 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2352606 /var/tmp/bdevperf.sock 00:18:04.017 11:41:33 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:04.017 11:41:33 -- common/autotest_common.sh@819 -- # '[' -z 2352606 ']' 00:18:04.017 11:41:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:04.017 11:41:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:04.017 11:41:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:04.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:04.017 11:41:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:04.017 11:41:33 -- common/autotest_common.sh@10 -- # set +x 00:18:04.017 [2024-07-21 11:41:33.343351] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:18:04.017 [2024-07-21 11:41:33.343402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2352606 ] 00:18:04.017 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.017 [2024-07-21 11:41:33.427022] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.274 [2024-07-21 11:41:33.464192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:04.839 11:41:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:04.839 11:41:34 -- common/autotest_common.sh@852 -- # return 0 00:18:04.839 11:41:34 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:05.096 Nvme0n1 00:18:05.096 11:41:34 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:05.353 [ 00:18:05.353 { 00:18:05.353 "name": "Nvme0n1", 00:18:05.353 "aliases": [ 00:18:05.353 "491e2a9b-3c7d-4721-a2e4-a017106d4d4e" 00:18:05.353 ], 00:18:05.353 "product_name": "NVMe disk", 00:18:05.353 "block_size": 4096, 00:18:05.353 "num_blocks": 38912, 00:18:05.353 "uuid": "491e2a9b-3c7d-4721-a2e4-a017106d4d4e", 00:18:05.353 "assigned_rate_limits": { 00:18:05.353 "rw_ios_per_sec": 0, 00:18:05.353 "rw_mbytes_per_sec": 0, 00:18:05.353 "r_mbytes_per_sec": 0, 00:18:05.353 "w_mbytes_per_sec": 0 00:18:05.353 }, 00:18:05.353 "claimed": false, 00:18:05.353 "zoned": false, 00:18:05.353 "supported_io_types": { 00:18:05.353 "read": true, 00:18:05.353 "write": true, 00:18:05.353 "unmap": true, 00:18:05.353 "write_zeroes": true, 00:18:05.353 "flush": true, 00:18:05.353 "reset": true, 00:18:05.353 "compare": true, 00:18:05.353 "compare_and_write": true, 00:18:05.353 "abort": true, 00:18:05.353 "nvme_admin": true, 00:18:05.353 "nvme_io": true 00:18:05.353 }, 00:18:05.353 "memory_domains": [ 00:18:05.353 { 00:18:05.353 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:18:05.353 "dma_device_type": 0 00:18:05.353 } 00:18:05.353 ], 00:18:05.353 "driver_specific": { 00:18:05.353 "nvme": [ 00:18:05.353 { 00:18:05.353 "trid": { 00:18:05.353 "trtype": "RDMA", 00:18:05.353 "adrfam": "IPv4", 00:18:05.353 "traddr": "192.168.100.8", 00:18:05.353 "trsvcid": "4420", 00:18:05.353 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:05.353 }, 00:18:05.353 "ctrlr_data": { 00:18:05.353 "cntlid": 1, 00:18:05.353 "vendor_id": "0x8086", 00:18:05.353 "model_number": "SPDK bdev Controller", 00:18:05.353 "serial_number": "SPDK0", 00:18:05.353 "firmware_revision": "24.01.1", 00:18:05.353 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:05.353 "oacs": { 00:18:05.353 "security": 0, 00:18:05.353 "format": 0, 00:18:05.353 "firmware": 0, 00:18:05.353 "ns_manage": 0 00:18:05.353 }, 00:18:05.353 "multi_ctrlr": true, 00:18:05.353 "ana_reporting": false 00:18:05.353 }, 00:18:05.353 "vs": { 00:18:05.353 "nvme_version": "1.3" 00:18:05.353 }, 00:18:05.353 "ns_data": { 00:18:05.353 "id": 1, 00:18:05.353 "can_share": true 00:18:05.353 } 00:18:05.353 } 00:18:05.353 ], 00:18:05.353 "mp_policy": "active_passive" 00:18:05.353 } 00:18:05.353 } 00:18:05.353 ] 00:18:05.353 11:41:34 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2352720 00:18:05.353 11:41:34 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:05.353 11:41:34 -- 
target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:05.353 Running I/O for 10 seconds... 00:18:06.286 Latency(us) 00:18:06.286 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.286 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:06.286 Nvme0n1 : 1.00 36642.00 143.13 0.00 0.00 0.00 0.00 0.00 00:18:06.286 =================================================================================================================== 00:18:06.286 Total : 36642.00 143.13 0.00 0.00 0.00 0.00 0.00 00:18:06.286 00:18:07.215 11:41:36 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u eb68fd86-beb3-4c35-bab8-731db00506ea 00:18:07.471 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:07.471 Nvme0n1 : 2.00 37008.00 144.56 0.00 0.00 0.00 0.00 0.00 00:18:07.471 =================================================================================================================== 00:18:07.471 Total : 37008.00 144.56 0.00 0.00 0.00 0.00 0.00 00:18:07.471 00:18:07.471 true 00:18:07.471 11:41:36 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb68fd86-beb3-4c35-bab8-731db00506ea 00:18:07.471 11:41:36 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:07.471 11:41:36 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:07.471 11:41:36 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:07.472 11:41:36 -- target/nvmf_lvs_grow.sh@65 -- # wait 2352720 00:18:08.404 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:08.404 Nvme0n1 : 3.00 37120.00 145.00 0.00 0.00 0.00 0.00 0.00 00:18:08.404 =================================================================================================================== 00:18:08.404 Total : 37120.00 145.00 0.00 0.00 0.00 0.00 0.00 00:18:08.404 00:18:09.340 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:09.340 Nvme0n1 : 4.00 37241.25 145.47 0.00 0.00 0.00 0.00 0.00 00:18:09.340 =================================================================================================================== 00:18:09.340 Total : 37241.25 145.47 0.00 0.00 0.00 0.00 0.00 00:18:09.340 00:18:10.280 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:10.280 Nvme0n1 : 5.00 37324.40 145.80 0.00 0.00 0.00 0.00 0.00 00:18:10.280 =================================================================================================================== 00:18:10.280 Total : 37324.40 145.80 0.00 0.00 0.00 0.00 0.00 00:18:10.280 00:18:11.648 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:11.648 Nvme0n1 : 6.00 37360.33 145.94 0.00 0.00 0.00 0.00 0.00 00:18:11.648 =================================================================================================================== 00:18:11.648 Total : 37360.33 145.94 0.00 0.00 0.00 0.00 0.00 00:18:11.648 00:18:12.580 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:12.580 Nvme0n1 : 7.00 37412.29 146.14 0.00 0.00 0.00 0.00 0.00 00:18:12.580 =================================================================================================================== 00:18:12.580 Total : 37412.29 146.14 0.00 0.00 0.00 0.00 0.00 00:18:12.580 
00:18:13.509 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:13.509 Nvme0n1 : 8.00 37456.38 146.31 0.00 0.00 0.00 0.00 0.00 00:18:13.509 =================================================================================================================== 00:18:13.509 Total : 37456.38 146.31 0.00 0.00 0.00 0.00 0.00 00:18:13.509 00:18:14.438 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:14.438 Nvme0n1 : 9.00 37493.22 146.46 0.00 0.00 0.00 0.00 0.00 00:18:14.438 =================================================================================================================== 00:18:14.438 Total : 37493.22 146.46 0.00 0.00 0.00 0.00 0.00 00:18:14.438 00:18:15.368 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:15.368 Nvme0n1 : 10.00 37513.80 146.54 0.00 0.00 0.00 0.00 0.00 00:18:15.368 =================================================================================================================== 00:18:15.368 Total : 37513.80 146.54 0.00 0.00 0.00 0.00 0.00 00:18:15.368 00:18:15.368 00:18:15.368 Latency(us) 00:18:15.368 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.368 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:15.368 Nvme0n1 : 10.00 37514.81 146.54 0.00 0.00 3409.84 2542.80 13736.35 00:18:15.368 =================================================================================================================== 00:18:15.368 Total : 37514.81 146.54 0.00 0.00 3409.84 2542.80 13736.35 00:18:15.368 0 00:18:15.368 11:41:44 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2352606 00:18:15.368 11:41:44 -- common/autotest_common.sh@926 -- # '[' -z 2352606 ']' 00:18:15.368 11:41:44 -- common/autotest_common.sh@930 -- # kill -0 2352606 00:18:15.368 11:41:44 -- common/autotest_common.sh@931 -- # uname 00:18:15.368 11:41:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:15.368 11:41:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2352606 00:18:15.368 11:41:44 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:15.368 11:41:44 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:15.368 11:41:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2352606' 00:18:15.368 killing process with pid 2352606 00:18:15.368 11:41:44 -- common/autotest_common.sh@945 -- # kill 2352606 00:18:15.368 Received shutdown signal, test time was about 10.000000 seconds 00:18:15.368 00:18:15.368 Latency(us) 00:18:15.368 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.368 =================================================================================================================== 00:18:15.368 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:15.369 11:41:44 -- common/autotest_common.sh@950 -- # wait 2352606 00:18:15.626 11:41:44 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:15.882 11:41:45 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb68fd86-beb3-4c35-bab8-731db00506ea 00:18:15.882 11:41:45 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:18:15.882 11:41:45 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:18:15.882 11:41:45 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:18:15.882 11:41:45 -- target/nvmf_lvs_grow.sh@83 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:16.139 [2024-07-21 11:41:45.426414] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:16.139 11:41:45 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb68fd86-beb3-4c35-bab8-731db00506ea 00:18:16.139 11:41:45 -- common/autotest_common.sh@640 -- # local es=0 00:18:16.139 11:41:45 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb68fd86-beb3-4c35-bab8-731db00506ea 00:18:16.139 11:41:45 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:16.139 11:41:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:16.139 11:41:45 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:16.139 11:41:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:16.139 11:41:45 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:16.139 11:41:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:16.139 11:41:45 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:16.139 11:41:45 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:18:16.139 11:41:45 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb68fd86-beb3-4c35-bab8-731db00506ea 00:18:16.396 request: 00:18:16.396 { 00:18:16.396 "uuid": "eb68fd86-beb3-4c35-bab8-731db00506ea", 00:18:16.396 "method": "bdev_lvol_get_lvstores", 00:18:16.396 "req_id": 1 00:18:16.396 } 00:18:16.396 Got JSON-RPC error response 00:18:16.396 response: 00:18:16.396 { 00:18:16.396 "code": -19, 00:18:16.396 "message": "No such device" 00:18:16.396 } 00:18:16.396 11:41:45 -- common/autotest_common.sh@643 -- # es=1 00:18:16.396 11:41:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:16.396 11:41:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:16.396 11:41:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:16.396 11:41:45 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:16.396 aio_bdev 00:18:16.396 11:41:45 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 491e2a9b-3c7d-4721-a2e4-a017106d4d4e 00:18:16.396 11:41:45 -- common/autotest_common.sh@887 -- # local bdev_name=491e2a9b-3c7d-4721-a2e4-a017106d4d4e 00:18:16.396 11:41:45 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:16.396 11:41:45 -- common/autotest_common.sh@889 -- # local i 00:18:16.396 11:41:45 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:16.396 11:41:45 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:16.396 11:41:45 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:16.653 11:41:45 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 491e2a9b-3c7d-4721-a2e4-a017106d4d4e -t 2000 00:18:16.910 [ 00:18:16.910 { 00:18:16.910 "name": 
"491e2a9b-3c7d-4721-a2e4-a017106d4d4e", 00:18:16.910 "aliases": [ 00:18:16.910 "lvs/lvol" 00:18:16.910 ], 00:18:16.910 "product_name": "Logical Volume", 00:18:16.910 "block_size": 4096, 00:18:16.910 "num_blocks": 38912, 00:18:16.910 "uuid": "491e2a9b-3c7d-4721-a2e4-a017106d4d4e", 00:18:16.910 "assigned_rate_limits": { 00:18:16.910 "rw_ios_per_sec": 0, 00:18:16.910 "rw_mbytes_per_sec": 0, 00:18:16.910 "r_mbytes_per_sec": 0, 00:18:16.910 "w_mbytes_per_sec": 0 00:18:16.910 }, 00:18:16.910 "claimed": false, 00:18:16.910 "zoned": false, 00:18:16.910 "supported_io_types": { 00:18:16.910 "read": true, 00:18:16.910 "write": true, 00:18:16.910 "unmap": true, 00:18:16.910 "write_zeroes": true, 00:18:16.910 "flush": false, 00:18:16.910 "reset": true, 00:18:16.910 "compare": false, 00:18:16.910 "compare_and_write": false, 00:18:16.910 "abort": false, 00:18:16.910 "nvme_admin": false, 00:18:16.910 "nvme_io": false 00:18:16.910 }, 00:18:16.910 "driver_specific": { 00:18:16.910 "lvol": { 00:18:16.910 "lvol_store_uuid": "eb68fd86-beb3-4c35-bab8-731db00506ea", 00:18:16.910 "base_bdev": "aio_bdev", 00:18:16.910 "thin_provision": false, 00:18:16.910 "snapshot": false, 00:18:16.910 "clone": false, 00:18:16.910 "esnap_clone": false 00:18:16.910 } 00:18:16.910 } 00:18:16.910 } 00:18:16.910 ] 00:18:16.910 11:41:46 -- common/autotest_common.sh@895 -- # return 0 00:18:16.910 11:41:46 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb68fd86-beb3-4c35-bab8-731db00506ea 00:18:16.910 11:41:46 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:18:16.910 11:41:46 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:18:16.910 11:41:46 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb68fd86-beb3-4c35-bab8-731db00506ea 00:18:16.910 11:41:46 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:18:17.167 11:41:46 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:18:17.167 11:41:46 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 491e2a9b-3c7d-4721-a2e4-a017106d4d4e 00:18:17.424 11:41:46 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u eb68fd86-beb3-4c35-bab8-731db00506ea 00:18:17.424 11:41:46 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:17.681 11:41:46 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:17.681 00:18:17.681 real 0m15.337s 00:18:17.681 user 0m15.178s 00:18:17.681 sys 0m1.212s 00:18:17.681 11:41:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:17.681 11:41:46 -- common/autotest_common.sh@10 -- # set +x 00:18:17.681 ************************************ 00:18:17.681 END TEST lvs_grow_clean 00:18:17.681 ************************************ 00:18:17.681 11:41:47 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:18:17.681 11:41:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:17.681 11:41:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:17.681 11:41:47 -- common/autotest_common.sh@10 -- # set +x 00:18:17.681 ************************************ 00:18:17.681 START TEST lvs_grow_dirty 00:18:17.681 ************************************ 00:18:17.681 11:41:47 -- 
common/autotest_common.sh@1104 -- # lvs_grow dirty 00:18:17.681 11:41:47 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:17.681 11:41:47 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:17.681 11:41:47 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:17.681 11:41:47 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:17.681 11:41:47 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:17.681 11:41:47 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:17.681 11:41:47 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:17.681 11:41:47 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:17.681 11:41:47 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:17.938 11:41:47 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:17.938 11:41:47 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:18.195 11:41:47 -- target/nvmf_lvs_grow.sh@28 -- # lvs=b7c86d3f-3402-4994-b0ce-00c65b7ccaea 00:18:18.195 11:41:47 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b7c86d3f-3402-4994-b0ce-00c65b7ccaea 00:18:18.195 11:41:47 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:18.195 11:41:47 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:18.195 11:41:47 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:18.195 11:41:47 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b7c86d3f-3402-4994-b0ce-00c65b7ccaea lvol 150 00:18:18.453 11:41:47 -- target/nvmf_lvs_grow.sh@33 -- # lvol=f2cb9f92-1016-4ede-a366-83a2cf5e3716 00:18:18.453 11:41:47 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:18.453 11:41:47 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:18.711 [2024-07-21 11:41:47.887801] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:18.711 [2024-07-21 11:41:47.887855] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:18.711 true 00:18:18.711 11:41:47 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:18.711 11:41:47 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b7c86d3f-3402-4994-b0ce-00c65b7ccaea 00:18:18.711 11:41:48 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:18.711 11:41:48 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:18.969 11:41:48 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
f2cb9f92-1016-4ede-a366-83a2cf5e3716 00:18:18.969 11:41:48 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:18:19.226 11:41:48 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:18:19.484 11:41:48 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2355190 00:18:19.484 11:41:48 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:19.484 11:41:48 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:19.484 11:41:48 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2355190 /var/tmp/bdevperf.sock 00:18:19.484 11:41:48 -- common/autotest_common.sh@819 -- # '[' -z 2355190 ']' 00:18:19.484 11:41:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:19.484 11:41:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:19.484 11:41:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:19.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:19.484 11:41:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:19.484 11:41:48 -- common/autotest_common.sh@10 -- # set +x 00:18:19.484 [2024-07-21 11:41:48.775095] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:19.484 [2024-07-21 11:41:48.775148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2355190 ] 00:18:19.484 EAL: No free 2048 kB hugepages reported on node 1 00:18:19.484 [2024-07-21 11:41:48.859468] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.484 [2024-07-21 11:41:48.897927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:20.418 11:41:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:20.418 11:41:49 -- common/autotest_common.sh@852 -- # return 0 00:18:20.418 11:41:49 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:20.418 Nvme0n1 00:18:20.418 11:41:49 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:20.676 [ 00:18:20.676 { 00:18:20.676 "name": "Nvme0n1", 00:18:20.676 "aliases": [ 00:18:20.676 "f2cb9f92-1016-4ede-a366-83a2cf5e3716" 00:18:20.676 ], 00:18:20.676 "product_name": "NVMe disk", 00:18:20.676 "block_size": 4096, 00:18:20.676 "num_blocks": 38912, 00:18:20.676 "uuid": "f2cb9f92-1016-4ede-a366-83a2cf5e3716", 00:18:20.676 "assigned_rate_limits": { 00:18:20.676 "rw_ios_per_sec": 0, 00:18:20.676 "rw_mbytes_per_sec": 0, 00:18:20.676 "r_mbytes_per_sec": 0, 00:18:20.677 "w_mbytes_per_sec": 0 00:18:20.677 }, 00:18:20.677 "claimed": false, 00:18:20.677 "zoned": false, 00:18:20.677 "supported_io_types": { 00:18:20.677 "read": true, 00:18:20.677 "write": true, 
00:18:20.677 "unmap": true, 00:18:20.677 "write_zeroes": true, 00:18:20.677 "flush": true, 00:18:20.677 "reset": true, 00:18:20.677 "compare": true, 00:18:20.677 "compare_and_write": true, 00:18:20.677 "abort": true, 00:18:20.677 "nvme_admin": true, 00:18:20.677 "nvme_io": true 00:18:20.677 }, 00:18:20.677 "memory_domains": [ 00:18:20.677 { 00:18:20.677 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:18:20.677 "dma_device_type": 0 00:18:20.677 } 00:18:20.677 ], 00:18:20.677 "driver_specific": { 00:18:20.677 "nvme": [ 00:18:20.677 { 00:18:20.677 "trid": { 00:18:20.677 "trtype": "RDMA", 00:18:20.677 "adrfam": "IPv4", 00:18:20.677 "traddr": "192.168.100.8", 00:18:20.677 "trsvcid": "4420", 00:18:20.677 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:20.677 }, 00:18:20.677 "ctrlr_data": { 00:18:20.677 "cntlid": 1, 00:18:20.677 "vendor_id": "0x8086", 00:18:20.677 "model_number": "SPDK bdev Controller", 00:18:20.677 "serial_number": "SPDK0", 00:18:20.677 "firmware_revision": "24.01.1", 00:18:20.677 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:20.677 "oacs": { 00:18:20.677 "security": 0, 00:18:20.677 "format": 0, 00:18:20.677 "firmware": 0, 00:18:20.677 "ns_manage": 0 00:18:20.677 }, 00:18:20.677 "multi_ctrlr": true, 00:18:20.677 "ana_reporting": false 00:18:20.677 }, 00:18:20.677 "vs": { 00:18:20.677 "nvme_version": "1.3" 00:18:20.677 }, 00:18:20.677 "ns_data": { 00:18:20.677 "id": 1, 00:18:20.677 "can_share": true 00:18:20.677 } 00:18:20.677 } 00:18:20.677 ], 00:18:20.677 "mp_policy": "active_passive" 00:18:20.677 } 00:18:20.677 } 00:18:20.677 ] 00:18:20.677 11:41:49 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2355438 00:18:20.677 11:41:49 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:20.677 11:41:49 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:20.677 Running I/O for 10 seconds... 
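
A note on the harness pattern visible in the trace: bdevperf is launched with -z, so it brings up its own RPC server on /var/tmp/bdevperf.sock and idles; the controller is attached through that socket, perform_tests then kicks off the 10-second run, and the parent script is free to grow the lvstore while I/O is in flight (the @60 grow call lands a couple of seconds into the table below). A condensed sketch, with $SPDK shortened and the options exactly as traced:

  # sketch: drive bdevperf remotely over its private RPC socket
  $SPDK/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  $SPDK/scripts/rpc.py bdev_lvol_grow_lvstore -u b7c86d3f-3402-4994-b0ce-00c65b7ccaea   # grow mid-run

The -S 1 flag is why the per-second rows appear below; -q 128 and -o 4096 match the "depth: 128, IO size: 4096" header in the results.
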
00:18:22.050 Latency(us) 00:18:22.050 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.050 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:22.050 Nvme0n1 : 1.00 36421.00 142.27 0.00 0.00 0.00 0.00 0.00 00:18:22.050 =================================================================================================================== 00:18:22.050 Total : 36421.00 142.27 0.00 0.00 0.00 0.00 0.00 00:18:22.050 00:18:22.615 11:41:51 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b7c86d3f-3402-4994-b0ce-00c65b7ccaea 00:18:22.872 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:22.872 Nvme0n1 : 2.00 36911.00 144.18 0.00 0.00 0.00 0.00 0.00 00:18:22.872 =================================================================================================================== 00:18:22.872 Total : 36911.00 144.18 0.00 0.00 0.00 0.00 0.00 00:18:22.872 00:18:22.872 true 00:18:22.872 11:41:52 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b7c86d3f-3402-4994-b0ce-00c65b7ccaea 00:18:22.872 11:41:52 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:23.130 11:41:52 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:23.130 11:41:52 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:23.130 11:41:52 -- target/nvmf_lvs_grow.sh@65 -- # wait 2355438 00:18:23.700 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:23.700 Nvme0n1 : 3.00 37046.67 144.71 0.00 0.00 0.00 0.00 0.00 00:18:23.700 =================================================================================================================== 00:18:23.700 Total : 37046.67 144.71 0.00 0.00 0.00 0.00 0.00 00:18:23.700 00:18:24.671 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:24.671 Nvme0n1 : 4.00 37177.00 145.22 0.00 0.00 0.00 0.00 0.00 00:18:24.671 =================================================================================================================== 00:18:24.671 Total : 37177.00 145.22 0.00 0.00 0.00 0.00 0.00 00:18:24.671 00:18:26.041 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:26.041 Nvme0n1 : 5.00 37274.00 145.60 0.00 0.00 0.00 0.00 0.00 00:18:26.041 =================================================================================================================== 00:18:26.041 Total : 37274.00 145.60 0.00 0.00 0.00 0.00 0.00 00:18:26.041 00:18:26.973 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:26.973 Nvme0n1 : 6.00 37354.33 145.92 0.00 0.00 0.00 0.00 0.00 00:18:26.973 =================================================================================================================== 00:18:26.973 Total : 37354.33 145.92 0.00 0.00 0.00 0.00 0.00 00:18:26.973 00:18:27.905 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:27.905 Nvme0n1 : 7.00 37403.71 146.11 0.00 0.00 0.00 0.00 0.00 00:18:27.905 =================================================================================================================== 00:18:27.905 Total : 37403.71 146.11 0.00 0.00 0.00 0.00 0.00 00:18:27.905 00:18:28.892 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:28.892 Nvme0n1 : 8.00 37448.50 146.28 0.00 0.00 0.00 0.00 0.00 00:18:28.892 
=================================================================================================================== 00:18:28.892 Total : 37448.50 146.28 0.00 0.00 0.00 0.00 0.00 00:18:28.892 00:18:29.824 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:29.824 Nvme0n1 : 9.00 37479.56 146.40 0.00 0.00 0.00 0.00 0.00 00:18:29.824 =================================================================================================================== 00:18:29.824 Total : 37479.56 146.40 0.00 0.00 0.00 0.00 0.00 00:18:29.824 00:18:30.756 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:30.756 Nvme0n1 : 10.00 37488.10 146.44 0.00 0.00 0.00 0.00 0.00 00:18:30.756 =================================================================================================================== 00:18:30.756 Total : 37488.10 146.44 0.00 0.00 0.00 0.00 0.00 00:18:30.756 00:18:30.756 00:18:30.756 Latency(us) 00:18:30.756 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.757 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:30.757 Nvme0n1 : 10.00 37489.03 146.44 0.00 0.00 3411.94 2241.33 14050.92 00:18:30.757 =================================================================================================================== 00:18:30.757 Total : 37489.03 146.44 0.00 0.00 3411.94 2241.33 14050.92 00:18:30.757 0 00:18:30.757 11:42:00 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2355190 00:18:30.757 11:42:00 -- common/autotest_common.sh@926 -- # '[' -z 2355190 ']' 00:18:30.757 11:42:00 -- common/autotest_common.sh@930 -- # kill -0 2355190 00:18:30.757 11:42:00 -- common/autotest_common.sh@931 -- # uname 00:18:30.757 11:42:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:30.757 11:42:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2355190 00:18:30.757 11:42:00 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:30.757 11:42:00 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:30.757 11:42:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2355190' 00:18:30.757 killing process with pid 2355190 00:18:30.757 11:42:00 -- common/autotest_common.sh@945 -- # kill 2355190 00:18:30.757 Received shutdown signal, test time was about 10.000000 seconds 00:18:30.757 00:18:30.757 Latency(us) 00:18:30.757 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.757 =================================================================================================================== 00:18:30.757 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:30.757 11:42:00 -- common/autotest_common.sh@950 -- # wait 2355190 00:18:31.014 11:42:00 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:31.271 11:42:00 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b7c86d3f-3402-4994-b0ce-00c65b7ccaea 00:18:31.271 11:42:00 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:18:31.528 11:42:00 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:18:31.528 11:42:00 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:18:31.528 11:42:00 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 2352080 00:18:31.528 11:42:00 -- target/nvmf_lvs_grow.sh@74 -- # wait 2352080 00:18:31.528 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 2352080 Killed "${NVMF_APP[@]}" "$@" 00:18:31.528 11:42:00 -- target/nvmf_lvs_grow.sh@74 -- # true 00:18:31.528 11:42:00 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:18:31.528 11:42:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:31.528 11:42:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:31.528 11:42:00 -- common/autotest_common.sh@10 -- # set +x 00:18:31.528 11:42:00 -- nvmf/common.sh@469 -- # nvmfpid=2357380 00:18:31.528 11:42:00 -- nvmf/common.sh@470 -- # waitforlisten 2357380 00:18:31.529 11:42:00 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:31.529 11:42:00 -- common/autotest_common.sh@819 -- # '[' -z 2357380 ']' 00:18:31.529 11:42:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.529 11:42:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:31.529 11:42:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.529 11:42:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:31.529 11:42:00 -- common/autotest_common.sh@10 -- # set +x 00:18:31.529 [2024-07-21 11:42:00.814966] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:31.529 [2024-07-21 11:42:00.815019] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.529 EAL: No free 2048 kB hugepages reported on node 1 00:18:31.529 [2024-07-21 11:42:00.902766] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.529 [2024-07-21 11:42:00.939590] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:31.529 [2024-07-21 11:42:00.939701] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:31.529 [2024-07-21 11:42:00.939711] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:31.529 [2024-07-21 11:42:00.939721] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
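
What follows is the crash-consistency half of the dirty test: the original target was SIGKILLed above with the grown lvstore never cleanly shut down, a fresh nvmf_tgt was started, and re-creating aio_bdev triggers blobstore recovery (the bs_recover / "Recover: blob" notices just below) before the free and total cluster counts are re-checked. A condensed sketch; the pid variable is illustrative and $SPDK again stands for the workspace path:

  # sketch: kill dirty, restart, recover, verify
  kill -9 "$nvmf_pid"                                # no clean lvstore unload
  $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # fresh target process
  $SPDK/scripts/rpc.py bdev_aio_create $SPDK/test/nvmf/target/aio_bdev aio_bdev 4096
  $SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u b7c86d3f-3402-4994-b0ce-00c65b7ccaea \
      | jq -r '.[0].free_clusters'                   # expect 61 after recovery

Recovery happens as a side effect of examine when the aio bdev reappears; no explicit "recover" RPC is issued, which is the point the test is exercising.
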
00:18:31.529 [2024-07-21 11:42:00.939743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.476 11:42:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:32.476 11:42:01 -- common/autotest_common.sh@852 -- # return 0 00:18:32.476 11:42:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:32.476 11:42:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:32.476 11:42:01 -- common/autotest_common.sh@10 -- # set +x 00:18:32.476 11:42:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:32.476 11:42:01 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:32.476 [2024-07-21 11:42:01.804214] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:32.476 [2024-07-21 11:42:01.804312] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:32.476 [2024-07-21 11:42:01.804338] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:32.476 11:42:01 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:18:32.476 11:42:01 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev f2cb9f92-1016-4ede-a366-83a2cf5e3716 00:18:32.476 11:42:01 -- common/autotest_common.sh@887 -- # local bdev_name=f2cb9f92-1016-4ede-a366-83a2cf5e3716 00:18:32.476 11:42:01 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:32.476 11:42:01 -- common/autotest_common.sh@889 -- # local i 00:18:32.476 11:42:01 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:32.476 11:42:01 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:32.476 11:42:01 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:32.734 11:42:01 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f2cb9f92-1016-4ede-a366-83a2cf5e3716 -t 2000 00:18:32.734 [ 00:18:32.734 { 00:18:32.734 "name": "f2cb9f92-1016-4ede-a366-83a2cf5e3716", 00:18:32.734 "aliases": [ 00:18:32.734 "lvs/lvol" 00:18:32.734 ], 00:18:32.734 "product_name": "Logical Volume", 00:18:32.734 "block_size": 4096, 00:18:32.734 "num_blocks": 38912, 00:18:32.734 "uuid": "f2cb9f92-1016-4ede-a366-83a2cf5e3716", 00:18:32.734 "assigned_rate_limits": { 00:18:32.734 "rw_ios_per_sec": 0, 00:18:32.734 "rw_mbytes_per_sec": 0, 00:18:32.734 "r_mbytes_per_sec": 0, 00:18:32.734 "w_mbytes_per_sec": 0 00:18:32.734 }, 00:18:32.734 "claimed": false, 00:18:32.734 "zoned": false, 00:18:32.734 "supported_io_types": { 00:18:32.734 "read": true, 00:18:32.734 "write": true, 00:18:32.734 "unmap": true, 00:18:32.734 "write_zeroes": true, 00:18:32.734 "flush": false, 00:18:32.734 "reset": true, 00:18:32.734 "compare": false, 00:18:32.734 "compare_and_write": false, 00:18:32.734 "abort": false, 00:18:32.734 "nvme_admin": false, 00:18:32.734 "nvme_io": false 00:18:32.734 }, 00:18:32.734 "driver_specific": { 00:18:32.734 "lvol": { 00:18:32.734 "lvol_store_uuid": "b7c86d3f-3402-4994-b0ce-00c65b7ccaea", 00:18:32.734 "base_bdev": "aio_bdev", 00:18:32.735 "thin_provision": false, 00:18:32.735 "snapshot": false, 00:18:32.735 "clone": false, 00:18:32.735 "esnap_clone": false 00:18:32.735 } 00:18:32.735 } 00:18:32.735 } 00:18:32.735 ] 00:18:32.735 11:42:02 -- common/autotest_common.sh@895 -- # return 0 00:18:32.735 11:42:02 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b7c86d3f-3402-4994-b0ce-00c65b7ccaea 00:18:32.735 11:42:02 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:18:32.992 11:42:02 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:18:32.992 11:42:02 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b7c86d3f-3402-4994-b0ce-00c65b7ccaea 00:18:32.992 11:42:02 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:18:33.250 11:42:02 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:18:33.250 11:42:02 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:33.250 [2024-07-21 11:42:02.624555] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:33.250 11:42:02 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b7c86d3f-3402-4994-b0ce-00c65b7ccaea 00:18:33.250 11:42:02 -- common/autotest_common.sh@640 -- # local es=0 00:18:33.250 11:42:02 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b7c86d3f-3402-4994-b0ce-00c65b7ccaea 00:18:33.250 11:42:02 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:33.250 11:42:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:33.250 11:42:02 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:33.250 11:42:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:33.250 11:42:02 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:33.250 11:42:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:33.250 11:42:02 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:33.250 11:42:02 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:18:33.250 11:42:02 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b7c86d3f-3402-4994-b0ce-00c65b7ccaea 00:18:33.508 request: 00:18:33.508 { 00:18:33.508 "uuid": "b7c86d3f-3402-4994-b0ce-00c65b7ccaea", 00:18:33.508 "method": "bdev_lvol_get_lvstores", 00:18:33.508 "req_id": 1 00:18:33.508 } 00:18:33.508 Got JSON-RPC error response 00:18:33.508 response: 00:18:33.508 { 00:18:33.508 "code": -19, 00:18:33.508 "message": "No such device" 00:18:33.508 } 00:18:33.508 11:42:02 -- common/autotest_common.sh@643 -- # es=1 00:18:33.508 11:42:02 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:33.508 11:42:02 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:33.508 11:42:02 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:33.508 11:42:02 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:33.765 aio_bdev 00:18:33.765 11:42:02 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev f2cb9f92-1016-4ede-a366-83a2cf5e3716 00:18:33.765 11:42:02 -- common/autotest_common.sh@887 -- # local 
bdev_name=f2cb9f92-1016-4ede-a366-83a2cf5e3716 00:18:33.765 11:42:02 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:33.765 11:42:02 -- common/autotest_common.sh@889 -- # local i 00:18:33.765 11:42:02 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:33.765 11:42:02 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:33.765 11:42:02 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:33.765 11:42:03 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f2cb9f92-1016-4ede-a366-83a2cf5e3716 -t 2000 00:18:34.021 [ 00:18:34.021 { 00:18:34.021 "name": "f2cb9f92-1016-4ede-a366-83a2cf5e3716", 00:18:34.021 "aliases": [ 00:18:34.021 "lvs/lvol" 00:18:34.021 ], 00:18:34.021 "product_name": "Logical Volume", 00:18:34.021 "block_size": 4096, 00:18:34.021 "num_blocks": 38912, 00:18:34.021 "uuid": "f2cb9f92-1016-4ede-a366-83a2cf5e3716", 00:18:34.021 "assigned_rate_limits": { 00:18:34.021 "rw_ios_per_sec": 0, 00:18:34.021 "rw_mbytes_per_sec": 0, 00:18:34.021 "r_mbytes_per_sec": 0, 00:18:34.021 "w_mbytes_per_sec": 0 00:18:34.021 }, 00:18:34.021 "claimed": false, 00:18:34.021 "zoned": false, 00:18:34.021 "supported_io_types": { 00:18:34.021 "read": true, 00:18:34.021 "write": true, 00:18:34.021 "unmap": true, 00:18:34.021 "write_zeroes": true, 00:18:34.021 "flush": false, 00:18:34.021 "reset": true, 00:18:34.021 "compare": false, 00:18:34.021 "compare_and_write": false, 00:18:34.021 "abort": false, 00:18:34.021 "nvme_admin": false, 00:18:34.021 "nvme_io": false 00:18:34.021 }, 00:18:34.021 "driver_specific": { 00:18:34.021 "lvol": { 00:18:34.021 "lvol_store_uuid": "b7c86d3f-3402-4994-b0ce-00c65b7ccaea", 00:18:34.021 "base_bdev": "aio_bdev", 00:18:34.021 "thin_provision": false, 00:18:34.021 "snapshot": false, 00:18:34.021 "clone": false, 00:18:34.021 "esnap_clone": false 00:18:34.021 } 00:18:34.021 } 00:18:34.021 } 00:18:34.021 ] 00:18:34.021 11:42:03 -- common/autotest_common.sh@895 -- # return 0 00:18:34.021 11:42:03 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b7c86d3f-3402-4994-b0ce-00c65b7ccaea 00:18:34.021 11:42:03 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:18:34.278 11:42:03 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:18:34.278 11:42:03 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b7c86d3f-3402-4994-b0ce-00c65b7ccaea 00:18:34.278 11:42:03 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:18:34.278 11:42:03 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:18:34.278 11:42:03 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f2cb9f92-1016-4ede-a366-83a2cf5e3716 00:18:34.533 11:42:03 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b7c86d3f-3402-4994-b0ce-00c65b7ccaea 00:18:34.533 11:42:03 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:34.790 11:42:04 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:34.790 00:18:34.790 real 0m17.076s 00:18:34.790 user 0m44.199s 00:18:34.790 sys 0m3.389s 00:18:34.790 11:42:04 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:18:34.790 11:42:04 -- common/autotest_common.sh@10 -- # set +x 00:18:34.790 ************************************ 00:18:34.790 END TEST lvs_grow_dirty 00:18:34.790 ************************************ 00:18:34.790 11:42:04 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:18:34.790 11:42:04 -- common/autotest_common.sh@796 -- # type=--id 00:18:34.790 11:42:04 -- common/autotest_common.sh@797 -- # id=0 00:18:34.790 11:42:04 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:18:34.790 11:42:04 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:34.790 11:42:04 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:18:34.790 11:42:04 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:18:34.790 11:42:04 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:18:34.790 11:42:04 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:34.790 nvmf_trace.0 00:18:34.791 11:42:04 -- common/autotest_common.sh@811 -- # return 0 00:18:34.791 11:42:04 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:18:34.791 11:42:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:34.791 11:42:04 -- nvmf/common.sh@116 -- # sync 00:18:34.791 11:42:04 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:18:34.791 11:42:04 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:18:34.791 11:42:04 -- nvmf/common.sh@119 -- # set +e 00:18:34.791 11:42:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:34.791 11:42:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:18:34.791 rmmod nvme_rdma 00:18:35.048 rmmod nvme_fabrics 00:18:35.048 11:42:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:35.048 11:42:04 -- nvmf/common.sh@123 -- # set -e 00:18:35.048 11:42:04 -- nvmf/common.sh@124 -- # return 0 00:18:35.048 11:42:04 -- nvmf/common.sh@477 -- # '[' -n 2357380 ']' 00:18:35.048 11:42:04 -- nvmf/common.sh@478 -- # killprocess 2357380 00:18:35.048 11:42:04 -- common/autotest_common.sh@926 -- # '[' -z 2357380 ']' 00:18:35.048 11:42:04 -- common/autotest_common.sh@930 -- # kill -0 2357380 00:18:35.048 11:42:04 -- common/autotest_common.sh@931 -- # uname 00:18:35.048 11:42:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:35.048 11:42:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2357380 00:18:35.048 11:42:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:35.048 11:42:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:35.048 11:42:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2357380' 00:18:35.048 killing process with pid 2357380 00:18:35.048 11:42:04 -- common/autotest_common.sh@945 -- # kill 2357380 00:18:35.048 11:42:04 -- common/autotest_common.sh@950 -- # wait 2357380 00:18:35.048 11:42:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:35.048 11:42:04 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:18:35.048 00:18:35.048 real 0m42.243s 00:18:35.048 user 1m5.737s 00:18:35.048 sys 0m11.226s 00:18:35.048 11:42:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:35.048 11:42:04 -- common/autotest_common.sh@10 -- # set +x 00:18:35.048 ************************************ 00:18:35.048 END TEST nvmf_lvs_grow 00:18:35.048 ************************************ 00:18:35.306 11:42:04 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:18:35.306 11:42:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:35.306 11:42:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:35.306 11:42:04 -- common/autotest_common.sh@10 -- # set +x 00:18:35.306 ************************************ 00:18:35.306 START TEST nvmf_bdev_io_wait 00:18:35.306 ************************************ 00:18:35.306 11:42:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:18:35.306 * Looking for test storage... 00:18:35.306 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:35.306 11:42:04 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:35.306 11:42:04 -- nvmf/common.sh@7 -- # uname -s 00:18:35.306 11:42:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:35.306 11:42:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:35.306 11:42:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:35.306 11:42:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:35.306 11:42:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:35.306 11:42:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:35.306 11:42:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:35.306 11:42:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:35.306 11:42:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:35.306 11:42:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:35.306 11:42:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:35.306 11:42:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:35.306 11:42:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:35.306 11:42:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:35.306 11:42:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:35.306 11:42:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:35.306 11:42:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:35.306 11:42:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:35.306 11:42:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:35.306 11:42:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.306 11:42:04 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.306 11:42:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.306 11:42:04 -- paths/export.sh@5 -- # export PATH 00:18:35.306 11:42:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.306 11:42:04 -- nvmf/common.sh@46 -- # : 0 00:18:35.306 11:42:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:35.306 11:42:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:35.306 11:42:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:35.306 11:42:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:35.306 11:42:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:35.306 11:42:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:35.306 11:42:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:35.306 11:42:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:35.306 11:42:04 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:35.306 11:42:04 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:35.306 11:42:04 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:18:35.306 11:42:04 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:18:35.306 11:42:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:35.306 11:42:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:35.306 11:42:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:35.306 11:42:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:35.306 11:42:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.306 11:42:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:35.306 11:42:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.306 11:42:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:35.306 11:42:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:35.306 11:42:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:35.306 11:42:04 -- common/autotest_common.sh@10 -- # set +x 00:18:43.433 11:42:12 -- nvmf/common.sh@288 -- # local intel=0x8086 
mellanox=0x15b3 pci 00:18:43.433 11:42:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:43.433 11:42:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:43.433 11:42:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:43.433 11:42:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:43.433 11:42:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:43.433 11:42:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:43.434 11:42:12 -- nvmf/common.sh@294 -- # net_devs=() 00:18:43.434 11:42:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:43.434 11:42:12 -- nvmf/common.sh@295 -- # e810=() 00:18:43.434 11:42:12 -- nvmf/common.sh@295 -- # local -ga e810 00:18:43.434 11:42:12 -- nvmf/common.sh@296 -- # x722=() 00:18:43.434 11:42:12 -- nvmf/common.sh@296 -- # local -ga x722 00:18:43.434 11:42:12 -- nvmf/common.sh@297 -- # mlx=() 00:18:43.434 11:42:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:43.434 11:42:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:43.434 11:42:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:43.434 11:42:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:43.434 11:42:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:43.434 11:42:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:43.434 11:42:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:43.434 11:42:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:43.434 11:42:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:43.434 11:42:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:43.434 11:42:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:43.434 11:42:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:43.434 11:42:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:43.434 11:42:12 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:18:43.434 11:42:12 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:18:43.434 11:42:12 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:18:43.434 11:42:12 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:18:43.434 11:42:12 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:18:43.434 11:42:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:43.434 11:42:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:43.434 11:42:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:43.434 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:43.434 11:42:12 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:43.434 11:42:12 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:43.434 11:42:12 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:43.434 11:42:12 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:43.434 11:42:12 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:43.434 11:42:12 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:43.434 11:42:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:43.434 11:42:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:43.434 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:43.434 11:42:12 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:43.434 11:42:12 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:43.434 11:42:12 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 
00:18:43.434 11:42:12 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:43.434 11:42:12 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:43.434 11:42:12 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:43.434 11:42:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:43.434 11:42:12 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:18:43.434 11:42:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:43.434 11:42:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:43.434 11:42:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:43.434 11:42:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:43.434 11:42:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:43.434 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:43.434 11:42:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:43.434 11:42:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:43.434 11:42:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:43.434 11:42:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:43.434 11:42:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:43.434 11:42:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:43.434 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:43.434 11:42:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:43.434 11:42:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:43.434 11:42:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:43.434 11:42:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:43.434 11:42:12 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:18:43.434 11:42:12 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:18:43.434 11:42:12 -- nvmf/common.sh@408 -- # rdma_device_init 00:18:43.434 11:42:12 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:18:43.434 11:42:12 -- nvmf/common.sh@57 -- # uname 00:18:43.434 11:42:12 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:18:43.434 11:42:12 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:18:43.434 11:42:12 -- nvmf/common.sh@62 -- # modprobe ib_core 00:18:43.434 11:42:12 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:18:43.434 11:42:12 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:18:43.434 11:42:12 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:18:43.434 11:42:12 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:18:43.434 11:42:12 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:18:43.434 11:42:12 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:18:43.434 11:42:12 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:43.434 11:42:12 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:18:43.434 11:42:12 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:43.434 11:42:12 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:43.434 11:42:12 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:43.434 11:42:12 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:43.434 11:42:12 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:43.434 11:42:12 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:43.434 11:42:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:43.434 11:42:12 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:43.434 11:42:12 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:43.434 11:42:12 -- nvmf/common.sh@104 -- # continue 2 00:18:43.434 11:42:12 
-- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:43.434 11:42:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:43.434 11:42:12 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:43.434 11:42:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:43.434 11:42:12 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:43.434 11:42:12 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:43.434 11:42:12 -- nvmf/common.sh@104 -- # continue 2 00:18:43.434 11:42:12 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:43.434 11:42:12 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:18:43.434 11:42:12 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:43.434 11:42:12 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:43.434 11:42:12 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:43.434 11:42:12 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:43.434 11:42:12 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:18:43.434 11:42:12 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:18:43.434 11:42:12 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:18:43.434 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:43.434 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:43.434 altname enp217s0f0np0 00:18:43.434 altname ens818f0np0 00:18:43.434 inet 192.168.100.8/24 scope global mlx_0_0 00:18:43.434 valid_lft forever preferred_lft forever 00:18:43.434 11:42:12 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:43.434 11:42:12 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:18:43.434 11:42:12 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:43.434 11:42:12 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:43.434 11:42:12 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:43.434 11:42:12 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:43.434 11:42:12 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:18:43.434 11:42:12 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:18:43.434 11:42:12 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:18:43.434 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:43.434 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:43.434 altname enp217s0f1np1 00:18:43.434 altname ens818f1np1 00:18:43.434 inet 192.168.100.9/24 scope global mlx_0_1 00:18:43.434 valid_lft forever preferred_lft forever 00:18:43.434 11:42:12 -- nvmf/common.sh@410 -- # return 0 00:18:43.434 11:42:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:43.434 11:42:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:43.434 11:42:12 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:18:43.434 11:42:12 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:18:43.434 11:42:12 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:18:43.434 11:42:12 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:43.434 11:42:12 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:43.434 11:42:12 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:43.434 11:42:12 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:43.691 11:42:12 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:43.691 11:42:12 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:43.691 11:42:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:43.691 11:42:12 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:43.692 11:42:12 -- nvmf/common.sh@103 -- # echo 
mlx_0_0 00:18:43.692 11:42:12 -- nvmf/common.sh@104 -- # continue 2 00:18:43.692 11:42:12 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:43.692 11:42:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:43.692 11:42:12 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:43.692 11:42:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:43.692 11:42:12 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:43.692 11:42:12 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:43.692 11:42:12 -- nvmf/common.sh@104 -- # continue 2 00:18:43.692 11:42:12 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:43.692 11:42:12 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:18:43.692 11:42:12 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:43.692 11:42:12 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:43.692 11:42:12 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:43.692 11:42:12 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:43.692 11:42:12 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:43.692 11:42:12 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:18:43.692 11:42:12 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:43.692 11:42:12 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:43.692 11:42:12 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:43.692 11:42:12 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:43.692 11:42:12 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:18:43.692 192.168.100.9' 00:18:43.692 11:42:12 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:18:43.692 192.168.100.9' 00:18:43.692 11:42:12 -- nvmf/common.sh@445 -- # head -n 1 00:18:43.692 11:42:12 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:43.692 11:42:12 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:43.692 192.168.100.9' 00:18:43.692 11:42:12 -- nvmf/common.sh@446 -- # tail -n +2 00:18:43.692 11:42:12 -- nvmf/common.sh@446 -- # head -n 1 00:18:43.692 11:42:12 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:43.692 11:42:12 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:18:43.692 11:42:12 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:43.692 11:42:12 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:18:43.692 11:42:12 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:18:43.692 11:42:12 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:18:43.692 11:42:12 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:43.692 11:42:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:43.692 11:42:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:43.692 11:42:12 -- common/autotest_common.sh@10 -- # set +x 00:18:43.692 11:42:12 -- nvmf/common.sh@469 -- # nvmfpid=2362658 00:18:43.692 11:42:12 -- nvmf/common.sh@470 -- # waitforlisten 2362658 00:18:43.692 11:42:12 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:43.692 11:42:12 -- common/autotest_common.sh@819 -- # '[' -z 2362658 ']' 00:18:43.692 11:42:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.692 11:42:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:43.692 11:42:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:43.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.692 11:42:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:43.692 11:42:12 -- common/autotest_common.sh@10 -- # set +x 00:18:43.692 [2024-07-21 11:42:12.992777] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:43.692 [2024-07-21 11:42:12.992827] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:43.692 EAL: No free 2048 kB hugepages reported on node 1 00:18:43.692 [2024-07-21 11:42:13.079837] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:43.948 [2024-07-21 11:42:13.119657] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:43.948 [2024-07-21 11:42:13.119764] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:43.948 [2024-07-21 11:42:13.119773] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:43.948 [2024-07-21 11:42:13.119783] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:43.948 [2024-07-21 11:42:13.119832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:43.948 [2024-07-21 11:42:13.119937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:43.948 [2024-07-21 11:42:13.120023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:43.948 [2024-07-21 11:42:13.120024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.513 11:42:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:44.513 11:42:13 -- common/autotest_common.sh@852 -- # return 0 00:18:44.513 11:42:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:44.513 11:42:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:44.513 11:42:13 -- common/autotest_common.sh@10 -- # set +x 00:18:44.513 11:42:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:44.513 11:42:13 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:18:44.513 11:42:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:44.513 11:42:13 -- common/autotest_common.sh@10 -- # set +x 00:18:44.513 11:42:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:44.513 11:42:13 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:18:44.513 11:42:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:44.513 11:42:13 -- common/autotest_common.sh@10 -- # set +x 00:18:44.513 11:42:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:44.513 11:42:13 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:44.513 11:42:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:44.513 11:42:13 -- common/autotest_common.sh@10 -- # set +x 00:18:44.513 [2024-07-21 11:42:13.931265] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2006410/0x200a900) succeed. 00:18:44.770 [2024-07-21 11:42:13.941284] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2007a00/0x204bf90) succeed. 
00:18:44.770 11:42:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:44.770 11:42:14 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:44.770 11:42:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:44.770 11:42:14 -- common/autotest_common.sh@10 -- # set +x 00:18:44.770 Malloc0 00:18:44.770 11:42:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:44.770 11:42:14 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:44.770 11:42:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:44.770 11:42:14 -- common/autotest_common.sh@10 -- # set +x 00:18:44.770 11:42:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:44.770 11:42:14 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:44.770 11:42:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:44.770 11:42:14 -- common/autotest_common.sh@10 -- # set +x 00:18:44.770 11:42:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:44.770 11:42:14 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:44.770 11:42:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:44.770 11:42:14 -- common/autotest_common.sh@10 -- # set +x 00:18:44.770 [2024-07-21 11:42:14.115403] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:44.770 11:42:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:44.770 11:42:14 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2362762 00:18:44.770 11:42:14 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:18:44.770 11:42:14 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:18:44.770 11:42:14 -- target/bdev_io_wait.sh@30 -- # READ_PID=2362765 00:18:44.770 11:42:14 -- nvmf/common.sh@520 -- # config=() 00:18:44.770 11:42:14 -- nvmf/common.sh@520 -- # local subsystem config 00:18:44.770 11:42:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:44.770 11:42:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:44.770 { 00:18:44.770 "params": { 00:18:44.770 "name": "Nvme$subsystem", 00:18:44.770 "trtype": "$TEST_TRANSPORT", 00:18:44.770 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:44.770 "adrfam": "ipv4", 00:18:44.770 "trsvcid": "$NVMF_PORT", 00:18:44.770 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:44.770 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:44.770 "hdgst": ${hdgst:-false}, 00:18:44.770 "ddgst": ${ddgst:-false} 00:18:44.770 }, 00:18:44.770 "method": "bdev_nvme_attach_controller" 00:18:44.770 } 00:18:44.770 EOF 00:18:44.770 )") 00:18:44.770 11:42:14 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:18:44.770 11:42:14 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2362768 00:18:44.770 11:42:14 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:18:44.770 11:42:14 -- nvmf/common.sh@520 -- # config=() 00:18:44.770 11:42:14 -- nvmf/common.sh@520 -- # local subsystem config 00:18:44.770 11:42:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:44.770 11:42:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:44.770 { 00:18:44.770 "params": { 00:18:44.770 "name": 
"Nvme$subsystem", 00:18:44.770 "trtype": "$TEST_TRANSPORT", 00:18:44.770 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:44.770 "adrfam": "ipv4", 00:18:44.770 "trsvcid": "$NVMF_PORT", 00:18:44.770 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:44.770 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:44.770 "hdgst": ${hdgst:-false}, 00:18:44.770 "ddgst": ${ddgst:-false} 00:18:44.770 }, 00:18:44.770 "method": "bdev_nvme_attach_controller" 00:18:44.770 } 00:18:44.770 EOF 00:18:44.770 )") 00:18:44.770 11:42:14 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:18:44.770 11:42:14 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2362772 00:18:44.770 11:42:14 -- nvmf/common.sh@542 -- # cat 00:18:44.770 11:42:14 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:18:44.770 11:42:14 -- target/bdev_io_wait.sh@35 -- # sync 00:18:44.770 11:42:14 -- nvmf/common.sh@520 -- # config=() 00:18:44.770 11:42:14 -- nvmf/common.sh@520 -- # local subsystem config 00:18:44.770 11:42:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:44.770 11:42:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:44.770 { 00:18:44.770 "params": { 00:18:44.770 "name": "Nvme$subsystem", 00:18:44.770 "trtype": "$TEST_TRANSPORT", 00:18:44.770 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:44.770 "adrfam": "ipv4", 00:18:44.770 "trsvcid": "$NVMF_PORT", 00:18:44.770 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:44.770 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:44.770 "hdgst": ${hdgst:-false}, 00:18:44.770 "ddgst": ${ddgst:-false} 00:18:44.770 }, 00:18:44.770 "method": "bdev_nvme_attach_controller" 00:18:44.770 } 00:18:44.770 EOF 00:18:44.770 )") 00:18:44.770 11:42:14 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:18:44.770 11:42:14 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:18:44.770 11:42:14 -- nvmf/common.sh@520 -- # config=() 00:18:44.770 11:42:14 -- nvmf/common.sh@542 -- # cat 00:18:44.770 11:42:14 -- nvmf/common.sh@520 -- # local subsystem config 00:18:44.770 11:42:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:44.770 11:42:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:44.770 { 00:18:44.770 "params": { 00:18:44.770 "name": "Nvme$subsystem", 00:18:44.770 "trtype": "$TEST_TRANSPORT", 00:18:44.770 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:44.770 "adrfam": "ipv4", 00:18:44.770 "trsvcid": "$NVMF_PORT", 00:18:44.770 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:44.770 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:44.770 "hdgst": ${hdgst:-false}, 00:18:44.770 "ddgst": ${ddgst:-false} 00:18:44.770 }, 00:18:44.770 "method": "bdev_nvme_attach_controller" 00:18:44.770 } 00:18:44.770 EOF 00:18:44.770 )") 00:18:44.770 11:42:14 -- nvmf/common.sh@542 -- # cat 00:18:44.770 11:42:14 -- target/bdev_io_wait.sh@37 -- # wait 2362762 00:18:44.770 11:42:14 -- nvmf/common.sh@542 -- # cat 00:18:44.770 11:42:14 -- nvmf/common.sh@544 -- # jq . 00:18:44.770 11:42:14 -- nvmf/common.sh@544 -- # jq . 00:18:44.770 11:42:14 -- nvmf/common.sh@544 -- # jq . 
00:18:44.770 11:42:14 -- nvmf/common.sh@545 -- # IFS=, 00:18:44.770 11:42:14 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:44.770 "params": { 00:18:44.770 "name": "Nvme1", 00:18:44.770 "trtype": "rdma", 00:18:44.770 "traddr": "192.168.100.8", 00:18:44.770 "adrfam": "ipv4", 00:18:44.771 "trsvcid": "4420", 00:18:44.771 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.771 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:44.771 "hdgst": false, 00:18:44.771 "ddgst": false 00:18:44.771 }, 00:18:44.771 "method": "bdev_nvme_attach_controller" 00:18:44.771 }' 00:18:44.771 11:42:14 -- nvmf/common.sh@544 -- # jq . 00:18:44.771 11:42:14 -- nvmf/common.sh@545 -- # IFS=, 00:18:44.771 11:42:14 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:44.771 "params": { 00:18:44.771 "name": "Nvme1", 00:18:44.771 "trtype": "rdma", 00:18:44.771 "traddr": "192.168.100.8", 00:18:44.771 "adrfam": "ipv4", 00:18:44.771 "trsvcid": "4420", 00:18:44.771 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.771 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:44.771 "hdgst": false, 00:18:44.771 "ddgst": false 00:18:44.771 }, 00:18:44.771 "method": "bdev_nvme_attach_controller" 00:18:44.771 }' 00:18:44.771 11:42:14 -- nvmf/common.sh@545 -- # IFS=, 00:18:44.771 11:42:14 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:44.771 "params": { 00:18:44.771 "name": "Nvme1", 00:18:44.771 "trtype": "rdma", 00:18:44.771 "traddr": "192.168.100.8", 00:18:44.771 "adrfam": "ipv4", 00:18:44.771 "trsvcid": "4420", 00:18:44.771 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.771 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:44.771 "hdgst": false, 00:18:44.771 "ddgst": false 00:18:44.771 }, 00:18:44.771 "method": "bdev_nvme_attach_controller" 00:18:44.771 }' 00:18:44.771 11:42:14 -- nvmf/common.sh@545 -- # IFS=, 00:18:44.771 11:42:14 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:44.771 "params": { 00:18:44.771 "name": "Nvme1", 00:18:44.771 "trtype": "rdma", 00:18:44.771 "traddr": "192.168.100.8", 00:18:44.771 "adrfam": "ipv4", 00:18:44.771 "trsvcid": "4420", 00:18:44.771 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.771 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:44.771 "hdgst": false, 00:18:44.771 "ddgst": false 00:18:44.771 }, 00:18:44.771 "method": "bdev_nvme_attach_controller" 00:18:44.771 }' 00:18:44.771 [2024-07-21 11:42:14.163452] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:44.771 [2024-07-21 11:42:14.163511] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:44.771 [2024-07-21 11:42:14.165777] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:44.771 [2024-07-21 11:42:14.165828] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:18:44.771 [2024-07-21 11:42:14.168038] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:18:44.771 [2024-07-21 11:42:14.168088] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:18:44.771 [2024-07-21 11:42:14.168619] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:44.771 [2024-07-21 11:42:14.168678] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:18:45.028 EAL: No free 2048 kB hugepages reported on node 1 00:18:45.028 EAL: No free 2048 kB hugepages reported on node 1 00:18:45.028 [2024-07-21 11:42:14.371086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.028 [2024-07-21 11:42:14.394482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:45.028 EAL: No free 2048 kB hugepages reported on node 1 00:18:45.286 [2024-07-21 11:42:14.472417] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.286 EAL: No free 2048 kB hugepages reported on node 1 00:18:45.286 [2024-07-21 11:42:14.499777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:45.286 [2024-07-21 11:42:14.533705] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.286 [2024-07-21 11:42:14.555154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:45.286 [2024-07-21 11:42:14.633900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.286 [2024-07-21 11:42:14.665293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:18:45.544 Running I/O for 1 seconds... 00:18:45.544 Running I/O for 1 seconds... 00:18:45.544 Running I/O for 1 seconds... 00:18:45.544 Running I/O for 1 seconds... 
00:18:46.474 00:18:46.474 Latency(us) 00:18:46.474 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.474 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:18:46.474 Nvme1n1 : 1.00 266509.37 1041.05 0.00 0.00 478.97 189.24 1966.08 00:18:46.474 =================================================================================================================== 00:18:46.474 Total : 266509.37 1041.05 0.00 0.00 478.97 189.24 1966.08 00:18:46.474 00:18:46.474 Latency(us) 00:18:46.474 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.474 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:18:46.474 Nvme1n1 : 1.01 18409.16 71.91 0.00 0.00 6932.59 3853.52 13526.63 00:18:46.474 =================================================================================================================== 00:18:46.474 Total : 18409.16 71.91 0.00 0.00 6932.59 3853.52 13526.63 00:18:46.474 00:18:46.474 Latency(us) 00:18:46.474 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.474 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:18:46.474 Nvme1n1 : 1.00 17481.48 68.29 0.00 0.00 7301.97 4639.95 19084.08 00:18:46.474 =================================================================================================================== 00:18:46.474 Total : 17481.48 68.29 0.00 0.00 7301.97 4639.95 19084.08 00:18:46.474 00:18:46.474 Latency(us) 00:18:46.474 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.474 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:18:46.474 Nvme1n1 : 1.00 14824.80 57.91 0.00 0.00 8610.90 4823.45 19922.94 00:18:46.474 =================================================================================================================== 00:18:46.474 Total : 14824.80 57.91 0.00 0.00 8610.90 4823.45 19922.94 00:18:46.732 11:42:16 -- target/bdev_io_wait.sh@38 -- # wait 2362765 00:18:46.732 11:42:16 -- target/bdev_io_wait.sh@39 -- # wait 2362768 00:18:46.732 11:42:16 -- target/bdev_io_wait.sh@40 -- # wait 2362772 00:18:46.732 11:42:16 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:46.732 11:42:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:46.732 11:42:16 -- common/autotest_common.sh@10 -- # set +x 00:18:46.989 11:42:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:46.989 11:42:16 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:18:46.989 11:42:16 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:18:46.989 11:42:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:46.989 11:42:16 -- nvmf/common.sh@116 -- # sync 00:18:46.989 11:42:16 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:18:46.989 11:42:16 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:18:46.989 11:42:16 -- nvmf/common.sh@119 -- # set +e 00:18:46.989 11:42:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:46.989 11:42:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:18:46.989 rmmod nvme_rdma 00:18:46.989 rmmod nvme_fabrics 00:18:46.989 11:42:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:46.989 11:42:16 -- nvmf/common.sh@123 -- # set -e 00:18:46.989 11:42:16 -- nvmf/common.sh@124 -- # return 0 00:18:46.989 11:42:16 -- nvmf/common.sh@477 -- # '[' -n 2362658 ']' 00:18:46.989 11:42:16 -- nvmf/common.sh@478 -- # killprocess 2362658 00:18:46.989 11:42:16 -- common/autotest_common.sh@926 -- # '[' -z 2362658 ']' 
00:18:46.989 11:42:16 -- common/autotest_common.sh@930 -- # kill -0 2362658 00:18:46.989 11:42:16 -- common/autotest_common.sh@931 -- # uname 00:18:46.989 11:42:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:46.989 11:42:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2362658 00:18:46.989 11:42:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:46.989 11:42:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:46.989 11:42:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2362658' 00:18:46.989 killing process with pid 2362658 00:18:46.989 11:42:16 -- common/autotest_common.sh@945 -- # kill 2362658 00:18:46.989 11:42:16 -- common/autotest_common.sh@950 -- # wait 2362658 00:18:47.247 11:42:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:47.247 11:42:16 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:18:47.247 00:18:47.247 real 0m12.002s 00:18:47.247 user 0m21.227s 00:18:47.247 sys 0m7.771s 00:18:47.247 11:42:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:47.247 11:42:16 -- common/autotest_common.sh@10 -- # set +x 00:18:47.247 ************************************ 00:18:47.247 END TEST nvmf_bdev_io_wait 00:18:47.247 ************************************ 00:18:47.247 11:42:16 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:18:47.247 11:42:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:47.247 11:42:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:47.247 11:42:16 -- common/autotest_common.sh@10 -- # set +x 00:18:47.247 ************************************ 00:18:47.247 START TEST nvmf_queue_depth 00:18:47.247 ************************************ 00:18:47.247 11:42:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:18:47.247 * Looking for test storage... 
00:18:47.247 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:47.247 11:42:16 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:47.247 11:42:16 -- nvmf/common.sh@7 -- # uname -s 00:18:47.247 11:42:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:47.505 11:42:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:47.505 11:42:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:47.505 11:42:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:47.505 11:42:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:47.505 11:42:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:47.505 11:42:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:47.505 11:42:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:47.505 11:42:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:47.506 11:42:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:47.506 11:42:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:47.506 11:42:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:47.506 11:42:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:47.506 11:42:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:47.506 11:42:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:47.506 11:42:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:47.506 11:42:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:47.506 11:42:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:47.506 11:42:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:47.506 11:42:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.506 11:42:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.506 11:42:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.506 11:42:16 -- paths/export.sh@5 -- # export PATH 00:18:47.506 11:42:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.506 11:42:16 -- nvmf/common.sh@46 -- # : 0 00:18:47.506 11:42:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:47.506 11:42:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:47.506 11:42:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:47.506 11:42:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:47.506 11:42:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:47.506 11:42:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:47.506 11:42:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:47.506 11:42:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:47.506 11:42:16 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:47.506 11:42:16 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:47.506 11:42:16 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:47.506 11:42:16 -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:47.506 11:42:16 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:18:47.506 11:42:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:47.506 11:42:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:47.506 11:42:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:47.506 11:42:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:47.506 11:42:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.506 11:42:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:47.506 11:42:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.506 11:42:16 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:47.506 11:42:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:47.506 11:42:16 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:47.506 11:42:16 -- common/autotest_common.sh@10 -- # set +x 00:18:55.618 11:42:24 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:55.618 11:42:24 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:55.618 11:42:24 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:55.618 11:42:24 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:55.618 11:42:24 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:55.618 11:42:24 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:55.618 11:42:24 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:55.618 11:42:24 -- nvmf/common.sh@294 -- # net_devs=() 
00:18:55.618 11:42:24 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:55.618 11:42:24 -- nvmf/common.sh@295 -- # e810=() 00:18:55.618 11:42:24 -- nvmf/common.sh@295 -- # local -ga e810 00:18:55.618 11:42:24 -- nvmf/common.sh@296 -- # x722=() 00:18:55.618 11:42:24 -- nvmf/common.sh@296 -- # local -ga x722 00:18:55.618 11:42:24 -- nvmf/common.sh@297 -- # mlx=() 00:18:55.618 11:42:24 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:55.618 11:42:24 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:55.618 11:42:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:55.618 11:42:24 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:55.618 11:42:24 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:55.618 11:42:24 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:55.618 11:42:24 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:55.618 11:42:24 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:55.618 11:42:24 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:55.618 11:42:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:55.618 11:42:24 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:55.618 11:42:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:55.618 11:42:24 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:55.618 11:42:24 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:18:55.618 11:42:24 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:18:55.618 11:42:24 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:18:55.618 11:42:24 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:18:55.618 11:42:24 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:18:55.618 11:42:24 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:55.618 11:42:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:55.618 11:42:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:55.618 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:55.618 11:42:24 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:55.618 11:42:24 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:55.618 11:42:24 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:55.618 11:42:24 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:55.618 11:42:24 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:55.618 11:42:24 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:55.618 11:42:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:55.618 11:42:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:55.618 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:55.618 11:42:24 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:55.618 11:42:24 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:55.618 11:42:24 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:55.618 11:42:24 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:55.618 11:42:24 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:55.618 11:42:24 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:55.618 11:42:24 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:55.618 11:42:24 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:18:55.618 11:42:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:55.618 11:42:24 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:55.618 11:42:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:55.618 11:42:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:55.618 11:42:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:55.618 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:55.618 11:42:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:55.618 11:42:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:55.618 11:42:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:55.618 11:42:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:55.618 11:42:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:55.618 11:42:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:55.618 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:55.618 11:42:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:55.618 11:42:24 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:55.618 11:42:24 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:55.618 11:42:24 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:55.618 11:42:24 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:18:55.618 11:42:24 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:18:55.618 11:42:24 -- nvmf/common.sh@408 -- # rdma_device_init 00:18:55.618 11:42:24 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:18:55.618 11:42:24 -- nvmf/common.sh@57 -- # uname 00:18:55.618 11:42:24 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:18:55.618 11:42:24 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:18:55.618 11:42:24 -- nvmf/common.sh@62 -- # modprobe ib_core 00:18:55.618 11:42:24 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:18:55.618 11:42:24 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:18:55.618 11:42:24 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:18:55.618 11:42:24 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:18:55.618 11:42:24 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:18:55.618 11:42:24 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:18:55.618 11:42:24 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:55.618 11:42:24 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:18:55.618 11:42:24 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:55.618 11:42:24 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:55.618 11:42:24 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:55.618 11:42:24 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:55.618 11:42:24 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:55.618 11:42:24 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:55.618 11:42:24 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:55.618 11:42:24 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:55.618 11:42:24 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:55.618 11:42:24 -- nvmf/common.sh@104 -- # continue 2 00:18:55.618 11:42:24 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:55.618 11:42:24 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:55.618 11:42:24 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:55.618 11:42:24 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:55.618 11:42:24 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:55.618 11:42:24 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:55.618 11:42:24 -- 
nvmf/common.sh@104 -- # continue 2 00:18:55.618 11:42:24 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:55.618 11:42:24 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:18:55.618 11:42:24 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:55.618 11:42:24 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:55.618 11:42:24 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:55.618 11:42:24 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:55.618 11:42:24 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:18:55.618 11:42:24 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:18:55.618 11:42:24 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:18:55.618 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:55.618 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:55.618 altname enp217s0f0np0 00:18:55.618 altname ens818f0np0 00:18:55.618 inet 192.168.100.8/24 scope global mlx_0_0 00:18:55.618 valid_lft forever preferred_lft forever 00:18:55.618 11:42:24 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:55.618 11:42:24 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:18:55.618 11:42:24 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:55.618 11:42:24 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:55.618 11:42:24 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:55.618 11:42:24 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:55.618 11:42:24 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:18:55.618 11:42:24 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:18:55.618 11:42:24 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:18:55.618 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:55.618 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:55.618 altname enp217s0f1np1 00:18:55.618 altname ens818f1np1 00:18:55.618 inet 192.168.100.9/24 scope global mlx_0_1 00:18:55.618 valid_lft forever preferred_lft forever 00:18:55.618 11:42:24 -- nvmf/common.sh@410 -- # return 0 00:18:55.618 11:42:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:55.618 11:42:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:55.618 11:42:24 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:18:55.618 11:42:24 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:18:55.618 11:42:24 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:18:55.618 11:42:24 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:55.618 11:42:24 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:55.618 11:42:24 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:55.618 11:42:24 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:55.618 11:42:24 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:55.618 11:42:24 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:55.618 11:42:24 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:55.618 11:42:24 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:55.618 11:42:24 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:55.618 11:42:24 -- nvmf/common.sh@104 -- # continue 2 00:18:55.618 11:42:24 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:55.618 11:42:24 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:55.618 11:42:24 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:55.618 11:42:24 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:55.618 11:42:24 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
00:18:55.618 11:42:24 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:55.618 11:42:24 -- nvmf/common.sh@104 -- # continue 2 00:18:55.618 11:42:24 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:55.618 11:42:24 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:18:55.618 11:42:24 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:55.618 11:42:24 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:55.618 11:42:24 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:55.618 11:42:24 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:55.619 11:42:24 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:55.619 11:42:24 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:18:55.619 11:42:24 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:55.619 11:42:24 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:55.619 11:42:24 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:55.619 11:42:24 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:55.619 11:42:24 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:18:55.619 192.168.100.9' 00:18:55.619 11:42:24 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:18:55.619 192.168.100.9' 00:18:55.619 11:42:24 -- nvmf/common.sh@445 -- # head -n 1 00:18:55.619 11:42:25 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:55.619 11:42:25 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:55.619 192.168.100.9' 00:18:55.619 11:42:25 -- nvmf/common.sh@446 -- # tail -n +2 00:18:55.619 11:42:25 -- nvmf/common.sh@446 -- # head -n 1 00:18:55.619 11:42:25 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:55.619 11:42:25 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:18:55.619 11:42:25 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:55.619 11:42:25 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:18:55.619 11:42:25 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:18:55.619 11:42:25 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:18:55.876 11:42:25 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:55.876 11:42:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:55.876 11:42:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:55.876 11:42:25 -- common/autotest_common.sh@10 -- # set +x 00:18:55.876 11:42:25 -- nvmf/common.sh@469 -- # nvmfpid=2367204 00:18:55.876 11:42:25 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:55.876 11:42:25 -- nvmf/common.sh@470 -- # waitforlisten 2367204 00:18:55.876 11:42:25 -- common/autotest_common.sh@819 -- # '[' -z 2367204 ']' 00:18:55.876 11:42:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.876 11:42:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:55.876 11:42:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.876 11:42:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:55.876 11:42:25 -- common/autotest_common.sh@10 -- # set +x 00:18:55.877 [2024-07-21 11:42:25.094014] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:18:55.877 [2024-07-21 11:42:25.094065] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:55.877 EAL: No free 2048 kB hugepages reported on node 1 00:18:55.877 [2024-07-21 11:42:25.178611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.877 [2024-07-21 11:42:25.214460] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:55.877 [2024-07-21 11:42:25.214591] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:55.877 [2024-07-21 11:42:25.214601] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:55.877 [2024-07-21 11:42:25.214611] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:55.877 [2024-07-21 11:42:25.214646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.808 11:42:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:56.808 11:42:25 -- common/autotest_common.sh@852 -- # return 0 00:18:56.808 11:42:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:56.808 11:42:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:56.808 11:42:25 -- common/autotest_common.sh@10 -- # set +x 00:18:56.808 11:42:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:56.808 11:42:25 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:56.808 11:42:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:56.808 11:42:25 -- common/autotest_common.sh@10 -- # set +x 00:18:56.808 [2024-07-21 11:42:25.949300] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1717620/0x171bb10) succeed. 00:18:56.808 [2024-07-21 11:42:25.957770] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1718b20/0x175d1a0) succeed. 
00:18:56.808 11:42:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:56.808 11:42:26 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:56.808 11:42:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:56.808 11:42:26 -- common/autotest_common.sh@10 -- # set +x 00:18:56.808 Malloc0 00:18:56.808 11:42:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:56.808 11:42:26 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:56.808 11:42:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:56.808 11:42:26 -- common/autotest_common.sh@10 -- # set +x 00:18:56.808 11:42:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:56.808 11:42:26 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:56.808 11:42:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:56.808 11:42:26 -- common/autotest_common.sh@10 -- # set +x 00:18:56.808 11:42:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:56.808 11:42:26 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:56.808 11:42:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:56.808 11:42:26 -- common/autotest_common.sh@10 -- # set +x 00:18:56.808 [2024-07-21 11:42:26.042868] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:56.808 11:42:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:56.808 11:42:26 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:56.808 11:42:26 -- target/queue_depth.sh@30 -- # bdevperf_pid=2367465 00:18:56.808 11:42:26 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:56.808 11:42:26 -- target/queue_depth.sh@33 -- # waitforlisten 2367465 /var/tmp/bdevperf.sock 00:18:56.808 11:42:26 -- common/autotest_common.sh@819 -- # '[' -z 2367465 ']' 00:18:56.808 11:42:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:56.808 11:42:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:56.808 11:42:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:56.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:56.808 11:42:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:56.808 11:42:26 -- common/autotest_common.sh@10 -- # set +x 00:18:56.808 [2024-07-21 11:42:26.076460] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:18:56.808 [2024-07-21 11:42:26.076508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2367465 ] 00:18:56.808 EAL: No free 2048 kB hugepages reported on node 1 00:18:56.808 [2024-07-21 11:42:26.155474] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.808 [2024-07-21 11:42:26.192043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.739 11:42:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:57.739 11:42:26 -- common/autotest_common.sh@852 -- # return 0 00:18:57.739 11:42:26 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:57.739 11:42:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:57.739 11:42:26 -- common/autotest_common.sh@10 -- # set +x 00:18:57.739 NVMe0n1 00:18:57.739 11:42:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:57.739 11:42:26 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:57.739 Running I/O for 10 seconds... 00:19:07.707 00:19:07.707 Latency(us) 00:19:07.707 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:07.707 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:19:07.707 Verification LBA range: start 0x0 length 0x4000 00:19:07.707 NVMe0n1 : 10.03 29499.19 115.23 0.00 0.00 34635.40 7759.46 31457.28 00:19:07.707 =================================================================================================================== 00:19:07.707 Total : 29499.19 115.23 0.00 0.00 34635.40 7759.46 31457.28 00:19:07.707 0 00:19:07.707 11:42:37 -- target/queue_depth.sh@39 -- # killprocess 2367465 00:19:07.707 11:42:37 -- common/autotest_common.sh@926 -- # '[' -z 2367465 ']' 00:19:07.707 11:42:37 -- common/autotest_common.sh@930 -- # kill -0 2367465 00:19:07.707 11:42:37 -- common/autotest_common.sh@931 -- # uname 00:19:07.707 11:42:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:07.707 11:42:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2367465 00:19:07.966 11:42:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:07.966 11:42:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:07.966 11:42:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2367465' 00:19:07.966 killing process with pid 2367465 00:19:07.966 11:42:37 -- common/autotest_common.sh@945 -- # kill 2367465 00:19:07.966 Received shutdown signal, test time was about 10.000000 seconds 00:19:07.966 00:19:07.966 Latency(us) 00:19:07.966 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:07.966 =================================================================================================================== 00:19:07.966 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:07.966 11:42:37 -- common/autotest_common.sh@950 -- # wait 2367465 00:19:07.966 11:42:37 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:19:07.966 11:42:37 -- target/queue_depth.sh@43 -- # nvmftestfini 00:19:07.966 11:42:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:07.966 11:42:37 -- nvmf/common.sh@116 -- # sync 00:19:07.966 11:42:37 -- nvmf/common.sh@118 -- # '[' rdma == tcp 
']' 00:19:07.966 11:42:37 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:07.966 11:42:37 -- nvmf/common.sh@119 -- # set +e 00:19:07.966 11:42:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:07.966 11:42:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:07.966 rmmod nvme_rdma 00:19:07.966 rmmod nvme_fabrics 00:19:07.966 11:42:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:08.224 11:42:37 -- nvmf/common.sh@123 -- # set -e 00:19:08.224 11:42:37 -- nvmf/common.sh@124 -- # return 0 00:19:08.224 11:42:37 -- nvmf/common.sh@477 -- # '[' -n 2367204 ']' 00:19:08.225 11:42:37 -- nvmf/common.sh@478 -- # killprocess 2367204 00:19:08.225 11:42:37 -- common/autotest_common.sh@926 -- # '[' -z 2367204 ']' 00:19:08.225 11:42:37 -- common/autotest_common.sh@930 -- # kill -0 2367204 00:19:08.225 11:42:37 -- common/autotest_common.sh@931 -- # uname 00:19:08.225 11:42:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:08.225 11:42:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2367204 00:19:08.225 11:42:37 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:08.225 11:42:37 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:08.225 11:42:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2367204' 00:19:08.225 killing process with pid 2367204 00:19:08.225 11:42:37 -- common/autotest_common.sh@945 -- # kill 2367204 00:19:08.225 11:42:37 -- common/autotest_common.sh@950 -- # wait 2367204 00:19:08.514 11:42:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:08.514 11:42:37 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:08.514 00:19:08.514 real 0m21.111s 00:19:08.514 user 0m26.433s 00:19:08.514 sys 0m6.988s 00:19:08.514 11:42:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:08.514 11:42:37 -- common/autotest_common.sh@10 -- # set +x 00:19:08.514 ************************************ 00:19:08.514 END TEST nvmf_queue_depth 00:19:08.514 ************************************ 00:19:08.514 11:42:37 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:19:08.514 11:42:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:08.514 11:42:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:08.514 11:42:37 -- common/autotest_common.sh@10 -- # set +x 00:19:08.514 ************************************ 00:19:08.514 START TEST nvmf_multipath 00:19:08.514 ************************************ 00:19:08.514 11:42:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:19:08.514 * Looking for test storage... 
00:19:08.514 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:08.514 11:42:37 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:08.514 11:42:37 -- nvmf/common.sh@7 -- # uname -s 00:19:08.514 11:42:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:08.514 11:42:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:08.514 11:42:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:08.514 11:42:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:08.514 11:42:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:08.514 11:42:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:08.514 11:42:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:08.514 11:42:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:08.514 11:42:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:08.514 11:42:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:08.514 11:42:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:08.514 11:42:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:08.514 11:42:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:08.514 11:42:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:08.514 11:42:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:08.515 11:42:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:08.515 11:42:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:08.515 11:42:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:08.515 11:42:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:08.515 11:42:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.515 11:42:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.515 11:42:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.515 11:42:37 -- paths/export.sh@5 -- # export PATH 00:19:08.515 11:42:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.515 11:42:37 -- nvmf/common.sh@46 -- # : 0 00:19:08.515 11:42:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:08.515 11:42:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:08.515 11:42:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:08.515 11:42:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:08.515 11:42:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:08.515 11:42:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:08.515 11:42:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:08.515 11:42:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:08.515 11:42:37 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:08.515 11:42:37 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:08.515 11:42:37 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:08.515 11:42:37 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:08.515 11:42:37 -- target/multipath.sh@43 -- # nvmftestinit 00:19:08.515 11:42:37 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:08.515 11:42:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:08.515 11:42:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:08.515 11:42:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:08.515 11:42:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:08.515 11:42:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.515 11:42:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:08.515 11:42:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.515 11:42:37 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:08.515 11:42:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:08.515 11:42:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:08.515 11:42:37 -- common/autotest_common.sh@10 -- # set +x 00:19:16.634 11:42:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:16.634 11:42:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:16.634 11:42:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:16.634 11:42:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:16.634 11:42:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:16.634 11:42:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:16.634 11:42:45 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:19:16.634 11:42:45 -- nvmf/common.sh@294 -- # net_devs=() 00:19:16.634 11:42:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:16.634 11:42:45 -- nvmf/common.sh@295 -- # e810=() 00:19:16.634 11:42:45 -- nvmf/common.sh@295 -- # local -ga e810 00:19:16.634 11:42:45 -- nvmf/common.sh@296 -- # x722=() 00:19:16.634 11:42:45 -- nvmf/common.sh@296 -- # local -ga x722 00:19:16.634 11:42:45 -- nvmf/common.sh@297 -- # mlx=() 00:19:16.634 11:42:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:16.634 11:42:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:16.634 11:42:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:16.634 11:42:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:16.634 11:42:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:16.634 11:42:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:16.634 11:42:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:16.634 11:42:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:16.634 11:42:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:16.634 11:42:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:16.634 11:42:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:16.634 11:42:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:16.634 11:42:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:16.634 11:42:45 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:16.634 11:42:45 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:16.634 11:42:45 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:16.634 11:42:45 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:16.634 11:42:45 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:16.634 11:42:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:16.634 11:42:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:16.634 11:42:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:16.634 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:16.634 11:42:45 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:16.634 11:42:45 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:16.634 11:42:45 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:16.634 11:42:45 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:16.634 11:42:45 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:16.634 11:42:45 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:16.634 11:42:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:16.634 11:42:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:16.634 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:16.634 11:42:45 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:16.634 11:42:45 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:16.634 11:42:45 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:16.634 11:42:45 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:16.634 11:42:45 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:16.634 11:42:45 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:16.634 11:42:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:16.634 11:42:45 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:16.634 11:42:45 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:16.634 11:42:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.634 11:42:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:16.634 11:42:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.634 11:42:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:16.634 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:16.634 11:42:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.634 11:42:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:16.634 11:42:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.634 11:42:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:16.634 11:42:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.634 11:42:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:16.634 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:16.634 11:42:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.634 11:42:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:16.634 11:42:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:16.634 11:42:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:16.634 11:42:45 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:16.634 11:42:45 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:16.634 11:42:45 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:16.634 11:42:45 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:16.634 11:42:45 -- nvmf/common.sh@57 -- # uname 00:19:16.634 11:42:45 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:16.634 11:42:45 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:16.634 11:42:45 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:16.634 11:42:45 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:16.634 11:42:45 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:16.634 11:42:45 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:16.634 11:42:45 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:16.634 11:42:45 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:16.634 11:42:45 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:16.634 11:42:45 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:16.634 11:42:45 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:16.634 11:42:45 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:16.634 11:42:45 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:16.634 11:42:45 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:16.634 11:42:45 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:16.634 11:42:45 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:16.634 11:42:45 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:16.634 11:42:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:16.634 11:42:45 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:16.634 11:42:45 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:16.634 11:42:45 -- nvmf/common.sh@104 -- # continue 2 00:19:16.634 11:42:45 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:16.634 11:42:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:16.634 11:42:45 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:16.634 11:42:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:16.634 11:42:45 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\1 ]] 00:19:16.634 11:42:45 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:16.634 11:42:45 -- nvmf/common.sh@104 -- # continue 2 00:19:16.634 11:42:45 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:16.634 11:42:45 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:16.634 11:42:45 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:16.634 11:42:45 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:16.634 11:42:45 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:16.634 11:42:45 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:16.634 11:42:45 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:16.634 11:42:45 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:16.634 11:42:45 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:16.634 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:16.634 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:16.634 altname enp217s0f0np0 00:19:16.634 altname ens818f0np0 00:19:16.634 inet 192.168.100.8/24 scope global mlx_0_0 00:19:16.634 valid_lft forever preferred_lft forever 00:19:16.634 11:42:45 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:16.634 11:42:45 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:16.634 11:42:45 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:16.634 11:42:45 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:16.634 11:42:45 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:16.634 11:42:45 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:16.634 11:42:45 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:16.634 11:42:45 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:16.634 11:42:45 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:16.634 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:16.635 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:16.635 altname enp217s0f1np1 00:19:16.635 altname ens818f1np1 00:19:16.635 inet 192.168.100.9/24 scope global mlx_0_1 00:19:16.635 valid_lft forever preferred_lft forever 00:19:16.635 11:42:45 -- nvmf/common.sh@410 -- # return 0 00:19:16.635 11:42:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:16.635 11:42:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:16.635 11:42:45 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:16.635 11:42:45 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:16.635 11:42:45 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:16.635 11:42:45 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:16.635 11:42:45 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:16.635 11:42:45 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:16.635 11:42:45 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:16.635 11:42:45 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:16.635 11:42:45 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:16.635 11:42:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:16.635 11:42:45 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:16.635 11:42:45 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:16.635 11:42:45 -- nvmf/common.sh@104 -- # continue 2 00:19:16.635 11:42:45 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:16.635 11:42:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:16.635 11:42:45 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:16.635 11:42:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:19:16.635 11:42:45 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:16.635 11:42:45 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:16.635 11:42:45 -- nvmf/common.sh@104 -- # continue 2 00:19:16.635 11:42:45 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:16.635 11:42:45 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:16.635 11:42:45 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:16.635 11:42:45 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:16.635 11:42:45 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:16.635 11:42:45 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:16.635 11:42:45 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:16.635 11:42:45 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:16.635 11:42:45 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:16.635 11:42:45 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:16.635 11:42:45 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:16.635 11:42:45 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:16.635 11:42:46 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:16.635 192.168.100.9' 00:19:16.635 11:42:46 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:16.635 192.168.100.9' 00:19:16.635 11:42:46 -- nvmf/common.sh@445 -- # head -n 1 00:19:16.635 11:42:46 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:16.635 11:42:46 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:16.635 192.168.100.9' 00:19:16.635 11:42:46 -- nvmf/common.sh@446 -- # tail -n +2 00:19:16.635 11:42:46 -- nvmf/common.sh@446 -- # head -n 1 00:19:16.635 11:42:46 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:16.635 11:42:46 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:16.635 11:42:46 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:16.635 11:42:46 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:16.635 11:42:46 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:16.635 11:42:46 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:16.635 11:42:46 -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:19:16.635 11:42:46 -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:19:16.635 11:42:46 -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:19:16.635 run this test only with TCP transport for now 00:19:16.635 11:42:46 -- target/multipath.sh@53 -- # nvmftestfini 00:19:16.635 11:42:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:16.635 11:42:46 -- nvmf/common.sh@116 -- # sync 00:19:16.635 11:42:46 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:16.635 11:42:46 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:16.635 11:42:46 -- nvmf/common.sh@119 -- # set +e 00:19:16.635 11:42:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:16.635 11:42:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:16.635 rmmod nvme_rdma 00:19:16.894 rmmod nvme_fabrics 00:19:16.894 11:42:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:16.894 11:42:46 -- nvmf/common.sh@123 -- # set -e 00:19:16.894 11:42:46 -- nvmf/common.sh@124 -- # return 0 00:19:16.894 11:42:46 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:19:16.894 11:42:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:16.894 11:42:46 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:16.894 11:42:46 -- target/multipath.sh@54 -- # exit 0 00:19:16.894 11:42:46 -- target/multipath.sh@1 -- # nvmftestfini 00:19:16.894 11:42:46 -- 
nvmf/common.sh@476 -- # nvmfcleanup 00:19:16.894 11:42:46 -- nvmf/common.sh@116 -- # sync 00:19:16.894 11:42:46 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:16.894 11:42:46 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:16.894 11:42:46 -- nvmf/common.sh@119 -- # set +e 00:19:16.894 11:42:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:16.894 11:42:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:16.894 11:42:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:16.894 11:42:46 -- nvmf/common.sh@123 -- # set -e 00:19:16.894 11:42:46 -- nvmf/common.sh@124 -- # return 0 00:19:16.894 11:42:46 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:19:16.894 11:42:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:16.894 11:42:46 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:16.894 00:19:16.894 real 0m8.379s 00:19:16.894 user 0m2.387s 00:19:16.894 sys 0m6.231s 00:19:16.894 11:42:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:16.894 11:42:46 -- common/autotest_common.sh@10 -- # set +x 00:19:16.894 ************************************ 00:19:16.894 END TEST nvmf_multipath 00:19:16.895 ************************************ 00:19:16.895 11:42:46 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:19:16.895 11:42:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:16.895 11:42:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:16.895 11:42:46 -- common/autotest_common.sh@10 -- # set +x 00:19:16.895 ************************************ 00:19:16.895 START TEST nvmf_zcopy 00:19:16.895 ************************************ 00:19:16.895 11:42:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:19:16.895 * Looking for test storage... 
00:19:16.895 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:16.895 11:42:46 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:16.895 11:42:46 -- nvmf/common.sh@7 -- # uname -s 00:19:16.895 11:42:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:16.895 11:42:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:16.895 11:42:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:16.895 11:42:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:16.895 11:42:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:16.895 11:42:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:16.895 11:42:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:16.895 11:42:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:16.895 11:42:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:16.895 11:42:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:16.895 11:42:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:16.895 11:42:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:16.895 11:42:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:16.895 11:42:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:16.895 11:42:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:16.895 11:42:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:16.895 11:42:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:16.895 11:42:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:16.895 11:42:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:16.895 11:42:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.895 11:42:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.895 11:42:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.895 11:42:46 -- paths/export.sh@5 -- # export PATH 00:19:16.895 11:42:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.895 11:42:46 -- nvmf/common.sh@46 -- # : 0 00:19:16.895 11:42:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:16.895 11:42:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:16.895 11:42:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:16.895 11:42:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:16.895 11:42:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:16.895 11:42:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:16.895 11:42:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:16.895 11:42:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:16.895 11:42:46 -- target/zcopy.sh@12 -- # nvmftestinit 00:19:16.895 11:42:46 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:16.895 11:42:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:16.895 11:42:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:16.895 11:42:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:16.895 11:42:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:16.895 11:42:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.895 11:42:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:16.895 11:42:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.895 11:42:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:16.895 11:42:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:16.895 11:42:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:16.895 11:42:46 -- common/autotest_common.sh@10 -- # set +x 00:19:25.011 11:42:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:25.011 11:42:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:25.011 11:42:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:25.011 11:42:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:25.011 11:42:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:25.011 11:42:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:25.011 11:42:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:25.011 11:42:54 -- nvmf/common.sh@294 -- # net_devs=() 00:19:25.011 11:42:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:25.011 11:42:54 -- nvmf/common.sh@295 -- # e810=() 00:19:25.011 11:42:54 -- nvmf/common.sh@295 -- # local -ga e810 00:19:25.011 11:42:54 -- nvmf/common.sh@296 -- # x722=() 
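The gather_supported_nvmf_pci_devs trace that follows groups candidate NICs into per-family arrays keyed by PCI vendor:device ID and, since this is an mlx5 run, keeps only the Mellanox list. A hedged sketch of that shape (pci_bus_cache and the final gate are assumptions inferred from the trace, and only a subset of the device IDs is shown):

```bash
# Sketch of the classification step traced below; pci_bus_cache is assumed
# to map "vendor:device" -> whitespace-separated PCI addresses.
declare -A pci_bus_cache   # populated elsewhere by the harness
intel=0x8086 mellanox=0x15b3

e810=( ${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]} )
x722=( ${pci_bus_cache["$intel:0x37d2"]} )
mlx=( ${pci_bus_cache["$mellanox:0x1015"]} ${pci_bus_cache["$mellanox:0x1017"]} )

pci_devs=( "${e810[@]}" "${x722[@]}" "${mlx[@]}" )
pci_devs=( "${mlx[@]}" )   # mlx5 run: keep only the Mellanox devices
```

In this job both 0000:d9:00.0 and 0000:d9:00.1 report 0x15b3:0x1015, so the per-device loop below finds mlx_0_0 and mlx_0_1 under them.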
00:19:25.011 11:42:54 -- nvmf/common.sh@296 -- # local -ga x722 00:19:25.011 11:42:54 -- nvmf/common.sh@297 -- # mlx=() 00:19:25.011 11:42:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:25.011 11:42:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:25.011 11:42:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:25.011 11:42:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:25.011 11:42:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:25.011 11:42:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:25.011 11:42:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:25.011 11:42:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:25.011 11:42:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:25.011 11:42:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:25.011 11:42:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:25.011 11:42:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:25.011 11:42:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:25.011 11:42:54 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:25.011 11:42:54 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:25.011 11:42:54 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:25.011 11:42:54 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:25.011 11:42:54 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:25.011 11:42:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:25.011 11:42:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:25.011 11:42:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:25.011 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:25.011 11:42:54 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:25.011 11:42:54 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:25.011 11:42:54 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:25.011 11:42:54 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:25.011 11:42:54 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:25.011 11:42:54 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:25.011 11:42:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:25.011 11:42:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:25.011 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:25.011 11:42:54 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:25.011 11:42:54 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:25.011 11:42:54 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:25.011 11:42:54 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:25.011 11:42:54 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:25.011 11:42:54 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:25.011 11:42:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:25.011 11:42:54 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:25.011 11:42:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:25.011 11:42:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:25.011 11:42:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:25.011 11:42:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:25.011 11:42:54 -- nvmf/common.sh@388 -- # 
echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:25.011 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:25.011 11:42:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:25.011 11:42:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:25.011 11:42:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:25.011 11:42:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:25.011 11:42:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:25.011 11:42:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:25.011 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:25.011 11:42:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:25.011 11:42:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:25.011 11:42:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:25.011 11:42:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:25.011 11:42:54 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:25.011 11:42:54 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:25.011 11:42:54 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:25.011 11:42:54 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:25.011 11:42:54 -- nvmf/common.sh@57 -- # uname 00:19:25.011 11:42:54 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:25.011 11:42:54 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:25.011 11:42:54 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:25.011 11:42:54 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:25.011 11:42:54 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:25.011 11:42:54 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:25.011 11:42:54 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:25.011 11:42:54 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:25.011 11:42:54 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:25.011 11:42:54 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:25.011 11:42:54 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:25.011 11:42:54 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:25.011 11:42:54 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:25.011 11:42:54 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:25.011 11:42:54 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:25.011 11:42:54 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:25.011 11:42:54 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:25.011 11:42:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:25.011 11:42:54 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:25.011 11:42:54 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:25.011 11:42:54 -- nvmf/common.sh@104 -- # continue 2 00:19:25.011 11:42:54 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:25.011 11:42:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:25.011 11:42:54 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:25.011 11:42:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:25.011 11:42:54 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:25.011 11:42:54 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:25.011 11:42:54 -- nvmf/common.sh@104 -- # continue 2 00:19:25.011 11:42:54 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:25.011 11:42:54 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:25.011 11:42:54 -- nvmf/common.sh@111 -- # 
interface=mlx_0_0 00:19:25.011 11:42:54 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:25.011 11:42:54 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:25.011 11:42:54 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:25.011 11:42:54 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:25.011 11:42:54 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:25.011 11:42:54 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:25.270 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:25.270 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:25.270 altname enp217s0f0np0 00:19:25.270 altname ens818f0np0 00:19:25.270 inet 192.168.100.8/24 scope global mlx_0_0 00:19:25.270 valid_lft forever preferred_lft forever 00:19:25.270 11:42:54 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:25.270 11:42:54 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:25.270 11:42:54 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:25.270 11:42:54 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:25.270 11:42:54 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:25.270 11:42:54 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:25.270 11:42:54 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:25.270 11:42:54 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:25.270 11:42:54 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:25.270 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:25.270 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:25.270 altname enp217s0f1np1 00:19:25.270 altname ens818f1np1 00:19:25.270 inet 192.168.100.9/24 scope global mlx_0_1 00:19:25.270 valid_lft forever preferred_lft forever 00:19:25.270 11:42:54 -- nvmf/common.sh@410 -- # return 0 00:19:25.270 11:42:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:25.270 11:42:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:25.270 11:42:54 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:25.270 11:42:54 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:25.270 11:42:54 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:25.270 11:42:54 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:25.270 11:42:54 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:25.270 11:42:54 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:25.270 11:42:54 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:25.270 11:42:54 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:25.270 11:42:54 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:25.270 11:42:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:25.270 11:42:54 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:25.270 11:42:54 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:25.270 11:42:54 -- nvmf/common.sh@104 -- # continue 2 00:19:25.270 11:42:54 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:25.270 11:42:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:25.270 11:42:54 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:25.270 11:42:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:25.270 11:42:54 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:25.270 11:42:54 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:25.270 11:42:54 -- nvmf/common.sh@104 -- # continue 2 00:19:25.270 11:42:54 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:25.270 11:42:54 -- 
nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:25.270 11:42:54 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:25.270 11:42:54 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:25.270 11:42:54 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:25.270 11:42:54 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:25.270 11:42:54 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:25.270 11:42:54 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:25.270 11:42:54 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:25.270 11:42:54 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:25.270 11:42:54 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:25.270 11:42:54 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:25.270 11:42:54 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:25.270 192.168.100.9' 00:19:25.270 11:42:54 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:25.270 192.168.100.9' 00:19:25.270 11:42:54 -- nvmf/common.sh@445 -- # head -n 1 00:19:25.270 11:42:54 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:25.270 11:42:54 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:25.270 192.168.100.9' 00:19:25.270 11:42:54 -- nvmf/common.sh@446 -- # tail -n +2 00:19:25.270 11:42:54 -- nvmf/common.sh@446 -- # head -n 1 00:19:25.270 11:42:54 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:25.270 11:42:54 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:25.270 11:42:54 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:25.270 11:42:54 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:25.270 11:42:54 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:25.270 11:42:54 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:25.270 11:42:54 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:19:25.270 11:42:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:25.270 11:42:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:25.270 11:42:54 -- common/autotest_common.sh@10 -- # set +x 00:19:25.270 11:42:54 -- nvmf/common.sh@469 -- # nvmfpid=2377457 00:19:25.270 11:42:54 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:25.270 11:42:54 -- nvmf/common.sh@470 -- # waitforlisten 2377457 00:19:25.270 11:42:54 -- common/autotest_common.sh@819 -- # '[' -z 2377457 ']' 00:19:25.271 11:42:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.271 11:42:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:25.271 11:42:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.271 11:42:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:25.271 11:42:54 -- common/autotest_common.sh@10 -- # set +x 00:19:25.271 [2024-07-21 11:42:54.610476] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
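The address discovery just traced reduces to scraping the first IPv4 address off each RDMA netdev and splitting the resulting list into first and second target IPs. The two helpers, as they appear in the trace:

```bash
# get_ip_address as traced above: first IPv4 address on the interface,
# with the /prefix stripped off.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

# First/second target selection, mirroring the head/tail pipeline above.
RDMA_IP_LIST="$(get_ip_address mlx_0_0)
$(get_ip_address mlx_0_1)"
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
```

which yields 192.168.100.8 and 192.168.100.9 on this rig.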
00:19:25.271 [2024-07-21 11:42:54.610527] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:25.271 EAL: No free 2048 kB hugepages reported on node 1 00:19:25.529 [2024-07-21 11:42:54.695865] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.529 [2024-07-21 11:42:54.731692] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:25.529 [2024-07-21 11:42:54.731804] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:25.529 [2024-07-21 11:42:54.731814] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:25.529 [2024-07-21 11:42:54.731823] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:25.529 [2024-07-21 11:42:54.731843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:26.095 11:42:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:26.095 11:42:55 -- common/autotest_common.sh@852 -- # return 0 00:19:26.095 11:42:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:26.095 11:42:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:26.095 11:42:55 -- common/autotest_common.sh@10 -- # set +x 00:19:26.095 11:42:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:26.095 11:42:55 -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:19:26.095 11:42:55 -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:19:26.095 Unsupported transport: rdma 00:19:26.095 11:42:55 -- target/zcopy.sh@17 -- # exit 0 00:19:26.095 11:42:55 -- target/zcopy.sh@1 -- # process_shm --id 0 00:19:26.095 11:42:55 -- common/autotest_common.sh@796 -- # type=--id 00:19:26.095 11:42:55 -- common/autotest_common.sh@797 -- # id=0 00:19:26.095 11:42:55 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:19:26.095 11:42:55 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:26.095 11:42:55 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:19:26.095 11:42:55 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:19:26.095 11:42:55 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:19:26.095 11:42:55 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:26.095 nvmf_trace.0 00:19:26.095 11:42:55 -- common/autotest_common.sh@811 -- # return 0 00:19:26.095 11:42:55 -- target/zcopy.sh@1 -- # nvmftestfini 00:19:26.095 11:42:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:26.095 11:42:55 -- nvmf/common.sh@116 -- # sync 00:19:26.095 11:42:55 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:26.095 11:42:55 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:26.095 11:42:55 -- nvmf/common.sh@119 -- # set +e 00:19:26.095 11:42:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:26.095 11:42:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:26.095 rmmod nvme_rdma 00:19:26.095 rmmod nvme_fabrics 00:19:26.354 11:42:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:26.354 11:42:55 -- nvmf/common.sh@123 -- # set -e 00:19:26.354 11:42:55 -- nvmf/common.sh@124 -- # return 0 00:19:26.354 11:42:55 -- nvmf/common.sh@477 -- # '[' -n 2377457 ']' 
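Although zcopy bails out early on RDMA, its EXIT trap still salvages the target's tracepoint buffer. The process_shm steps traced above amount to locating the /dev/shm file for shm ID 0 and archiving it into the job's output directory (the destination path is the one visible in the trace):

```bash
# Sketch of process_shm --id 0 as traced above: archive nvmf_trace.0 so it
# can be inspected offline with spdk_trace.
id=0
output_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
shm_files=$(find /dev/shm -name "*.$id" -printf '%f\n')
for n in $shm_files; do
    tar -C /dev/shm/ -cvzf "$output_dir/${n}_shm.tar.gz" "$n"
done
```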
00:19:26.354 11:42:55 -- nvmf/common.sh@478 -- # killprocess 2377457 00:19:26.354 11:42:55 -- common/autotest_common.sh@926 -- # '[' -z 2377457 ']' 00:19:26.354 11:42:55 -- common/autotest_common.sh@930 -- # kill -0 2377457 00:19:26.354 11:42:55 -- common/autotest_common.sh@931 -- # uname 00:19:26.354 11:42:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:26.354 11:42:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2377457 00:19:26.354 11:42:55 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:26.354 11:42:55 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:26.354 11:42:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2377457' 00:19:26.354 killing process with pid 2377457 00:19:26.354 11:42:55 -- common/autotest_common.sh@945 -- # kill 2377457 00:19:26.354 11:42:55 -- common/autotest_common.sh@950 -- # wait 2377457 00:19:26.354 11:42:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:26.354 11:42:55 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:26.354 00:19:26.354 real 0m9.618s 00:19:26.354 user 0m3.672s 00:19:26.354 sys 0m6.693s 00:19:26.354 11:42:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:26.354 11:42:55 -- common/autotest_common.sh@10 -- # set +x 00:19:26.354 ************************************ 00:19:26.354 END TEST nvmf_zcopy 00:19:26.354 ************************************ 00:19:26.614 11:42:55 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:19:26.614 11:42:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:26.614 11:42:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:26.614 11:42:55 -- common/autotest_common.sh@10 -- # set +x 00:19:26.614 ************************************ 00:19:26.614 START TEST nvmf_nmic 00:19:26.614 ************************************ 00:19:26.614 11:42:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:19:26.614 * Looking for test storage... 
00:19:26.614 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:26.614 11:42:55 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:26.614 11:42:55 -- nvmf/common.sh@7 -- # uname -s 00:19:26.614 11:42:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:26.614 11:42:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:26.614 11:42:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:26.614 11:42:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:26.614 11:42:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:26.614 11:42:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:26.614 11:42:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:26.614 11:42:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:26.614 11:42:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:26.615 11:42:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:26.615 11:42:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:26.615 11:42:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:26.615 11:42:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:26.615 11:42:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:26.615 11:42:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:26.615 11:42:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:26.615 11:42:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:26.615 11:42:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:26.615 11:42:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:26.615 11:42:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.615 11:42:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.615 11:42:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.615 11:42:55 -- paths/export.sh@5 -- # export PATH 00:19:26.615 11:42:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.615 11:42:55 -- nvmf/common.sh@46 -- # : 0 00:19:26.615 11:42:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:26.615 11:42:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:26.615 11:42:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:26.615 11:42:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:26.615 11:42:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:26.615 11:42:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:26.615 11:42:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:26.615 11:42:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:26.615 11:42:55 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:26.615 11:42:55 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:26.615 11:42:55 -- target/nmic.sh@14 -- # nvmftestinit 00:19:26.615 11:42:55 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:26.615 11:42:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:26.615 11:42:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:26.615 11:42:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:26.615 11:42:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:26.615 11:42:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.615 11:42:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:26.615 11:42:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:26.615 11:42:55 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:26.615 11:42:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:26.615 11:42:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:26.615 11:42:55 -- common/autotest_common.sh@10 -- # set +x 00:19:34.726 11:43:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:34.726 11:43:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:34.726 11:43:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:34.726 11:43:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:34.726 11:43:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:34.726 11:43:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:34.726 11:43:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:34.726 11:43:04 -- nvmf/common.sh@294 -- # net_devs=() 00:19:34.726 11:43:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:34.726 11:43:04 -- nvmf/common.sh@295 -- # 
e810=() 00:19:34.726 11:43:04 -- nvmf/common.sh@295 -- # local -ga e810 00:19:34.726 11:43:04 -- nvmf/common.sh@296 -- # x722=() 00:19:34.726 11:43:04 -- nvmf/common.sh@296 -- # local -ga x722 00:19:34.726 11:43:04 -- nvmf/common.sh@297 -- # mlx=() 00:19:34.726 11:43:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:34.726 11:43:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:34.726 11:43:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:34.726 11:43:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:34.726 11:43:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:34.726 11:43:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:34.726 11:43:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:34.726 11:43:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:34.726 11:43:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:34.726 11:43:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:34.726 11:43:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:34.726 11:43:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:34.726 11:43:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:34.726 11:43:04 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:34.726 11:43:04 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:34.726 11:43:04 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:34.726 11:43:04 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:34.726 11:43:04 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:34.726 11:43:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:34.726 11:43:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:34.726 11:43:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:34.726 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:34.726 11:43:04 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:34.726 11:43:04 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:34.726 11:43:04 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:34.726 11:43:04 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:34.726 11:43:04 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:34.726 11:43:04 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:34.726 11:43:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:34.726 11:43:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:34.726 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:34.726 11:43:04 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:34.726 11:43:04 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:34.726 11:43:04 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:34.726 11:43:04 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:34.726 11:43:04 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:34.726 11:43:04 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:34.726 11:43:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:34.726 11:43:04 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:34.726 11:43:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:34.726 11:43:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:34.726 11:43:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
00:19:34.726 11:43:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:34.726 11:43:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:34.726 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:34.726 11:43:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:34.726 11:43:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:34.727 11:43:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:34.727 11:43:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:34.727 11:43:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:34.727 11:43:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:34.727 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:34.727 11:43:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:34.727 11:43:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:34.727 11:43:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:34.727 11:43:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:34.727 11:43:04 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:34.727 11:43:04 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:34.727 11:43:04 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:34.727 11:43:04 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:34.727 11:43:04 -- nvmf/common.sh@57 -- # uname 00:19:34.727 11:43:04 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:34.727 11:43:04 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:34.727 11:43:04 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:34.727 11:43:04 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:34.727 11:43:04 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:34.727 11:43:04 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:34.727 11:43:04 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:34.727 11:43:04 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:34.727 11:43:04 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:34.727 11:43:04 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:34.727 11:43:04 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:34.727 11:43:04 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:34.986 11:43:04 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:34.986 11:43:04 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:34.986 11:43:04 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:34.986 11:43:04 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:34.986 11:43:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:34.986 11:43:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:34.986 11:43:04 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:34.986 11:43:04 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:34.986 11:43:04 -- nvmf/common.sh@104 -- # continue 2 00:19:34.986 11:43:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:34.986 11:43:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:34.986 11:43:04 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:34.986 11:43:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:34.986 11:43:04 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:34.986 11:43:04 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:34.986 11:43:04 -- nvmf/common.sh@104 -- # continue 2 00:19:34.986 11:43:04 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:19:34.986 11:43:04 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:34.986 11:43:04 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:34.986 11:43:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:34.986 11:43:04 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:34.986 11:43:04 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:34.986 11:43:04 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:34.986 11:43:04 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:34.986 11:43:04 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:34.986 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:34.986 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:34.986 altname enp217s0f0np0 00:19:34.986 altname ens818f0np0 00:19:34.986 inet 192.168.100.8/24 scope global mlx_0_0 00:19:34.986 valid_lft forever preferred_lft forever 00:19:34.986 11:43:04 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:34.986 11:43:04 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:34.986 11:43:04 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:34.986 11:43:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:34.986 11:43:04 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:34.986 11:43:04 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:34.986 11:43:04 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:34.986 11:43:04 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:34.986 11:43:04 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:34.986 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:34.986 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:34.986 altname enp217s0f1np1 00:19:34.986 altname ens818f1np1 00:19:34.986 inet 192.168.100.9/24 scope global mlx_0_1 00:19:34.986 valid_lft forever preferred_lft forever 00:19:34.986 11:43:04 -- nvmf/common.sh@410 -- # return 0 00:19:34.986 11:43:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:34.986 11:43:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:34.986 11:43:04 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:34.986 11:43:04 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:34.986 11:43:04 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:34.986 11:43:04 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:34.986 11:43:04 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:34.986 11:43:04 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:34.986 11:43:04 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:34.986 11:43:04 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:34.986 11:43:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:34.986 11:43:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:34.986 11:43:04 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:34.986 11:43:04 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:34.986 11:43:04 -- nvmf/common.sh@104 -- # continue 2 00:19:34.986 11:43:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:34.986 11:43:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:34.986 11:43:04 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:34.986 11:43:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:34.986 11:43:04 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:34.986 11:43:04 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:34.986 11:43:04 -- 
nvmf/common.sh@104 -- # continue 2 00:19:34.986 11:43:04 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:34.986 11:43:04 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:34.986 11:43:04 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:34.986 11:43:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:34.986 11:43:04 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:34.986 11:43:04 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:34.986 11:43:04 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:34.986 11:43:04 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:34.986 11:43:04 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:34.986 11:43:04 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:34.986 11:43:04 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:34.986 11:43:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:34.986 11:43:04 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:34.986 192.168.100.9' 00:19:34.986 11:43:04 -- nvmf/common.sh@445 -- # head -n 1 00:19:34.986 11:43:04 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:34.986 192.168.100.9' 00:19:34.986 11:43:04 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:34.986 11:43:04 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:34.986 192.168.100.9' 00:19:34.986 11:43:04 -- nvmf/common.sh@446 -- # tail -n +2 00:19:34.986 11:43:04 -- nvmf/common.sh@446 -- # head -n 1 00:19:34.986 11:43:04 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:34.986 11:43:04 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:34.986 11:43:04 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:34.986 11:43:04 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:34.986 11:43:04 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:34.986 11:43:04 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:34.986 11:43:04 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:19:34.986 11:43:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:34.986 11:43:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:34.986 11:43:04 -- common/autotest_common.sh@10 -- # set +x 00:19:34.986 11:43:04 -- nvmf/common.sh@469 -- # nvmfpid=2381666 00:19:34.986 11:43:04 -- nvmf/common.sh@470 -- # waitforlisten 2381666 00:19:34.986 11:43:04 -- common/autotest_common.sh@819 -- # '[' -z 2381666 ']' 00:19:34.986 11:43:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.986 11:43:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:34.986 11:43:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.986 11:43:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:34.986 11:43:04 -- common/autotest_common.sh@10 -- # set +x 00:19:34.986 11:43:04 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:34.986 [2024-07-21 11:43:04.383949] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:34.986 [2024-07-21 11:43:04.384001] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:35.244 EAL: No free 2048 kB hugepages reported on node 1 00:19:35.244 [2024-07-21 11:43:04.470991] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:35.244 [2024-07-21 11:43:04.509899] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:35.244 [2024-07-21 11:43:04.510020] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:35.244 [2024-07-21 11:43:04.510031] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:35.244 [2024-07-21 11:43:04.510040] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:35.244 [2024-07-21 11:43:04.513643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:35.244 [2024-07-21 11:43:04.513680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:35.244 [2024-07-21 11:43:04.513682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.244 [2024-07-21 11:43:04.513660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:35.808 11:43:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:35.808 11:43:05 -- common/autotest_common.sh@852 -- # return 0 00:19:35.808 11:43:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:35.808 11:43:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:35.808 11:43:05 -- common/autotest_common.sh@10 -- # set +x 00:19:35.808 11:43:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.067 11:43:05 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:36.067 11:43:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:36.067 11:43:05 -- common/autotest_common.sh@10 -- # set +x 00:19:36.067 [2024-07-21 11:43:05.259636] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x152e4b0/0x15329a0) succeed. 00:19:36.067 [2024-07-21 11:43:05.270611] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x152faa0/0x1574030) succeed. 
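The subsystem bring-up that follows is driven through the script's rpc_cmd wrapper; written out as plain rpc.py calls against the running nvmf_tgt it amounts to the minimal sketch below (flags, NQN, serial, and the 192.168.100.8/4420 listener are taken verbatim from the trace; rpc.py talking to the default /var/tmp/spdk.sock socket is an assumption):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # RDMA transport with 1024 shared buffers and an 8 KiB IO unit size
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  # 64 MiB RAM-backed bdev with 512-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE above)
  $rpc bdev_malloc_create 64 512 -b Malloc0
  # Subsystem allowing any host (-a), with the serial the host-side waitforserial greps for
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # First RDMA listener; the test later adds a second one on port 4421 for the multipath case
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420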
00:19:36.067 11:43:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:36.067 11:43:05 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:36.067 11:43:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:36.067 11:43:05 -- common/autotest_common.sh@10 -- # set +x 00:19:36.067 Malloc0 00:19:36.067 11:43:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:36.067 11:43:05 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:36.067 11:43:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:36.067 11:43:05 -- common/autotest_common.sh@10 -- # set +x 00:19:36.067 11:43:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:36.067 11:43:05 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:36.067 11:43:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:36.067 11:43:05 -- common/autotest_common.sh@10 -- # set +x 00:19:36.067 11:43:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:36.067 11:43:05 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:36.067 11:43:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:36.067 11:43:05 -- common/autotest_common.sh@10 -- # set +x 00:19:36.067 [2024-07-21 11:43:05.438502] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:36.067 11:43:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:36.067 11:43:05 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:19:36.067 test case1: single bdev can't be used in multiple subsystems 00:19:36.067 11:43:05 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:36.067 11:43:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:36.067 11:43:05 -- common/autotest_common.sh@10 -- # set +x 00:19:36.067 11:43:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:36.067 11:43:05 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:19:36.067 11:43:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:36.067 11:43:05 -- common/autotest_common.sh@10 -- # set +x 00:19:36.067 11:43:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:36.067 11:43:05 -- target/nmic.sh@28 -- # nmic_status=0 00:19:36.067 11:43:05 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:19:36.067 11:43:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:36.067 11:43:05 -- common/autotest_common.sh@10 -- # set +x 00:19:36.067 [2024-07-21 11:43:05.462258] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:19:36.067 [2024-07-21 11:43:05.462277] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:19:36.067 [2024-07-21 11:43:05.462287] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:36.067 request: 00:19:36.067 { 00:19:36.067 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:19:36.067 "namespace": { 00:19:36.067 "bdev_name": "Malloc0" 00:19:36.067 }, 00:19:36.067 "method": "nvmf_subsystem_add_ns", 00:19:36.067 "req_id": 1 00:19:36.067 } 00:19:36.067 Got JSON-RPC error response 00:19:36.067 response: 00:19:36.067 { 
00:19:36.067 "code": -32602, 00:19:36.067 "message": "Invalid parameters" 00:19:36.067 } 00:19:36.067 11:43:05 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:19:36.067 11:43:05 -- target/nmic.sh@29 -- # nmic_status=1 00:19:36.067 11:43:05 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:19:36.067 11:43:05 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:19:36.067 Adding namespace failed - expected result. 00:19:36.067 11:43:05 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:19:36.067 test case2: host connect to nvmf target in multiple paths 00:19:36.067 11:43:05 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:19:36.067 11:43:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:36.067 11:43:05 -- common/autotest_common.sh@10 -- # set +x 00:19:36.067 [2024-07-21 11:43:05.474345] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:19:36.067 11:43:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:36.067 11:43:05 -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:37.461 11:43:06 -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:19:38.028 11:43:07 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:19:38.028 11:43:07 -- common/autotest_common.sh@1177 -- # local i=0 00:19:38.028 11:43:07 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:38.028 11:43:07 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:38.028 11:43:07 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:40.562 11:43:09 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:40.562 11:43:09 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:40.562 11:43:09 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:40.562 11:43:09 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:40.562 11:43:09 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:40.562 11:43:09 -- common/autotest_common.sh@1187 -- # return 0 00:19:40.562 11:43:09 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:40.562 [global] 00:19:40.562 thread=1 00:19:40.562 invalidate=1 00:19:40.562 rw=write 00:19:40.562 time_based=1 00:19:40.562 runtime=1 00:19:40.562 ioengine=libaio 00:19:40.562 direct=1 00:19:40.562 bs=4096 00:19:40.562 iodepth=1 00:19:40.562 norandommap=0 00:19:40.562 numjobs=1 00:19:40.562 00:19:40.562 verify_dump=1 00:19:40.562 verify_backlog=512 00:19:40.562 verify_state_save=0 00:19:40.562 do_verify=1 00:19:40.562 verify=crc32c-intel 00:19:40.562 [job0] 00:19:40.562 filename=/dev/nvme0n1 00:19:40.562 Could not set queue depth (nvme0n1) 00:19:40.562 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:40.562 fio-3.35 00:19:40.562 Starting 1 thread 00:19:41.936 00:19:41.936 job0: (groupid=0, jobs=1): err= 0: pid=2382799: Sun Jul 21 11:43:10 2024 00:19:41.936 read: IOPS=7160, 
BW=28.0MiB/s (29.3MB/s)(28.0MiB/1001msec) 00:19:41.936 slat (nsec): min=8300, max=34236, avg=8966.47, stdev=975.83 00:19:41.936 clat (usec): min=41, max=101, avg=58.97, stdev= 3.63 00:19:41.936 lat (usec): min=58, max=135, avg=67.94, stdev= 3.76 00:19:41.936 clat percentiles (usec): 00:19:41.936 | 1.00th=[ 52], 5.00th=[ 54], 10.00th=[ 55], 20.00th=[ 56], 00:19:41.936 | 30.00th=[ 58], 40.00th=[ 58], 50.00th=[ 59], 60.00th=[ 60], 00:19:41.936 | 70.00th=[ 61], 80.00th=[ 63], 90.00th=[ 64], 95.00th=[ 66], 00:19:41.936 | 99.00th=[ 69], 99.50th=[ 70], 99.90th=[ 74], 99.95th=[ 77], 00:19:41.936 | 99.99th=[ 102] 00:19:41.936 write: IOPS=7177, BW=28.0MiB/s (29.4MB/s)(28.1MiB/1001msec); 0 zone resets 00:19:41.936 slat (nsec): min=10019, max=47821, avg=10688.63, stdev=1203.87 00:19:41.936 clat (usec): min=33, max=206, avg=56.82, stdev= 5.25 00:19:41.936 lat (usec): min=57, max=216, avg=67.51, stdev= 5.36 00:19:41.936 clat percentiles (usec): 00:19:41.936 | 1.00th=[ 50], 5.00th=[ 51], 10.00th=[ 52], 20.00th=[ 54], 00:19:41.936 | 30.00th=[ 55], 40.00th=[ 56], 50.00th=[ 57], 60.00th=[ 58], 00:19:41.936 | 70.00th=[ 59], 80.00th=[ 60], 90.00th=[ 62], 95.00th=[ 64], 00:19:41.936 | 99.00th=[ 68], 99.50th=[ 72], 99.90th=[ 115], 99.95th=[ 147], 00:19:41.936 | 99.99th=[ 206] 00:19:41.936 bw ( KiB/s): min=28864, max=28864, per=100.00%, avg=28864.00, stdev= 0.00, samples=1 00:19:41.936 iops : min= 7216, max= 7216, avg=7216.00, stdev= 0.00, samples=1 00:19:41.936 lat (usec) : 50=1.25%, 100=98.66%, 250=0.09% 00:19:41.936 cpu : usr=10.20%, sys=18.60%, ctx=14353, majf=0, minf=2 00:19:41.936 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:41.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.936 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.936 issued rwts: total=7168,7185,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.936 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:41.936 00:19:41.936 Run status group 0 (all jobs): 00:19:41.936 READ: bw=28.0MiB/s (29.3MB/s), 28.0MiB/s-28.0MiB/s (29.3MB/s-29.3MB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:19:41.936 WRITE: bw=28.0MiB/s (29.4MB/s), 28.0MiB/s-28.0MiB/s (29.4MB/s-29.4MB/s), io=28.1MiB (29.4MB), run=1001-1001msec 00:19:41.936 00:19:41.936 Disk stats (read/write): 00:19:41.936 nvme0n1: ios=6290/6656, merge=0/0, ticks=320/326, in_queue=646, util=90.58% 00:19:41.936 11:43:10 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:43.832 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:43.832 11:43:12 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:43.832 11:43:12 -- common/autotest_common.sh@1198 -- # local i=0 00:19:43.832 11:43:12 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:43.832 11:43:12 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:43.832 11:43:12 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:43.832 11:43:12 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:43.832 11:43:12 -- common/autotest_common.sh@1210 -- # return 0 00:19:43.832 11:43:12 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:43.832 11:43:12 -- target/nmic.sh@53 -- # nvmftestfini 00:19:43.832 11:43:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:43.832 11:43:12 -- nvmf/common.sh@116 -- # sync 00:19:43.832 11:43:12 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:43.832 11:43:12 -- nvmf/common.sh@118 -- # 
'[' rdma == rdma ']' 00:19:43.832 11:43:12 -- nvmf/common.sh@119 -- # set +e 00:19:43.832 11:43:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:43.832 11:43:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:43.832 rmmod nvme_rdma 00:19:43.832 rmmod nvme_fabrics 00:19:43.832 11:43:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:43.832 11:43:12 -- nvmf/common.sh@123 -- # set -e 00:19:43.832 11:43:12 -- nvmf/common.sh@124 -- # return 0 00:19:43.832 11:43:12 -- nvmf/common.sh@477 -- # '[' -n 2381666 ']' 00:19:43.832 11:43:12 -- nvmf/common.sh@478 -- # killprocess 2381666 00:19:43.832 11:43:12 -- common/autotest_common.sh@926 -- # '[' -z 2381666 ']' 00:19:43.833 11:43:12 -- common/autotest_common.sh@930 -- # kill -0 2381666 00:19:43.833 11:43:12 -- common/autotest_common.sh@931 -- # uname 00:19:43.833 11:43:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:43.833 11:43:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2381666 00:19:43.833 11:43:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:43.833 11:43:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:43.833 11:43:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2381666' 00:19:43.833 killing process with pid 2381666 00:19:43.833 11:43:12 -- common/autotest_common.sh@945 -- # kill 2381666 00:19:43.833 11:43:12 -- common/autotest_common.sh@950 -- # wait 2381666 00:19:44.091 11:43:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:44.091 11:43:13 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:44.091 00:19:44.091 real 0m17.456s 00:19:44.091 user 0m45.364s 00:19:44.091 sys 0m7.424s 00:19:44.091 11:43:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:44.091 11:43:13 -- common/autotest_common.sh@10 -- # set +x 00:19:44.091 ************************************ 00:19:44.091 END TEST nvmf_nmic 00:19:44.091 ************************************ 00:19:44.091 11:43:13 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:19:44.091 11:43:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:44.091 11:43:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:44.091 11:43:13 -- common/autotest_common.sh@10 -- # set +x 00:19:44.091 ************************************ 00:19:44.091 START TEST nvmf_fio_target 00:19:44.091 ************************************ 00:19:44.091 11:43:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:19:44.091 * Looking for test storage... 
00:19:44.091 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:44.091 11:43:13 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:44.091 11:43:13 -- nvmf/common.sh@7 -- # uname -s 00:19:44.091 11:43:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:44.091 11:43:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:44.091 11:43:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:44.091 11:43:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:44.091 11:43:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:44.091 11:43:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:44.091 11:43:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:44.091 11:43:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:44.091 11:43:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:44.091 11:43:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:44.091 11:43:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:44.091 11:43:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:44.091 11:43:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:44.091 11:43:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:44.091 11:43:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:44.091 11:43:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:44.091 11:43:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:44.091 11:43:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:44.091 11:43:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:44.091 11:43:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.091 11:43:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.091 11:43:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.091 11:43:13 -- paths/export.sh@5 -- # export PATH 00:19:44.091 11:43:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.091 11:43:13 -- nvmf/common.sh@46 -- # : 0 00:19:44.091 11:43:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:44.091 11:43:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:44.091 11:43:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:44.091 11:43:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:44.091 11:43:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:44.091 11:43:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:44.091 11:43:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:44.091 11:43:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:44.091 11:43:13 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:44.091 11:43:13 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:44.091 11:43:13 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:44.091 11:43:13 -- target/fio.sh@16 -- # nvmftestinit 00:19:44.091 11:43:13 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:44.091 11:43:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:44.091 11:43:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:44.091 11:43:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:44.091 11:43:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:44.091 11:43:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.091 11:43:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:44.091 11:43:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.091 11:43:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:44.091 11:43:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:44.091 11:43:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:44.091 11:43:13 -- common/autotest_common.sh@10 -- # set +x 00:19:52.198 11:43:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:52.198 11:43:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:52.198 11:43:21 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:52.198 11:43:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:52.198 11:43:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:52.198 11:43:21 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:52.198 11:43:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:52.198 11:43:21 -- nvmf/common.sh@294 -- # net_devs=() 
00:19:52.198 11:43:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:52.198 11:43:21 -- nvmf/common.sh@295 -- # e810=() 00:19:52.198 11:43:21 -- nvmf/common.sh@295 -- # local -ga e810 00:19:52.198 11:43:21 -- nvmf/common.sh@296 -- # x722=() 00:19:52.198 11:43:21 -- nvmf/common.sh@296 -- # local -ga x722 00:19:52.198 11:43:21 -- nvmf/common.sh@297 -- # mlx=() 00:19:52.198 11:43:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:52.198 11:43:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:52.198 11:43:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:52.198 11:43:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:52.198 11:43:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:52.198 11:43:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:52.198 11:43:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:52.198 11:43:21 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:52.198 11:43:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:52.198 11:43:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:52.198 11:43:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:52.198 11:43:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:52.198 11:43:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:52.198 11:43:21 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:52.198 11:43:21 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:52.198 11:43:21 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:52.198 11:43:21 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:52.198 11:43:21 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:52.198 11:43:21 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:52.198 11:43:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:52.198 11:43:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:52.198 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:52.198 11:43:21 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:52.198 11:43:21 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:52.198 11:43:21 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:52.198 11:43:21 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:52.198 11:43:21 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:52.198 11:43:21 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:52.198 11:43:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:52.198 11:43:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:52.198 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:52.198 11:43:21 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:52.198 11:43:21 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:52.198 11:43:21 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:52.198 11:43:21 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:52.198 11:43:21 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:52.198 11:43:21 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:52.198 11:43:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:52.198 11:43:21 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:52.198 11:43:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:52.198 11:43:21 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.198 11:43:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:52.198 11:43:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.198 11:43:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:52.198 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:52.198 11:43:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.198 11:43:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:52.198 11:43:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.198 11:43:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:52.198 11:43:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.198 11:43:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:52.198 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:52.198 11:43:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.198 11:43:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:52.198 11:43:21 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:52.198 11:43:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:52.198 11:43:21 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:52.198 11:43:21 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:52.198 11:43:21 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:52.198 11:43:21 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:52.198 11:43:21 -- nvmf/common.sh@57 -- # uname 00:19:52.198 11:43:21 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:52.198 11:43:21 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:52.198 11:43:21 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:52.198 11:43:21 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:52.198 11:43:21 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:52.198 11:43:21 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:52.198 11:43:21 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:52.198 11:43:21 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:52.198 11:43:21 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:52.198 11:43:21 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:52.198 11:43:21 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:52.198 11:43:21 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:52.198 11:43:21 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:52.198 11:43:21 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:52.198 11:43:21 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:52.198 11:43:21 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:52.198 11:43:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:52.198 11:43:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:52.198 11:43:21 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:52.198 11:43:21 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:52.198 11:43:21 -- nvmf/common.sh@104 -- # continue 2 00:19:52.198 11:43:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:52.198 11:43:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:52.198 11:43:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:52.198 11:43:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:52.198 11:43:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:52.198 11:43:21 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:52.198 11:43:21 -- 
nvmf/common.sh@104 -- # continue 2 00:19:52.198 11:43:21 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:52.198 11:43:21 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:52.198 11:43:21 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:52.198 11:43:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:52.198 11:43:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:52.198 11:43:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:52.198 11:43:21 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:52.198 11:43:21 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:52.198 11:43:21 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:52.198 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:52.198 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:52.198 altname enp217s0f0np0 00:19:52.198 altname ens818f0np0 00:19:52.198 inet 192.168.100.8/24 scope global mlx_0_0 00:19:52.198 valid_lft forever preferred_lft forever 00:19:52.198 11:43:21 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:52.198 11:43:21 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:52.198 11:43:21 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:52.198 11:43:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:52.198 11:43:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:52.198 11:43:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:52.198 11:43:21 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:52.198 11:43:21 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:52.198 11:43:21 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:52.198 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:52.198 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:52.198 altname enp217s0f1np1 00:19:52.198 altname ens818f1np1 00:19:52.198 inet 192.168.100.9/24 scope global mlx_0_1 00:19:52.198 valid_lft forever preferred_lft forever 00:19:52.198 11:43:21 -- nvmf/common.sh@410 -- # return 0 00:19:52.198 11:43:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:52.198 11:43:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:52.198 11:43:21 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:52.198 11:43:21 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:52.198 11:43:21 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:52.198 11:43:21 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:52.198 11:43:21 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:52.198 11:43:21 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:52.198 11:43:21 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:52.198 11:43:21 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:52.198 11:43:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:52.198 11:43:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:52.198 11:43:21 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:52.198 11:43:21 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:52.198 11:43:21 -- nvmf/common.sh@104 -- # continue 2 00:19:52.198 11:43:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:52.198 11:43:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:52.198 11:43:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:52.198 11:43:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:52.198 11:43:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
00:19:52.198 11:43:21 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:52.198 11:43:21 -- nvmf/common.sh@104 -- # continue 2 00:19:52.198 11:43:21 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:52.198 11:43:21 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:52.198 11:43:21 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:52.198 11:43:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:52.198 11:43:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:52.198 11:43:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:52.198 11:43:21 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:52.198 11:43:21 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:52.198 11:43:21 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:52.198 11:43:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:52.198 11:43:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:52.198 11:43:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:52.198 11:43:21 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:52.198 192.168.100.9' 00:19:52.198 11:43:21 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:52.198 192.168.100.9' 00:19:52.198 11:43:21 -- nvmf/common.sh@445 -- # head -n 1 00:19:52.198 11:43:21 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:52.198 11:43:21 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:52.198 192.168.100.9' 00:19:52.198 11:43:21 -- nvmf/common.sh@446 -- # tail -n +2 00:19:52.198 11:43:21 -- nvmf/common.sh@446 -- # head -n 1 00:19:52.198 11:43:21 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:52.198 11:43:21 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:52.198 11:43:21 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:52.198 11:43:21 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:52.198 11:43:21 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:52.198 11:43:21 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:52.198 11:43:21 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:19:52.198 11:43:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:52.198 11:43:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:52.198 11:43:21 -- common/autotest_common.sh@10 -- # set +x 00:19:52.199 11:43:21 -- nvmf/common.sh@469 -- # nvmfpid=2387381 00:19:52.199 11:43:21 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:52.199 11:43:21 -- nvmf/common.sh@470 -- # waitforlisten 2387381 00:19:52.199 11:43:21 -- common/autotest_common.sh@819 -- # '[' -z 2387381 ']' 00:19:52.199 11:43:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.199 11:43:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:52.199 11:43:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.199 11:43:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:52.199 11:43:21 -- common/autotest_common.sh@10 -- # set +x 00:19:52.457 [2024-07-21 11:43:21.651614] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:52.457 [2024-07-21 11:43:21.651680] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.457 EAL: No free 2048 kB hugepages reported on node 1 00:19:52.457 [2024-07-21 11:43:21.741165] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:52.457 [2024-07-21 11:43:21.779387] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:52.457 [2024-07-21 11:43:21.779516] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.457 [2024-07-21 11:43:21.779526] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.457 [2024-07-21 11:43:21.779536] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:52.457 [2024-07-21 11:43:21.779585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.457 [2024-07-21 11:43:21.779686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:52.457 [2024-07-21 11:43:21.779774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:52.457 [2024-07-21 11:43:21.779775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.389 11:43:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:53.389 11:43:22 -- common/autotest_common.sh@852 -- # return 0 00:19:53.389 11:43:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:53.389 11:43:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:53.389 11:43:22 -- common/autotest_common.sh@10 -- # set +x 00:19:53.389 11:43:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.389 11:43:22 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:53.390 [2024-07-21 11:43:22.687736] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x143b4b0/0x143f9a0) succeed. 00:19:53.390 [2024-07-21 11:43:22.697990] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x143caa0/0x1481030) succeed. 
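Before the fio run, fio.sh layers raid bdevs on top of the malloc bdevs so the four fio jobs each exercise a differently-backed namespace; a sketch of that layout, reconstructed from the rpc.py calls traced below (rpc.py on the default /var/tmp/spdk.sock is an assumption; Malloc numbering follows the trace):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # Two plain 64 MiB malloc bdevs, exported directly as namespaces 1 and 2
  $rpc bdev_malloc_create 64 512                                  # prints Malloc0
  $rpc bdev_malloc_create 64 512                                  # prints Malloc1
  # Five more malloc bdevs to back the raid bdevs
  for i in 2 3 4 5 6; do $rpc bdev_malloc_create 64 512; done     # prints Malloc2..Malloc6
  # Two striped into a raid0 bdev (-r 0) with a 64 KiB strip size (-z 64) -> namespace 3
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  # Three concatenated into concat0 -> namespace 4
  $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
  # After nvme connect, the host sees /dev/nvme0n1../dev/nvme0n4, matching job0..job3 below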
00:19:53.649 11:43:22 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:53.649 11:43:23 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:53.649 11:43:23 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:53.906 11:43:23 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:53.906 11:43:23 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:54.164 11:43:23 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:19:54.164 11:43:23 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:54.422 11:43:23 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:54.422 11:43:23 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:54.422 11:43:23 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:54.680 11:43:23 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:54.680 11:43:23 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:54.939 11:43:24 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:54.939 11:43:24 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:54.939 11:43:24 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:54.939 11:43:24 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:55.197 11:43:24 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:55.456 11:43:24 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:55.456 11:43:24 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:55.713 11:43:24 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:55.713 11:43:24 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:55.714 11:43:25 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:55.972 [2024-07-21 11:43:25.209569] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:55.972 11:43:25 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:56.230 11:43:25 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:56.230 11:43:25 -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:57.166 11:43:26 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:57.166 11:43:26 -- common/autotest_common.sh@1177 -- # local 
i=0 00:19:57.166 11:43:26 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:57.166 11:43:26 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:19:57.166 11:43:26 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:19:57.166 11:43:26 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:59.764 11:43:28 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:59.764 11:43:28 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:59.764 11:43:28 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:59.764 11:43:28 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:19:59.764 11:43:28 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:59.764 11:43:28 -- common/autotest_common.sh@1187 -- # return 0 00:19:59.764 11:43:28 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:59.764 [global] 00:19:59.764 thread=1 00:19:59.764 invalidate=1 00:19:59.764 rw=write 00:19:59.764 time_based=1 00:19:59.764 runtime=1 00:19:59.764 ioengine=libaio 00:19:59.764 direct=1 00:19:59.764 bs=4096 00:19:59.764 iodepth=1 00:19:59.764 norandommap=0 00:19:59.764 numjobs=1 00:19:59.764 00:19:59.764 verify_dump=1 00:19:59.764 verify_backlog=512 00:19:59.764 verify_state_save=0 00:19:59.764 do_verify=1 00:19:59.764 verify=crc32c-intel 00:19:59.764 [job0] 00:19:59.764 filename=/dev/nvme0n1 00:19:59.764 [job1] 00:19:59.764 filename=/dev/nvme0n2 00:19:59.764 [job2] 00:19:59.764 filename=/dev/nvme0n3 00:19:59.764 [job3] 00:19:59.764 filename=/dev/nvme0n4 00:19:59.764 Could not set queue depth (nvme0n1) 00:19:59.764 Could not set queue depth (nvme0n2) 00:19:59.764 Could not set queue depth (nvme0n3) 00:19:59.764 Could not set queue depth (nvme0n4) 00:19:59.764 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:59.764 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:59.764 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:59.764 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:59.764 fio-3.35 00:19:59.764 Starting 4 threads 00:20:01.172 00:20:01.172 job0: (groupid=0, jobs=1): err= 0: pid=2388867: Sun Jul 21 11:43:30 2024 00:20:01.172 read: IOPS=4044, BW=15.8MiB/s (16.6MB/s)(15.8MiB/1001msec) 00:20:01.172 slat (nsec): min=8208, max=58875, avg=8904.40, stdev=1268.55 00:20:01.172 clat (usec): min=73, max=779, avg=112.00, stdev=21.10 00:20:01.172 lat (usec): min=81, max=803, avg=120.90, stdev=21.68 00:20:01.172 clat percentiles (usec): 00:20:01.172 | 1.00th=[ 89], 5.00th=[ 96], 10.00th=[ 99], 20.00th=[ 102], 00:20:01.172 | 30.00th=[ 105], 40.00th=[ 108], 50.00th=[ 111], 60.00th=[ 113], 00:20:01.172 | 70.00th=[ 117], 80.00th=[ 121], 90.00th=[ 128], 95.00th=[ 133], 00:20:01.172 | 99.00th=[ 151], 99.50th=[ 167], 99.90th=[ 219], 99.95th=[ 717], 00:20:01.172 | 99.99th=[ 783] 00:20:01.172 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:20:01.172 slat (nsec): min=8609, max=51610, avg=10924.71, stdev=1415.35 00:20:01.172 clat (usec): min=65, max=839, avg=109.62, stdev=25.54 00:20:01.172 lat (usec): min=74, max=890, avg=120.54, stdev=26.12 00:20:01.172 clat percentiles (usec): 00:20:01.172 | 1.00th=[ 78], 5.00th=[ 91], 10.00th=[ 95], 20.00th=[ 99], 00:20:01.172 | 
30.00th=[ 102], 40.00th=[ 105], 50.00th=[ 108], 60.00th=[ 111], 00:20:01.172 | 70.00th=[ 115], 80.00th=[ 119], 90.00th=[ 125], 95.00th=[ 130], 00:20:01.172 | 99.00th=[ 153], 99.50th=[ 167], 99.90th=[ 578], 99.95th=[ 685], 00:20:01.172 | 99.99th=[ 840] 00:20:01.172 bw ( KiB/s): min=16351, max=16351, per=23.92%, avg=16351.00, stdev= 0.00, samples=1 00:20:01.172 iops : min= 4087, max= 4087, avg=4087.00, stdev= 0.00, samples=1 00:20:01.172 lat (usec) : 100=17.88%, 250=82.00%, 500=0.02%, 750=0.06%, 1000=0.04% 00:20:01.172 cpu : usr=6.80%, sys=10.00%, ctx=8145, majf=0, minf=2 00:20:01.172 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:01.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.172 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.172 issued rwts: total=4049,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:01.172 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:01.172 job1: (groupid=0, jobs=1): err= 0: pid=2388879: Sun Jul 21 11:43:30 2024 00:20:01.172 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:20:01.172 slat (nsec): min=4004, max=30161, avg=8870.24, stdev=1296.76 00:20:01.172 clat (usec): min=62, max=648, avg=95.44, stdev=18.15 00:20:01.172 lat (usec): min=69, max=666, avg=104.31, stdev=18.47 00:20:01.172 clat percentiles (usec): 00:20:01.172 | 1.00th=[ 70], 5.00th=[ 74], 10.00th=[ 76], 20.00th=[ 79], 00:20:01.172 | 30.00th=[ 83], 40.00th=[ 89], 50.00th=[ 99], 60.00th=[ 103], 00:20:01.172 | 70.00th=[ 106], 80.00th=[ 110], 90.00th=[ 114], 95.00th=[ 117], 00:20:01.172 | 99.00th=[ 124], 99.50th=[ 130], 99.90th=[ 210], 99.95th=[ 330], 00:20:01.172 | 99.99th=[ 652] 00:20:01.172 write: IOPS=4813, BW=18.8MiB/s (19.7MB/s)(18.8MiB/1001msec); 0 zone resets 00:20:01.172 slat (nsec): min=3582, max=45143, avg=10787.56, stdev=2098.39 00:20:01.172 clat (usec): min=60, max=533, avg=93.12, stdev=21.58 00:20:01.172 lat (usec): min=63, max=571, avg=103.90, stdev=22.40 00:20:01.172 clat percentiles (usec): 00:20:01.172 | 1.00th=[ 67], 5.00th=[ 70], 10.00th=[ 73], 20.00th=[ 77], 00:20:01.172 | 30.00th=[ 80], 40.00th=[ 87], 50.00th=[ 96], 60.00th=[ 100], 00:20:01.172 | 70.00th=[ 104], 80.00th=[ 108], 90.00th=[ 112], 95.00th=[ 115], 00:20:01.172 | 99.00th=[ 124], 99.50th=[ 135], 99.90th=[ 433], 99.95th=[ 465], 00:20:01.172 | 99.99th=[ 537] 00:20:01.172 bw ( KiB/s): min=20480, max=20480, per=29.96%, avg=20480.00, stdev= 0.00, samples=1 00:20:01.172 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:20:01.172 lat (usec) : 100=56.06%, 250=43.79%, 500=0.13%, 750=0.02% 00:20:01.172 cpu : usr=6.90%, sys=12.00%, ctx=9426, majf=0, minf=1 00:20:01.172 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:01.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.172 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.172 issued rwts: total=4608,4818,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:01.172 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:01.172 job2: (groupid=0, jobs=1): err= 0: pid=2388899: Sun Jul 21 11:43:30 2024 00:20:01.172 read: IOPS=3818, BW=14.9MiB/s (15.6MB/s)(14.9MiB/1001msec) 00:20:01.172 slat (nsec): min=8309, max=30013, avg=9047.93, stdev=1054.50 00:20:01.172 clat (usec): min=77, max=342, avg=115.09, stdev=17.06 00:20:01.172 lat (usec): min=86, max=352, avg=124.14, stdev=17.08 00:20:01.172 clat percentiles (usec): 00:20:01.172 | 1.00th=[ 83], 5.00th=[ 
88], 10.00th=[ 91], 20.00th=[ 98], 00:20:01.172 | 30.00th=[ 111], 40.00th=[ 116], 50.00th=[ 119], 60.00th=[ 122], 00:20:01.172 | 70.00th=[ 124], 80.00th=[ 127], 90.00th=[ 133], 95.00th=[ 135], 00:20:01.172 | 99.00th=[ 153], 99.50th=[ 167], 99.90th=[ 273], 99.95th=[ 338], 00:20:01.172 | 99.99th=[ 343] 00:20:01.172 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:20:01.172 slat (nsec): min=10497, max=64822, avg=11218.08, stdev=1765.94 00:20:01.172 clat (usec): min=72, max=864, avg=113.21, stdev=26.78 00:20:01.172 lat (usec): min=82, max=902, avg=124.43, stdev=27.67 00:20:01.172 clat percentiles (usec): 00:20:01.172 | 1.00th=[ 79], 5.00th=[ 84], 10.00th=[ 87], 20.00th=[ 95], 00:20:01.172 | 30.00th=[ 110], 40.00th=[ 115], 50.00th=[ 117], 60.00th=[ 120], 00:20:01.172 | 70.00th=[ 122], 80.00th=[ 125], 90.00th=[ 129], 95.00th=[ 133], 00:20:01.172 | 99.00th=[ 151], 99.50th=[ 161], 99.90th=[ 457], 99.95th=[ 668], 00:20:01.172 | 99.99th=[ 865] 00:20:01.172 bw ( KiB/s): min=17168, max=17168, per=25.12%, avg=17168.00, stdev= 0.00, samples=1 00:20:01.172 iops : min= 4292, max= 4292, avg=4292.00, stdev= 0.00, samples=1 00:20:01.172 lat (usec) : 100=23.07%, 250=76.76%, 500=0.11%, 750=0.04%, 1000=0.01% 00:20:01.172 cpu : usr=6.20%, sys=10.30%, ctx=7918, majf=0, minf=1 00:20:01.172 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:01.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.172 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.172 issued rwts: total=3822,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:01.172 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:01.172 job3: (groupid=0, jobs=1): err= 0: pid=2388905: Sun Jul 21 11:43:30 2024 00:20:01.172 read: IOPS=3796, BW=14.8MiB/s (15.5MB/s)(14.8MiB/1001msec) 00:20:01.172 slat (nsec): min=4413, max=67191, avg=10592.24, stdev=3478.10 00:20:01.172 clat (usec): min=72, max=761, avg=113.55, stdev=21.67 00:20:01.172 lat (usec): min=77, max=787, avg=124.14, stdev=23.05 00:20:01.172 clat percentiles (usec): 00:20:01.172 | 1.00th=[ 77], 5.00th=[ 82], 10.00th=[ 87], 20.00th=[ 103], 00:20:01.172 | 30.00th=[ 112], 40.00th=[ 115], 50.00th=[ 117], 60.00th=[ 119], 00:20:01.172 | 70.00th=[ 121], 80.00th=[ 124], 90.00th=[ 128], 95.00th=[ 133], 00:20:01.172 | 99.00th=[ 147], 99.50th=[ 169], 99.90th=[ 293], 99.95th=[ 506], 00:20:01.172 | 99.99th=[ 758] 00:20:01.172 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:20:01.172 slat (nsec): min=7124, max=67757, avg=13453.75, stdev=3749.38 00:20:01.172 clat (usec): min=68, max=952, avg=110.67, stdev=24.53 00:20:01.172 lat (usec): min=78, max=965, avg=124.12, stdev=25.35 00:20:01.172 clat percentiles (usec): 00:20:01.172 | 1.00th=[ 74], 5.00th=[ 79], 10.00th=[ 82], 20.00th=[ 95], 00:20:01.172 | 30.00th=[ 109], 40.00th=[ 113], 50.00th=[ 115], 60.00th=[ 118], 00:20:01.172 | 70.00th=[ 120], 80.00th=[ 122], 90.00th=[ 126], 95.00th=[ 130], 00:20:01.172 | 99.00th=[ 147], 99.50th=[ 157], 99.90th=[ 229], 99.95th=[ 330], 00:20:01.172 | 99.99th=[ 955] 00:20:01.172 bw ( KiB/s): min=16351, max=16351, per=23.92%, avg=16351.00, stdev= 0.00, samples=1 00:20:01.172 iops : min= 4087, max= 4087, avg=4087.00, stdev= 0.00, samples=1 00:20:01.172 lat (usec) : 100=20.55%, 250=79.29%, 500=0.10%, 750=0.01%, 1000=0.04% 00:20:01.172 cpu : usr=6.80%, sys=11.10%, ctx=7897, majf=0, minf=1 00:20:01.172 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:01.172 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.172 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.172 issued rwts: total=3800,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:01.172 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:01.172 00:20:01.172 Run status group 0 (all jobs): 00:20:01.172 READ: bw=63.5MiB/s (66.6MB/s), 14.8MiB/s-18.0MiB/s (15.5MB/s-18.9MB/s), io=63.6MiB (66.7MB), run=1001-1001msec 00:20:01.172 WRITE: bw=66.8MiB/s (70.0MB/s), 16.0MiB/s-18.8MiB/s (16.8MB/s-19.7MB/s), io=66.8MiB (70.1MB), run=1001-1001msec 00:20:01.172 00:20:01.172 Disk stats (read/write): 00:20:01.172 nvme0n1: ios=3381/3584, merge=0/0, ticks=333/353, in_queue=686, util=84.07% 00:20:01.172 nvme0n2: ios=3584/4047, merge=0/0, ticks=326/357, in_queue=683, util=85.19% 00:20:01.172 nvme0n3: ios=3113/3584, merge=0/0, ticks=324/351, in_queue=675, util=88.44% 00:20:01.172 nvme0n4: ios=3072/3185, merge=0/0, ticks=336/340, in_queue=676, util=89.48% 00:20:01.172 11:43:30 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:20:01.172 [global] 00:20:01.172 thread=1 00:20:01.172 invalidate=1 00:20:01.172 rw=randwrite 00:20:01.172 time_based=1 00:20:01.172 runtime=1 00:20:01.172 ioengine=libaio 00:20:01.172 direct=1 00:20:01.172 bs=4096 00:20:01.172 iodepth=1 00:20:01.172 norandommap=0 00:20:01.172 numjobs=1 00:20:01.172 00:20:01.172 verify_dump=1 00:20:01.172 verify_backlog=512 00:20:01.172 verify_state_save=0 00:20:01.172 do_verify=1 00:20:01.172 verify=crc32c-intel 00:20:01.172 [job0] 00:20:01.172 filename=/dev/nvme0n1 00:20:01.172 [job1] 00:20:01.172 filename=/dev/nvme0n2 00:20:01.172 [job2] 00:20:01.172 filename=/dev/nvme0n3 00:20:01.172 [job3] 00:20:01.172 filename=/dev/nvme0n4 00:20:01.172 Could not set queue depth (nvme0n1) 00:20:01.172 Could not set queue depth (nvme0n2) 00:20:01.172 Could not set queue depth (nvme0n3) 00:20:01.172 Could not set queue depth (nvme0n4) 00:20:01.434 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:01.434 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:01.434 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:01.434 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:01.434 fio-3.35 00:20:01.434 Starting 4 threads 00:20:02.811 00:20:02.811 job0: (groupid=0, jobs=1): err= 0: pid=2389286: Sun Jul 21 11:43:31 2024 00:20:02.811 read: IOPS=4753, BW=18.6MiB/s (19.5MB/s)(18.6MiB/1001msec) 00:20:02.811 slat (nsec): min=8192, max=35012, avg=9980.05, stdev=2107.67 00:20:02.811 clat (usec): min=64, max=158, avg=89.35, stdev=15.18 00:20:02.811 lat (usec): min=74, max=167, avg=99.33, stdev=15.18 00:20:02.811 clat percentiles (usec): 00:20:02.811 | 1.00th=[ 71], 5.00th=[ 74], 10.00th=[ 76], 20.00th=[ 78], 00:20:02.811 | 30.00th=[ 80], 40.00th=[ 82], 50.00th=[ 84], 60.00th=[ 87], 00:20:02.811 | 70.00th=[ 92], 80.00th=[ 108], 90.00th=[ 116], 95.00th=[ 120], 00:20:02.811 | 99.00th=[ 126], 99.50th=[ 129], 99.90th=[ 149], 99.95th=[ 153], 00:20:02.811 | 99.99th=[ 159] 00:20:02.812 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:20:02.812 slat (nsec): min=5899, max=46301, avg=12438.64, stdev=2647.35 00:20:02.812 clat (usec): min=54, max=327, avg=84.84, 
stdev=14.55 00:20:02.812 lat (usec): min=67, max=338, avg=97.28, stdev=14.51 00:20:02.812 clat percentiles (usec): 00:20:02.812 | 1.00th=[ 67], 5.00th=[ 70], 10.00th=[ 72], 20.00th=[ 75], 00:20:02.812 | 30.00th=[ 76], 40.00th=[ 78], 50.00th=[ 80], 60.00th=[ 83], 00:20:02.812 | 70.00th=[ 87], 80.00th=[ 99], 90.00th=[ 110], 95.00th=[ 114], 00:20:02.812 | 99.00th=[ 122], 99.50th=[ 124], 99.90th=[ 141], 99.95th=[ 157], 00:20:02.812 | 99.99th=[ 326] 00:20:02.812 bw ( KiB/s): min=22352, max=22352, per=30.65%, avg=22352.00, stdev= 0.00, samples=1 00:20:02.812 iops : min= 5588, max= 5588, avg=5588.00, stdev= 0.00, samples=1 00:20:02.812 lat (usec) : 100=78.57%, 250=21.42%, 500=0.01% 00:20:02.812 cpu : usr=8.50%, sys=13.90%, ctx=9878, majf=0, minf=1 00:20:02.812 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:02.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.812 issued rwts: total=4758,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.812 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:02.812 job1: (groupid=0, jobs=1): err= 0: pid=2389305: Sun Jul 21 11:43:31 2024 00:20:02.812 read: IOPS=3793, BW=14.8MiB/s (15.5MB/s)(14.8MiB/1001msec) 00:20:02.812 slat (nsec): min=8075, max=33088, avg=9028.62, stdev=908.77 00:20:02.812 clat (usec): min=69, max=187, avg=118.57, stdev=11.23 00:20:02.812 lat (usec): min=78, max=196, avg=127.60, stdev=11.25 00:20:02.812 clat percentiles (usec): 00:20:02.812 | 1.00th=[ 86], 5.00th=[ 103], 10.00th=[ 108], 20.00th=[ 112], 00:20:02.812 | 30.00th=[ 114], 40.00th=[ 117], 50.00th=[ 119], 60.00th=[ 121], 00:20:02.812 | 70.00th=[ 123], 80.00th=[ 127], 90.00th=[ 131], 95.00th=[ 137], 00:20:02.812 | 99.00th=[ 157], 99.50th=[ 167], 99.90th=[ 178], 99.95th=[ 180], 00:20:02.812 | 99.99th=[ 188] 00:20:02.812 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:20:02.812 slat (nsec): min=9988, max=37728, avg=10690.47, stdev=1041.53 00:20:02.812 clat (usec): min=63, max=172, avg=111.08, stdev=11.95 00:20:02.812 lat (usec): min=73, max=183, avg=121.77, stdev=11.91 00:20:02.812 clat percentiles (usec): 00:20:02.812 | 1.00th=[ 78], 5.00th=[ 94], 10.00th=[ 99], 20.00th=[ 103], 00:20:02.812 | 30.00th=[ 106], 40.00th=[ 110], 50.00th=[ 112], 60.00th=[ 114], 00:20:02.812 | 70.00th=[ 116], 80.00th=[ 119], 90.00th=[ 124], 95.00th=[ 129], 00:20:02.812 | 99.00th=[ 153], 99.50th=[ 157], 99.90th=[ 167], 99.95th=[ 172], 00:20:02.812 | 99.99th=[ 174] 00:20:02.812 bw ( KiB/s): min=16384, max=16384, per=22.47%, avg=16384.00, stdev= 0.00, samples=1 00:20:02.812 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:20:02.812 lat (usec) : 100=8.06%, 250=91.94% 00:20:02.812 cpu : usr=5.20%, sys=11.20%, ctx=7893, majf=0, minf=1 00:20:02.812 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:02.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.812 issued rwts: total=3797,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.812 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:02.812 job2: (groupid=0, jobs=1): err= 0: pid=2389325: Sun Jul 21 11:43:31 2024 00:20:02.812 read: IOPS=3792, BW=14.8MiB/s (15.5MB/s)(14.8MiB/1001msec) 00:20:02.812 slat (nsec): min=8281, max=32924, avg=9132.35, stdev=781.78 00:20:02.812 clat (usec): min=74, max=180, 
avg=118.43, stdev= 9.95 00:20:02.812 lat (usec): min=83, max=189, avg=127.56, stdev= 9.94 00:20:02.812 clat percentiles (usec): 00:20:02.812 | 1.00th=[ 94], 5.00th=[ 104], 10.00th=[ 108], 20.00th=[ 112], 00:20:02.812 | 30.00th=[ 114], 40.00th=[ 117], 50.00th=[ 119], 60.00th=[ 121], 00:20:02.812 | 70.00th=[ 123], 80.00th=[ 126], 90.00th=[ 130], 95.00th=[ 135], 00:20:02.812 | 99.00th=[ 149], 99.50th=[ 157], 99.90th=[ 172], 99.95th=[ 176], 00:20:02.812 | 99.99th=[ 182] 00:20:02.812 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:20:02.812 slat (nsec): min=10153, max=39628, avg=10981.01, stdev=1076.18 00:20:02.812 clat (usec): min=67, max=158, avg=110.76, stdev=10.11 00:20:02.812 lat (usec): min=79, max=190, avg=121.75, stdev=10.14 00:20:02.812 clat percentiles (usec): 00:20:02.812 | 1.00th=[ 84], 5.00th=[ 96], 10.00th=[ 100], 20.00th=[ 103], 00:20:02.812 | 30.00th=[ 106], 40.00th=[ 109], 50.00th=[ 111], 60.00th=[ 113], 00:20:02.812 | 70.00th=[ 115], 80.00th=[ 118], 90.00th=[ 123], 95.00th=[ 127], 00:20:02.812 | 99.00th=[ 143], 99.50th=[ 149], 99.90th=[ 157], 99.95th=[ 157], 00:20:02.812 | 99.99th=[ 159] 00:20:02.812 bw ( KiB/s): min=16384, max=16384, per=22.47%, avg=16384.00, stdev= 0.00, samples=1 00:20:02.812 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:20:02.812 lat (usec) : 100=6.75%, 250=93.25% 00:20:02.812 cpu : usr=6.00%, sys=10.50%, ctx=7892, majf=0, minf=2 00:20:02.812 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:02.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.812 issued rwts: total=3796,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.812 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:02.812 job3: (groupid=0, jobs=1): err= 0: pid=2389331: Sun Jul 21 11:43:31 2024 00:20:02.812 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:20:02.812 slat (nsec): min=8253, max=19528, avg=9120.69, stdev=748.30 00:20:02.812 clat (usec): min=72, max=147, avg=94.43, stdev=12.66 00:20:02.812 lat (usec): min=81, max=156, avg=103.55, stdev=12.62 00:20:02.812 clat percentiles (usec): 00:20:02.812 | 1.00th=[ 78], 5.00th=[ 81], 10.00th=[ 82], 20.00th=[ 85], 00:20:02.812 | 30.00th=[ 87], 40.00th=[ 88], 50.00th=[ 91], 60.00th=[ 93], 00:20:02.812 | 70.00th=[ 98], 80.00th=[ 108], 90.00th=[ 116], 95.00th=[ 120], 00:20:02.812 | 99.00th=[ 127], 99.50th=[ 129], 99.90th=[ 139], 99.95th=[ 147], 00:20:02.812 | 99.99th=[ 149] 00:20:02.812 write: IOPS=4930, BW=19.3MiB/s (20.2MB/s)(19.3MiB/1001msec); 0 zone resets 00:20:02.812 slat (nsec): min=10148, max=40567, avg=10837.38, stdev=993.51 00:20:02.812 clat (usec): min=69, max=149, avg=91.31, stdev=13.34 00:20:02.812 lat (usec): min=79, max=185, avg=102.15, stdev=13.38 00:20:02.812 clat percentiles (usec): 00:20:02.812 | 1.00th=[ 74], 5.00th=[ 77], 10.00th=[ 79], 20.00th=[ 81], 00:20:02.812 | 30.00th=[ 83], 40.00th=[ 85], 50.00th=[ 87], 60.00th=[ 90], 00:20:02.812 | 70.00th=[ 94], 80.00th=[ 106], 90.00th=[ 114], 95.00th=[ 118], 00:20:02.812 | 99.00th=[ 124], 99.50th=[ 128], 99.90th=[ 145], 99.95th=[ 147], 00:20:02.812 | 99.99th=[ 151] 00:20:02.812 bw ( KiB/s): min=20480, max=20480, per=28.09%, avg=20480.00, stdev= 0.00, samples=1 00:20:02.812 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:20:02.812 lat (usec) : 100=74.35%, 250=25.65% 00:20:02.812 cpu : usr=5.20%, sys=14.40%, ctx=9543, majf=0, minf=1 00:20:02.812 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:02.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.812 issued rwts: total=4608,4935,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.812 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:02.812 00:20:02.812 Run status group 0 (all jobs): 00:20:02.812 READ: bw=66.2MiB/s (69.4MB/s), 14.8MiB/s-18.6MiB/s (15.5MB/s-19.5MB/s), io=66.2MiB (69.5MB), run=1001-1001msec 00:20:02.812 WRITE: bw=71.2MiB/s (74.7MB/s), 16.0MiB/s-20.0MiB/s (16.8MB/s-20.9MB/s), io=71.3MiB (74.7MB), run=1001-1001msec 00:20:02.812 00:20:02.812 Disk stats (read/write): 00:20:02.812 nvme0n1: ios=4145/4423, merge=0/0, ticks=327/318, in_queue=645, util=83.87% 00:20:02.812 nvme0n2: ios=3072/3455, merge=0/0, ticks=347/357, in_queue=704, util=85.07% 00:20:02.812 nvme0n3: ios=3072/3454, merge=0/0, ticks=346/356, in_queue=702, util=88.42% 00:20:02.812 nvme0n4: ios=4090/4096, merge=0/0, ticks=345/319, in_queue=664, util=89.56% 00:20:02.812 11:43:31 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:20:02.812 [global] 00:20:02.812 thread=1 00:20:02.812 invalidate=1 00:20:02.812 rw=write 00:20:02.812 time_based=1 00:20:02.812 runtime=1 00:20:02.812 ioengine=libaio 00:20:02.812 direct=1 00:20:02.812 bs=4096 00:20:02.812 iodepth=128 00:20:02.812 norandommap=0 00:20:02.812 numjobs=1 00:20:02.812 00:20:02.812 verify_dump=1 00:20:02.812 verify_backlog=512 00:20:02.812 verify_state_save=0 00:20:02.812 do_verify=1 00:20:02.812 verify=crc32c-intel 00:20:02.812 [job0] 00:20:02.812 filename=/dev/nvme0n1 00:20:02.812 [job1] 00:20:02.812 filename=/dev/nvme0n2 00:20:02.812 [job2] 00:20:02.812 filename=/dev/nvme0n3 00:20:02.812 [job3] 00:20:02.812 filename=/dev/nvme0n4 00:20:02.812 Could not set queue depth (nvme0n1) 00:20:02.812 Could not set queue depth (nvme0n2) 00:20:02.812 Could not set queue depth (nvme0n3) 00:20:02.812 Could not set queue depth (nvme0n4) 00:20:03.069 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:03.069 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:03.069 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:03.069 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:03.069 fio-3.35 00:20:03.069 Starting 4 threads 00:20:04.444 00:20:04.444 job0: (groupid=0, jobs=1): err= 0: pid=2389730: Sun Jul 21 11:43:33 2024 00:20:04.444 read: IOPS=7728, BW=30.2MiB/s (31.7MB/s)(30.2MiB/1002msec) 00:20:04.444 slat (nsec): min=1970, max=2472.1k, avg=59435.25, stdev=226833.86 00:20:04.444 clat (usec): min=1393, max=13541, avg=7998.43, stdev=2331.08 00:20:04.444 lat (usec): min=3123, max=14124, avg=8057.87, stdev=2342.66 00:20:04.444 clat percentiles (usec): 00:20:04.444 | 1.00th=[ 5932], 5.00th=[ 6259], 10.00th=[ 6390], 20.00th=[ 6521], 00:20:04.444 | 30.00th=[ 6652], 40.00th=[ 6718], 50.00th=[ 6849], 60.00th=[ 7046], 00:20:04.444 | 70.00th=[ 7308], 80.00th=[11600], 90.00th=[12125], 95.00th=[12911], 00:20:04.444 | 99.00th=[13304], 99.50th=[13304], 99.90th=[13435], 99.95th=[13566], 00:20:04.444 | 99.99th=[13566] 00:20:04.444 write: IOPS=8175, BW=31.9MiB/s (33.5MB/s)(32.0MiB/1002msec); 0 zone resets 00:20:04.444 slat 
(usec): min=2, max=2616, avg=59.12, stdev=225.35 00:20:04.444 clat (usec): min=5508, max=13582, avg=7931.01, stdev=2430.19 00:20:04.444 lat (usec): min=5584, max=14508, avg=7990.12, stdev=2444.11 00:20:04.444 clat percentiles (usec): 00:20:04.444 | 1.00th=[ 5800], 5.00th=[ 5932], 10.00th=[ 6063], 20.00th=[ 6194], 00:20:04.444 | 30.00th=[ 6325], 40.00th=[ 6456], 50.00th=[ 6652], 60.00th=[ 6783], 00:20:04.444 | 70.00th=[ 7242], 80.00th=[11207], 90.00th=[11731], 95.00th=[12911], 00:20:04.444 | 99.00th=[13304], 99.50th=[13435], 99.90th=[13435], 99.95th=[13566], 00:20:04.444 | 99.99th=[13566] 00:20:04.444 bw ( KiB/s): min=32264, max=32768, per=30.73%, avg=32516.00, stdev=356.38, samples=2 00:20:04.444 iops : min= 8066, max= 8192, avg=8129.00, stdev=89.10, samples=2 00:20:04.444 lat (msec) : 2=0.01%, 4=0.10%, 10=75.06%, 20=24.84% 00:20:04.444 cpu : usr=7.29%, sys=8.59%, ctx=1400, majf=0, minf=1 00:20:04.444 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:20:04.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:04.444 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:04.444 issued rwts: total=7744,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:04.444 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:04.444 job1: (groupid=0, jobs=1): err= 0: pid=2389743: Sun Jul 21 11:43:33 2024 00:20:04.445 read: IOPS=6945, BW=27.1MiB/s (28.4MB/s)(27.3MiB/1005msec) 00:20:04.445 slat (usec): min=2, max=1986, avg=68.23, stdev=233.82 00:20:04.445 clat (usec): min=2362, max=15620, avg=8988.10, stdev=2477.12 00:20:04.445 lat (usec): min=2370, max=15629, avg=9056.34, stdev=2493.31 00:20:04.445 clat percentiles (usec): 00:20:04.445 | 1.00th=[ 4752], 5.00th=[ 6521], 10.00th=[ 6849], 20.00th=[ 6980], 00:20:04.445 | 30.00th=[ 7308], 40.00th=[ 8225], 50.00th=[ 8717], 60.00th=[ 8848], 00:20:04.445 | 70.00th=[ 8979], 80.00th=[ 9503], 90.00th=[13173], 95.00th=[14877], 00:20:04.445 | 99.00th=[15401], 99.50th=[15401], 99.90th=[15533], 99.95th=[15533], 00:20:04.445 | 99.99th=[15664] 00:20:04.445 write: IOPS=7132, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1005msec); 0 zone resets 00:20:04.445 slat (usec): min=2, max=2076, avg=68.03, stdev=227.89 00:20:04.445 clat (usec): min=5677, max=15267, avg=8959.47, stdev=2461.15 00:20:04.445 lat (usec): min=5688, max=15270, avg=9027.50, stdev=2477.56 00:20:04.445 clat percentiles (usec): 00:20:04.445 | 1.00th=[ 5997], 5.00th=[ 6521], 10.00th=[ 6652], 20.00th=[ 6849], 00:20:04.445 | 30.00th=[ 7504], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 8586], 00:20:04.445 | 70.00th=[ 8717], 80.00th=[11863], 90.00th=[13304], 95.00th=[14353], 00:20:04.445 | 99.00th=[14746], 99.50th=[15008], 99.90th=[15139], 99.95th=[15139], 00:20:04.445 | 99.99th=[15270] 00:20:04.445 bw ( KiB/s): min=24576, max=32768, per=27.10%, avg=28672.00, stdev=5792.62, samples=2 00:20:04.445 iops : min= 6144, max= 8192, avg=7168.00, stdev=1448.15, samples=2 00:20:04.445 lat (msec) : 4=0.33%, 10=79.79%, 20=19.89% 00:20:04.445 cpu : usr=4.18%, sys=7.87%, ctx=1215, majf=0, minf=1 00:20:04.445 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:20:04.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:04.445 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:04.445 issued rwts: total=6980,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:04.445 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:04.445 job2: (groupid=0, jobs=1): err= 0: pid=2389761: Sun 
Jul 21 11:43:33 2024 00:20:04.445 read: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec) 00:20:04.445 slat (usec): min=2, max=1931, avg=69.89, stdev=237.19 00:20:04.445 clat (usec): min=6647, max=15566, avg=9226.84, stdev=2171.75 00:20:04.445 lat (usec): min=6654, max=15574, avg=9296.74, stdev=2186.95 00:20:04.445 clat percentiles (usec): 00:20:04.445 | 1.00th=[ 7111], 5.00th=[ 7504], 10.00th=[ 7635], 20.00th=[ 7832], 00:20:04.445 | 30.00th=[ 7963], 40.00th=[ 8094], 50.00th=[ 8225], 60.00th=[ 8455], 00:20:04.445 | 70.00th=[ 8848], 80.00th=[11600], 90.00th=[11994], 95.00th=[14877], 00:20:04.445 | 99.00th=[15401], 99.50th=[15533], 99.90th=[15533], 99.95th=[15533], 00:20:04.445 | 99.99th=[15533] 00:20:04.445 write: IOPS=7114, BW=27.8MiB/s (29.1MB/s)(27.8MiB/1002msec); 0 zone resets 00:20:04.445 slat (usec): min=2, max=1863, avg=70.41, stdev=239.14 00:20:04.445 clat (usec): min=1356, max=15266, avg=9142.01, stdev=2269.08 00:20:04.445 lat (usec): min=3043, max=15269, avg=9212.41, stdev=2280.97 00:20:04.445 clat percentiles (usec): 00:20:04.445 | 1.00th=[ 6783], 5.00th=[ 7111], 10.00th=[ 7373], 20.00th=[ 7504], 00:20:04.445 | 30.00th=[ 7701], 40.00th=[ 7767], 50.00th=[ 8029], 60.00th=[ 8356], 00:20:04.445 | 70.00th=[10159], 80.00th=[11207], 90.00th=[13042], 95.00th=[14353], 00:20:04.445 | 99.00th=[14746], 99.50th=[15008], 99.90th=[15270], 99.95th=[15270], 00:20:04.445 | 99.99th=[15270] 00:20:04.445 bw ( KiB/s): min=27344, max=28672, per=26.47%, avg=28008.00, stdev=939.04, samples=2 00:20:04.445 iops : min= 6836, max= 7168, avg=7002.00, stdev=234.76, samples=2 00:20:04.445 lat (msec) : 2=0.01%, 4=0.12%, 10=73.12%, 20=26.75% 00:20:04.445 cpu : usr=4.40%, sys=7.29%, ctx=1176, majf=0, minf=1 00:20:04.445 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:20:04.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:04.445 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:04.445 issued rwts: total=6656,7129,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:04.445 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:04.445 job3: (groupid=0, jobs=1): err= 0: pid=2389768: Sun Jul 21 11:43:33 2024 00:20:04.445 read: IOPS=3595, BW=14.0MiB/s (14.7MB/s)(14.1MiB/1005msec) 00:20:04.445 slat (usec): min=2, max=4483, avg=126.58, stdev=467.04 00:20:04.445 clat (usec): min=4132, max=38769, avg=16737.25, stdev=11258.00 00:20:04.445 lat (usec): min=5758, max=38783, avg=16863.83, stdev=11339.05 00:20:04.445 clat percentiles (usec): 00:20:04.445 | 1.00th=[ 7308], 5.00th=[ 7570], 10.00th=[ 7701], 20.00th=[ 7898], 00:20:04.445 | 30.00th=[ 8094], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 8979], 00:20:04.445 | 70.00th=[27132], 80.00th=[32637], 90.00th=[34341], 95.00th=[34866], 00:20:04.445 | 99.00th=[35390], 99.50th=[36439], 99.90th=[38536], 99.95th=[38536], 00:20:04.445 | 99.99th=[38536] 00:20:04.445 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:20:04.445 slat (usec): min=2, max=4576, avg=127.29, stdev=479.79 00:20:04.445 clat (usec): min=6790, max=38329, avg=16286.50, stdev=11285.73 00:20:04.445 lat (usec): min=6799, max=38795, avg=16413.79, stdev=11373.31 00:20:04.445 clat percentiles (usec): 00:20:04.445 | 1.00th=[ 6915], 5.00th=[ 7111], 10.00th=[ 7242], 20.00th=[ 7504], 00:20:04.445 | 30.00th=[ 7767], 40.00th=[ 7963], 50.00th=[ 8225], 60.00th=[ 8586], 00:20:04.445 | 70.00th=[26870], 80.00th=[32113], 90.00th=[33424], 95.00th=[34341], 00:20:04.445 | 99.00th=[35390], 99.50th=[35390], 
99.90th=[37487], 99.95th=[38011], 00:20:04.445 | 99.99th=[38536] 00:20:04.445 bw ( KiB/s): min= 8600, max=23384, per=15.11%, avg=15992.00, stdev=10453.87, samples=2 00:20:04.445 iops : min= 2150, max= 5846, avg=3998.00, stdev=2613.47, samples=2 00:20:04.445 lat (msec) : 10=62.16%, 20=1.17%, 50=36.67% 00:20:04.445 cpu : usr=2.89%, sys=4.78%, ctx=721, majf=0, minf=1 00:20:04.445 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:20:04.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:04.445 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:04.445 issued rwts: total=3613,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:04.445 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:04.445 00:20:04.445 Run status group 0 (all jobs): 00:20:04.445 READ: bw=97.1MiB/s (102MB/s), 14.0MiB/s-30.2MiB/s (14.7MB/s-31.7MB/s), io=97.6MiB (102MB), run=1002-1005msec 00:20:04.445 WRITE: bw=103MiB/s (108MB/s), 15.9MiB/s-31.9MiB/s (16.7MB/s-33.5MB/s), io=104MiB (109MB), run=1002-1005msec 00:20:04.445 00:20:04.445 Disk stats (read/write): 00:20:04.445 nvme0n1: ios=6944/7168, merge=0/0, ticks=12454/12726, in_queue=25180, util=84.17% 00:20:04.445 nvme0n2: ios=5276/5632, merge=0/0, ticks=36989/37877, in_queue=74866, util=85.19% 00:20:04.445 nvme0n3: ios=5767/6144, merge=0/0, ticks=12220/13326, in_queue=25546, util=88.35% 00:20:04.445 nvme0n4: ios=3301/3584, merge=0/0, ticks=12597/13312, in_queue=25909, util=89.38% 00:20:04.445 11:43:33 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:20:04.445 [global] 00:20:04.445 thread=1 00:20:04.445 invalidate=1 00:20:04.445 rw=randwrite 00:20:04.445 time_based=1 00:20:04.445 runtime=1 00:20:04.445 ioengine=libaio 00:20:04.445 direct=1 00:20:04.445 bs=4096 00:20:04.445 iodepth=128 00:20:04.445 norandommap=0 00:20:04.445 numjobs=1 00:20:04.445 00:20:04.445 verify_dump=1 00:20:04.445 verify_backlog=512 00:20:04.445 verify_state_save=0 00:20:04.445 do_verify=1 00:20:04.445 verify=crc32c-intel 00:20:04.445 [job0] 00:20:04.445 filename=/dev/nvme0n1 00:20:04.445 [job1] 00:20:04.445 filename=/dev/nvme0n2 00:20:04.445 [job2] 00:20:04.445 filename=/dev/nvme0n3 00:20:04.445 [job3] 00:20:04.445 filename=/dev/nvme0n4 00:20:04.445 Could not set queue depth (nvme0n1) 00:20:04.445 Could not set queue depth (nvme0n2) 00:20:04.445 Could not set queue depth (nvme0n3) 00:20:04.445 Could not set queue depth (nvme0n4) 00:20:04.701 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:04.701 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:04.701 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:04.701 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:04.701 fio-3.35 00:20:04.701 Starting 4 threads 00:20:06.081 00:20:06.081 job0: (groupid=0, jobs=1): err= 0: pid=2390156: Sun Jul 21 11:43:35 2024 00:20:06.081 read: IOPS=8652, BW=33.8MiB/s (35.4MB/s)(34.0MiB/1006msec) 00:20:06.081 slat (usec): min=2, max=3105, avg=55.41, stdev=208.46 00:20:06.081 clat (usec): min=5627, max=11635, avg=7381.90, stdev=1031.79 00:20:06.081 lat (usec): min=5636, max=11722, avg=7437.31, stdev=1040.65 00:20:06.081 clat percentiles (usec): 00:20:06.081 | 1.00th=[ 5866], 5.00th=[ 6259], 
10.00th=[ 6521], 20.00th=[ 6587], 00:20:06.081 | 30.00th=[ 6718], 40.00th=[ 6783], 50.00th=[ 6915], 60.00th=[ 7046], 00:20:06.081 | 70.00th=[ 7635], 80.00th=[ 8717], 90.00th=[ 8979], 95.00th=[ 9110], 00:20:06.081 | 99.00th=[ 9634], 99.50th=[10290], 99.90th=[10814], 99.95th=[11469], 00:20:06.081 | 99.99th=[11600] 00:20:06.081 write: IOPS=8899, BW=34.8MiB/s (36.5MB/s)(35.0MiB/1006msec); 0 zone resets 00:20:06.081 slat (usec): min=2, max=2594, avg=53.74, stdev=197.18 00:20:06.081 clat (usec): min=1330, max=13363, avg=7080.67, stdev=1189.43 00:20:06.081 lat (usec): min=1343, max=15950, avg=7134.41, stdev=1198.63 00:20:06.081 clat percentiles (usec): 00:20:06.081 | 1.00th=[ 4948], 5.00th=[ 5735], 10.00th=[ 6194], 20.00th=[ 6259], 00:20:06.081 | 30.00th=[ 6390], 40.00th=[ 6456], 50.00th=[ 6587], 60.00th=[ 6718], 00:20:06.081 | 70.00th=[ 7701], 80.00th=[ 8356], 90.00th=[ 8717], 95.00th=[ 8848], 00:20:06.081 | 99.00th=[10159], 99.50th=[11731], 99.90th=[13304], 99.95th=[13304], 00:20:06.081 | 99.99th=[13304] 00:20:06.081 bw ( KiB/s): min=30144, max=40464, per=31.97%, avg=35304.00, stdev=7297.34, samples=2 00:20:06.081 iops : min= 7536, max=10116, avg=8826.00, stdev=1824.34, samples=2 00:20:06.081 lat (msec) : 2=0.07%, 4=0.18%, 10=98.88%, 20=0.87% 00:20:06.081 cpu : usr=3.98%, sys=8.96%, ctx=1163, majf=0, minf=1 00:20:06.081 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:20:06.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.081 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:06.081 issued rwts: total=8704,8953,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.081 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:06.081 job1: (groupid=0, jobs=1): err= 0: pid=2390172: Sun Jul 21 11:43:35 2024 00:20:06.081 read: IOPS=9679, BW=37.8MiB/s (39.6MB/s)(38.0MiB/1005msec) 00:20:06.081 slat (usec): min=2, max=1108, avg=50.56, stdev=182.37 00:20:06.081 clat (usec): min=5288, max=11516, avg=6771.37, stdev=452.82 00:20:06.081 lat (usec): min=5291, max=11519, avg=6821.93, stdev=444.07 00:20:06.081 clat percentiles (usec): 00:20:06.081 | 1.00th=[ 5735], 5.00th=[ 5997], 10.00th=[ 6325], 20.00th=[ 6587], 00:20:06.081 | 30.00th=[ 6652], 40.00th=[ 6718], 50.00th=[ 6783], 60.00th=[ 6849], 00:20:06.081 | 70.00th=[ 6915], 80.00th=[ 6980], 90.00th=[ 7111], 95.00th=[ 7242], 00:20:06.081 | 99.00th=[ 7832], 99.50th=[ 8979], 99.90th=[11469], 99.95th=[11469], 00:20:06.081 | 99.99th=[11469] 00:20:06.081 write: IOPS=9681, BW=37.8MiB/s (39.7MB/s)(38.0MiB/1005msec); 0 zone resets 00:20:06.081 slat (usec): min=2, max=1208, avg=48.19, stdev=171.95 00:20:06.081 clat (usec): min=1298, max=7820, avg=6338.09, stdev=454.25 00:20:06.081 lat (usec): min=1309, max=7835, avg=6386.28, stdev=450.66 00:20:06.081 clat percentiles (usec): 00:20:06.081 | 1.00th=[ 5080], 5.00th=[ 5604], 10.00th=[ 5932], 20.00th=[ 6194], 00:20:06.081 | 30.00th=[ 6259], 40.00th=[ 6325], 50.00th=[ 6390], 60.00th=[ 6456], 00:20:06.081 | 70.00th=[ 6521], 80.00th=[ 6587], 90.00th=[ 6718], 95.00th=[ 6849], 00:20:06.081 | 99.00th=[ 7177], 99.50th=[ 7308], 99.90th=[ 7439], 99.95th=[ 7635], 00:20:06.081 | 99.99th=[ 7832] 00:20:06.081 bw ( KiB/s): min=37464, max=40360, per=35.23%, avg=38912.00, stdev=2047.78, samples=2 00:20:06.081 iops : min= 9366, max=10090, avg=9728.00, stdev=511.95, samples=2 00:20:06.081 lat (msec) : 2=0.12%, 4=0.16%, 10=99.55%, 20=0.16% 00:20:06.081 cpu : usr=5.58%, sys=8.07%, ctx=1251, majf=0, minf=1 00:20:06.081 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:20:06.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.081 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:06.081 issued rwts: total=9728,9730,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.081 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:06.081 job2: (groupid=0, jobs=1): err= 0: pid=2390198: Sun Jul 21 11:43:35 2024 00:20:06.081 read: IOPS=5404, BW=21.1MiB/s (22.1MB/s)(21.2MiB/1002msec) 00:20:06.081 slat (usec): min=2, max=1472, avg=91.99, stdev=277.62 00:20:06.081 clat (usec): min=962, max=17953, avg=11838.78, stdev=4098.75 00:20:06.081 lat (usec): min=1958, max=17963, avg=11930.77, stdev=4125.84 00:20:06.081 clat percentiles (usec): 00:20:06.081 | 1.00th=[ 5407], 5.00th=[ 7504], 10.00th=[ 7832], 20.00th=[ 8094], 00:20:06.081 | 30.00th=[ 8225], 40.00th=[ 8356], 50.00th=[ 8586], 60.00th=[15533], 00:20:06.081 | 70.00th=[15926], 80.00th=[16319], 90.00th=[16712], 95.00th=[16712], 00:20:06.081 | 99.00th=[17433], 99.50th=[17695], 99.90th=[17695], 99.95th=[17957], 00:20:06.081 | 99.99th=[17957] 00:20:06.081 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:20:06.081 slat (usec): min=2, max=1877, avg=84.58, stdev=257.01 00:20:06.081 clat (usec): min=6114, max=17607, avg=11097.16, stdev=3785.84 00:20:06.081 lat (usec): min=7088, max=17616, avg=11181.74, stdev=3811.28 00:20:06.081 clat percentiles (usec): 00:20:06.081 | 1.00th=[ 6652], 5.00th=[ 7308], 10.00th=[ 7439], 20.00th=[ 7570], 00:20:06.081 | 30.00th=[ 7701], 40.00th=[ 7832], 50.00th=[ 8094], 60.00th=[14484], 00:20:06.081 | 70.00th=[14877], 80.00th=[15401], 90.00th=[15664], 95.00th=[16057], 00:20:06.081 | 99.00th=[16450], 99.50th=[16712], 99.90th=[17433], 99.95th=[17433], 00:20:06.081 | 99.99th=[17695] 00:20:06.081 bw ( KiB/s): min=16384, max=28672, per=20.40%, avg=22528.00, stdev=8688.93, samples=2 00:20:06.081 iops : min= 4096, max= 7168, avg=5632.00, stdev=2172.23, samples=2 00:20:06.081 lat (usec) : 1000=0.01% 00:20:06.081 lat (msec) : 2=0.10%, 4=0.16%, 10=53.13%, 20=46.60% 00:20:06.081 cpu : usr=2.70%, sys=6.19%, ctx=1151, majf=0, minf=1 00:20:06.081 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:20:06.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.081 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:06.081 issued rwts: total=5415,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.081 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:06.081 job3: (groupid=0, jobs=1): err= 0: pid=2390210: Sun Jul 21 11:43:35 2024 00:20:06.081 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:20:06.081 slat (usec): min=2, max=7138, avg=148.82, stdev=555.91 00:20:06.081 clat (usec): min=13363, max=36388, avg=19433.47, stdev=6817.95 00:20:06.081 lat (usec): min=14063, max=36400, avg=19582.29, stdev=6854.01 00:20:06.081 clat percentiles (usec): 00:20:06.081 | 1.00th=[14746], 5.00th=[15270], 10.00th=[15533], 20.00th=[15795], 00:20:06.081 | 30.00th=[16057], 40.00th=[16319], 50.00th=[16450], 60.00th=[16712], 00:20:06.081 | 70.00th=[16909], 80.00th=[17695], 90.00th=[34341], 95.00th=[34866], 00:20:06.081 | 99.00th=[35390], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:20:06.081 | 99.99th=[36439] 00:20:06.081 write: IOPS=3440, BW=13.4MiB/s (14.1MB/s)(13.5MiB/1006msec); 0 zone resets 00:20:06.081 slat (usec): min=2, max=7016, avg=153.49, stdev=627.76 00:20:06.081 clat (usec): min=1543, 
max=36625, avg=19494.68, stdev=7938.71 00:20:06.081 lat (usec): min=6362, max=40795, avg=19648.17, stdev=7988.37 00:20:06.081 clat percentiles (usec): 00:20:06.081 | 1.00th=[11731], 5.00th=[14091], 10.00th=[14484], 20.00th=[14746], 00:20:06.081 | 30.00th=[15008], 40.00th=[15270], 50.00th=[15533], 60.00th=[15664], 00:20:06.081 | 70.00th=[16188], 80.00th=[32900], 90.00th=[33817], 95.00th=[34341], 00:20:06.081 | 99.00th=[35390], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439], 00:20:06.081 | 99.99th=[36439] 00:20:06.081 bw ( KiB/s): min=10280, max=16384, per=12.07%, avg=13332.00, stdev=4316.18, samples=2 00:20:06.081 iops : min= 2570, max= 4096, avg=3333.00, stdev=1079.04, samples=2 00:20:06.081 lat (msec) : 2=0.02%, 10=0.49%, 20=77.99%, 50=21.51% 00:20:06.081 cpu : usr=2.69%, sys=2.99%, ctx=922, majf=0, minf=1 00:20:06.081 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:20:06.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.081 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:06.081 issued rwts: total=3072,3461,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.081 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:06.081 00:20:06.081 Run status group 0 (all jobs): 00:20:06.082 READ: bw=105MiB/s (110MB/s), 11.9MiB/s-37.8MiB/s (12.5MB/s-39.6MB/s), io=105MiB (110MB), run=1002-1006msec 00:20:06.082 WRITE: bw=108MiB/s (113MB/s), 13.4MiB/s-37.8MiB/s (14.1MB/s-39.7MB/s), io=109MiB (114MB), run=1002-1006msec 00:20:06.082 00:20:06.082 Disk stats (read/write): 00:20:06.082 nvme0n1: ios=7217/7646, merge=0/0, ticks=49609/51033, in_queue=100642, util=82.55% 00:20:06.082 nvme0n2: ios=7680/8005, merge=0/0, ticks=50962/49890, in_queue=100852, util=83.47% 00:20:06.082 nvme0n3: ios=3900/4096, merge=0/0, ticks=16994/16536, in_queue=33530, util=87.70% 00:20:06.082 nvme0n4: ios=2725/3072, merge=0/0, ticks=15010/16891, in_queue=31901, util=89.14% 00:20:06.082 11:43:35 -- target/fio.sh@55 -- # sync 00:20:06.082 11:43:35 -- target/fio.sh@59 -- # fio_pid=2390279 00:20:06.082 11:43:35 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:20:06.082 11:43:35 -- target/fio.sh@61 -- # sleep 3 00:20:06.082 [global] 00:20:06.082 thread=1 00:20:06.082 invalidate=1 00:20:06.082 rw=read 00:20:06.082 time_based=1 00:20:06.082 runtime=10 00:20:06.082 ioengine=libaio 00:20:06.082 direct=1 00:20:06.082 bs=4096 00:20:06.082 iodepth=1 00:20:06.082 norandommap=1 00:20:06.082 numjobs=1 00:20:06.082 00:20:06.082 [job0] 00:20:06.082 filename=/dev/nvme0n1 00:20:06.082 [job1] 00:20:06.082 filename=/dev/nvme0n2 00:20:06.082 [job2] 00:20:06.082 filename=/dev/nvme0n3 00:20:06.082 [job3] 00:20:06.082 filename=/dev/nvme0n4 00:20:06.082 Could not set queue depth (nvme0n1) 00:20:06.082 Could not set queue depth (nvme0n2) 00:20:06.082 Could not set queue depth (nvme0n3) 00:20:06.082 Could not set queue depth (nvme0n4) 00:20:06.342 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:06.342 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:06.342 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:06.342 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:06.342 fio-3.35 00:20:06.342 Starting 4 threads 00:20:08.863 11:43:38 -- target/fio.sh@63 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:20:09.120 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=100507648, buflen=4096 00:20:09.120 fio: pid=2390624, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:09.120 11:43:38 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:20:09.120 11:43:38 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:09.120 11:43:38 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:20:09.120 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=106762240, buflen=4096 00:20:09.120 fio: pid=2390618, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:09.376 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=31727616, buflen=4096 00:20:09.376 fio: pid=2390585, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:09.376 11:43:38 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:09.376 11:43:38 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:20:09.633 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=42672128, buflen=4096 00:20:09.633 fio: pid=2390598, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:09.633 11:43:38 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:09.633 11:43:38 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:20:09.633 00:20:09.633 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2390585: Sun Jul 21 11:43:38 2024 00:20:09.633 read: IOPS=8124, BW=31.7MiB/s (33.3MB/s)(94.3MiB/2970msec) 00:20:09.633 slat (usec): min=6, max=16936, avg=10.88, stdev=166.80 00:20:09.633 clat (usec): min=47, max=20827, avg=110.69, stdev=134.55 00:20:09.633 lat (usec): min=55, max=20836, avg=121.57, stdev=214.08 00:20:09.633 clat percentiles (usec): 00:20:09.633 | 1.00th=[ 58], 5.00th=[ 73], 10.00th=[ 81], 20.00th=[ 101], 00:20:09.633 | 30.00th=[ 108], 40.00th=[ 111], 50.00th=[ 114], 60.00th=[ 117], 00:20:09.633 | 70.00th=[ 119], 80.00th=[ 122], 90.00th=[ 126], 95.00th=[ 131], 00:20:09.633 | 99.00th=[ 155], 99.50th=[ 161], 99.90th=[ 169], 99.95th=[ 176], 00:20:09.633 | 99.99th=[ 180] 00:20:09.633 bw ( KiB/s): min=31592, max=33320, per=25.08%, avg=32046.40, stdev=716.65, samples=5 00:20:09.633 iops : min= 7898, max= 8330, avg=8011.60, stdev=179.16, samples=5 00:20:09.633 lat (usec) : 50=0.09%, 100=19.03%, 250=80.87%, 500=0.01% 00:20:09.633 lat (msec) : 50=0.01% 00:20:09.633 cpu : usr=4.08%, sys=10.88%, ctx=24136, majf=0, minf=1 00:20:09.633 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:09.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.633 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.633 issued rwts: total=24131,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:09.633 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:09.634 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2390598: Sun Jul 21 11:43:38 2024 00:20:09.634 read: IOPS=8431, BW=32.9MiB/s 
(34.5MB/s)(105MiB/3179msec) 00:20:09.634 slat (usec): min=3, max=19776, avg=11.51, stdev=189.30 00:20:09.634 clat (usec): min=43, max=19154, avg=105.13, stdev=118.56 00:20:09.634 lat (usec): min=46, max=19845, avg=116.63, stdev=223.13 00:20:09.634 clat percentiles (usec): 00:20:09.634 | 1.00th=[ 53], 5.00th=[ 59], 10.00th=[ 64], 20.00th=[ 81], 00:20:09.634 | 30.00th=[ 102], 40.00th=[ 109], 50.00th=[ 113], 60.00th=[ 115], 00:20:09.634 | 70.00th=[ 118], 80.00th=[ 121], 90.00th=[ 126], 95.00th=[ 130], 00:20:09.634 | 99.00th=[ 153], 99.50th=[ 159], 99.90th=[ 172], 99.95th=[ 174], 00:20:09.634 | 99.99th=[ 182] 00:20:09.634 bw ( KiB/s): min=31584, max=39344, per=26.03%, avg=33260.00, stdev=3048.26, samples=6 00:20:09.634 iops : min= 7896, max= 9836, avg=8315.00, stdev=762.06, samples=6 00:20:09.634 lat (usec) : 50=0.32%, 100=27.83%, 250=71.84%, 500=0.01% 00:20:09.634 lat (msec) : 20=0.01% 00:20:09.634 cpu : usr=3.43%, sys=12.11%, ctx=26810, majf=0, minf=1 00:20:09.634 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:09.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.634 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.634 issued rwts: total=26803,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:09.634 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:09.634 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2390618: Sun Jul 21 11:43:38 2024 00:20:09.634 read: IOPS=9305, BW=36.3MiB/s (38.1MB/s)(102MiB/2801msec) 00:20:09.634 slat (usec): min=8, max=8916, avg= 9.83, stdev=73.46 00:20:09.634 clat (usec): min=58, max=27401, avg=95.71, stdev=169.94 00:20:09.634 lat (usec): min=76, max=27410, avg=105.55, stdev=185.11 00:20:09.634 clat percentiles (usec): 00:20:09.634 | 1.00th=[ 78], 5.00th=[ 81], 10.00th=[ 82], 20.00th=[ 85], 00:20:09.634 | 30.00th=[ 86], 40.00th=[ 88], 50.00th=[ 89], 60.00th=[ 91], 00:20:09.634 | 70.00th=[ 94], 80.00th=[ 101], 90.00th=[ 124], 95.00th=[ 129], 00:20:09.634 | 99.00th=[ 139], 99.50th=[ 147], 99.90th=[ 167], 99.95th=[ 176], 00:20:09.634 | 99.99th=[ 498] 00:20:09.634 bw ( KiB/s): min=30040, max=40664, per=30.04%, avg=38384.00, stdev=4667.90, samples=5 00:20:09.634 iops : min= 7510, max=10166, avg=9596.00, stdev=1166.97, samples=5 00:20:09.634 lat (usec) : 100=79.10%, 250=20.89%, 500=0.01%, 1000=0.01% 00:20:09.634 lat (msec) : 50=0.01% 00:20:09.634 cpu : usr=3.54%, sys=13.71%, ctx=26068, majf=0, minf=1 00:20:09.634 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:09.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.634 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.634 issued rwts: total=26066,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:09.634 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:09.634 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2390624: Sun Jul 21 11:43:38 2024 00:20:09.634 read: IOPS=9358, BW=36.6MiB/s (38.3MB/s)(95.9MiB/2622msec) 00:20:09.634 slat (nsec): min=8120, max=43029, avg=8940.42, stdev=940.00 00:20:09.634 clat (usec): min=73, max=308, avg=95.83, stdev=15.67 00:20:09.634 lat (usec): min=82, max=317, avg=104.77, stdev=15.77 00:20:09.634 clat percentiles (usec): 00:20:09.634 | 1.00th=[ 79], 5.00th=[ 82], 10.00th=[ 84], 20.00th=[ 86], 00:20:09.634 | 30.00th=[ 87], 40.00th=[ 89], 50.00th=[ 90], 60.00th=[ 93], 00:20:09.634 | 
70.00th=[ 95], 80.00th=[ 102], 90.00th=[ 125], 95.00th=[ 130], 00:20:09.634 | 99.00th=[ 139], 99.50th=[ 147], 99.90th=[ 172], 99.95th=[ 176], 00:20:09.634 | 99.99th=[ 180] 00:20:09.634 bw ( KiB/s): min=30064, max=40416, per=29.89%, avg=38185.60, stdev=4547.62, samples=5 00:20:09.634 iops : min= 7516, max=10104, avg=9546.40, stdev=1136.90, samples=5 00:20:09.634 lat (usec) : 100=78.23%, 250=21.77%, 500=0.01% 00:20:09.634 cpu : usr=4.20%, sys=13.05%, ctx=24539, majf=0, minf=2 00:20:09.634 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:09.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.634 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.634 issued rwts: total=24539,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:09.634 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:09.634 00:20:09.634 Run status group 0 (all jobs): 00:20:09.634 READ: bw=125MiB/s (131MB/s), 31.7MiB/s-36.6MiB/s (33.3MB/s-38.3MB/s), io=397MiB (416MB), run=2622-3179msec 00:20:09.634 00:20:09.634 Disk stats (read/write): 00:20:09.634 nvme0n1: ios=22534/0, merge=0/0, ticks=2394/0, in_queue=2394, util=93.12% 00:20:09.634 nvme0n2: ios=25732/0, merge=0/0, ticks=2580/0, in_queue=2580, util=93.49% 00:20:09.634 nvme0n3: ios=24588/0, merge=0/0, ticks=2144/0, in_queue=2144, util=96.03% 00:20:09.634 nvme0n4: ios=24464/0, merge=0/0, ticks=2159/0, in_queue=2159, util=96.46% 00:20:09.891 11:43:39 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:09.891 11:43:39 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:20:09.891 11:43:39 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:09.891 11:43:39 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:20:10.148 11:43:39 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:10.148 11:43:39 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:20:10.405 11:43:39 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:10.405 11:43:39 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:20:10.663 11:43:39 -- target/fio.sh@69 -- # fio_status=0 00:20:10.663 11:43:39 -- target/fio.sh@70 -- # wait 2390279 00:20:10.663 11:43:39 -- target/fio.sh@70 -- # fio_status=4 00:20:10.663 11:43:39 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:11.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:11.596 11:43:40 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:11.596 11:43:40 -- common/autotest_common.sh@1198 -- # local i=0 00:20:11.596 11:43:40 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:20:11.596 11:43:40 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:11.596 11:43:40 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:20:11.596 11:43:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:11.596 11:43:40 -- common/autotest_common.sh@1210 -- # return 0 00:20:11.596 11:43:40 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:20:11.596 11:43:40 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio 
failed as expected' 00:20:11.596 nvmf hotplug test: fio failed as expected 00:20:11.596 11:43:40 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:11.596 11:43:40 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:20:11.596 11:43:40 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:20:11.596 11:43:40 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:20:11.596 11:43:40 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:20:11.596 11:43:40 -- target/fio.sh@91 -- # nvmftestfini 00:20:11.596 11:43:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:11.596 11:43:40 -- nvmf/common.sh@116 -- # sync 00:20:11.596 11:43:40 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:20:11.596 11:43:40 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:20:11.596 11:43:40 -- nvmf/common.sh@119 -- # set +e 00:20:11.596 11:43:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:11.596 11:43:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:20:11.596 rmmod nvme_rdma 00:20:11.596 rmmod nvme_fabrics 00:20:11.596 11:43:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:11.855 11:43:41 -- nvmf/common.sh@123 -- # set -e 00:20:11.855 11:43:41 -- nvmf/common.sh@124 -- # return 0 00:20:11.855 11:43:41 -- nvmf/common.sh@477 -- # '[' -n 2387381 ']' 00:20:11.855 11:43:41 -- nvmf/common.sh@478 -- # killprocess 2387381 00:20:11.855 11:43:41 -- common/autotest_common.sh@926 -- # '[' -z 2387381 ']' 00:20:11.855 11:43:41 -- common/autotest_common.sh@930 -- # kill -0 2387381 00:20:11.855 11:43:41 -- common/autotest_common.sh@931 -- # uname 00:20:11.855 11:43:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:11.855 11:43:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2387381 00:20:11.855 11:43:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:11.855 11:43:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:11.855 11:43:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2387381' 00:20:11.855 killing process with pid 2387381 00:20:11.855 11:43:41 -- common/autotest_common.sh@945 -- # kill 2387381 00:20:11.855 11:43:41 -- common/autotest_common.sh@950 -- # wait 2387381 00:20:12.114 11:43:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:12.114 11:43:41 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:20:12.114 00:20:12.114 real 0m28.024s 00:20:12.114 user 2m5.741s 00:20:12.114 sys 0m11.802s 00:20:12.114 11:43:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:12.114 11:43:41 -- common/autotest_common.sh@10 -- # set +x 00:20:12.114 ************************************ 00:20:12.114 END TEST nvmf_fio_target 00:20:12.114 ************************************ 00:20:12.114 11:43:41 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:20:12.114 11:43:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:12.114 11:43:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:12.114 11:43:41 -- common/autotest_common.sh@10 -- # set +x 00:20:12.114 ************************************ 00:20:12.114 START TEST nvmf_bdevio 00:20:12.114 ************************************ 00:20:12.114 11:43:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:20:12.114 * Looking for test storage... 
00:20:12.114 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:12.114 11:43:41 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:12.114 11:43:41 -- nvmf/common.sh@7 -- # uname -s 00:20:12.114 11:43:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:12.114 11:43:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:12.114 11:43:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:12.114 11:43:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:12.114 11:43:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:12.114 11:43:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:12.114 11:43:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:12.114 11:43:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:12.114 11:43:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:12.114 11:43:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:12.114 11:43:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:12.114 11:43:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:12.114 11:43:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:12.114 11:43:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:12.114 11:43:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:12.114 11:43:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:12.114 11:43:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:12.114 11:43:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:12.114 11:43:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:12.115 11:43:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.115 11:43:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.115 11:43:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.115 11:43:41 -- paths/export.sh@5 -- # export PATH 00:20:12.115 11:43:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.115 11:43:41 -- nvmf/common.sh@46 -- # : 0 00:20:12.115 11:43:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:12.115 11:43:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:12.115 11:43:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:12.115 11:43:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:12.115 11:43:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:12.115 11:43:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:12.115 11:43:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:12.115 11:43:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:12.115 11:43:41 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:12.115 11:43:41 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:12.115 11:43:41 -- target/bdevio.sh@14 -- # nvmftestinit 00:20:12.115 11:43:41 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:20:12.115 11:43:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:12.115 11:43:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:12.115 11:43:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:12.115 11:43:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:12.115 11:43:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.115 11:43:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:12.115 11:43:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.115 11:43:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:12.115 11:43:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:12.115 11:43:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:12.115 11:43:41 -- common/autotest_common.sh@10 -- # set +x 00:20:20.314 11:43:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:20.314 11:43:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:20.314 11:43:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:20.314 11:43:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:20.314 11:43:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:20.314 11:43:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:20.314 11:43:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:20.314 11:43:48 -- nvmf/common.sh@294 -- # net_devs=() 00:20:20.314 11:43:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:20.314 11:43:48 -- nvmf/common.sh@295 
-- # e810=() 00:20:20.314 11:43:48 -- nvmf/common.sh@295 -- # local -ga e810 00:20:20.314 11:43:48 -- nvmf/common.sh@296 -- # x722=() 00:20:20.314 11:43:48 -- nvmf/common.sh@296 -- # local -ga x722 00:20:20.314 11:43:48 -- nvmf/common.sh@297 -- # mlx=() 00:20:20.314 11:43:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:20.314 11:43:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:20.314 11:43:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:20.314 11:43:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:20.314 11:43:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:20.314 11:43:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:20.314 11:43:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:20.314 11:43:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:20.314 11:43:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:20.314 11:43:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:20.314 11:43:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:20.314 11:43:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:20.314 11:43:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:20.314 11:43:48 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:20:20.314 11:43:48 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:20:20.314 11:43:48 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:20:20.314 11:43:48 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:20:20.314 11:43:48 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:20:20.314 11:43:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:20.314 11:43:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:20.314 11:43:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:20.315 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:20.315 11:43:48 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:20.315 11:43:48 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:20.315 11:43:48 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:20.315 11:43:48 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:20.315 11:43:48 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:20.315 11:43:48 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:20.315 11:43:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:20.315 11:43:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:20.315 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:20.315 11:43:48 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:20.315 11:43:48 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:20.315 11:43:48 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:20.315 11:43:48 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:20.315 11:43:48 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:20.315 11:43:48 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:20.315 11:43:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:20.315 11:43:48 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:20:20.315 11:43:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:20.315 11:43:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:20.315 11:43:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
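[Editor's note] The scan above buckets NICs by numeric PCI vendor:device ID (0x15b3:0x1015 is the pair of Mellanox ports on this rig) before choosing driver-specific checks. A rough standalone equivalent of that classification, assuming lspci is available; the harness itself reads a prebuilt pci_bus_cache rather than shelling out, so this is illustrative only:

    # Hypothetical re-creation of the "Found ..." classification lines above.
    mellanox=0x15b3 intel=0x8086
    while read -r addr _class ids _; do
        vendor=0x${ids%:*} device=0x${ids#*:}
        case "$vendor:$device" in
            "$mellanox":0x1015) echo "Found $addr ($vendor - $device)" ;;  # mlx5 NIC
            "$intel":0x1592|"$intel":0x159b) echo "Found $addr (e810)" ;;
            "$intel":0x37d2) echo "Found $addr (x722)" ;;
        esac
    done < <(lspci -Dn)   # -D: print full PCI domain, -n: numeric vendor/device IDs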
00:20:20.315 11:43:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:20.315 11:43:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:20.315 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:20.315 11:43:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:20.315 11:43:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:20.315 11:43:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:20.315 11:43:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:20.315 11:43:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:20.315 11:43:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:20.315 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:20.315 11:43:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:20.315 11:43:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:20.315 11:43:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:20.315 11:43:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:20.315 11:43:48 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:20:20.315 11:43:48 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:20:20.315 11:43:48 -- nvmf/common.sh@408 -- # rdma_device_init 00:20:20.315 11:43:48 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:20:20.315 11:43:48 -- nvmf/common.sh@57 -- # uname 00:20:20.315 11:43:48 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:20:20.315 11:43:48 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:20:20.315 11:43:48 -- nvmf/common.sh@62 -- # modprobe ib_core 00:20:20.315 11:43:49 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:20:20.315 11:43:49 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:20:20.315 11:43:49 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:20:20.315 11:43:49 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:20:20.315 11:43:49 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:20:20.315 11:43:49 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:20:20.315 11:43:49 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:20.315 11:43:49 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:20:20.315 11:43:49 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:20.315 11:43:49 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:20.315 11:43:49 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:20.315 11:43:49 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:20.315 11:43:49 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:20.315 11:43:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:20.315 11:43:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:20.315 11:43:49 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:20.315 11:43:49 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:20.315 11:43:49 -- nvmf/common.sh@104 -- # continue 2 00:20:20.315 11:43:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:20.315 11:43:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:20.315 11:43:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:20.315 11:43:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:20.315 11:43:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:20.315 11:43:49 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:20.315 11:43:49 -- nvmf/common.sh@104 -- # continue 2 00:20:20.315 11:43:49 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:20:20.315 11:43:49 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:20:20.315 11:43:49 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:20.315 11:43:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:20.315 11:43:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:20.315 11:43:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:20.315 11:43:49 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:20:20.315 11:43:49 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:20:20.315 11:43:49 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:20:20.315 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:20.315 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:20.315 altname enp217s0f0np0 00:20:20.315 altname ens818f0np0 00:20:20.315 inet 192.168.100.8/24 scope global mlx_0_0 00:20:20.315 valid_lft forever preferred_lft forever 00:20:20.315 11:43:49 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:20.315 11:43:49 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:20:20.315 11:43:49 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:20.315 11:43:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:20.315 11:43:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:20.315 11:43:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:20.315 11:43:49 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:20:20.315 11:43:49 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:20:20.315 11:43:49 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:20:20.315 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:20.315 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:20.315 altname enp217s0f1np1 00:20:20.315 altname ens818f1np1 00:20:20.315 inet 192.168.100.9/24 scope global mlx_0_1 00:20:20.315 valid_lft forever preferred_lft forever 00:20:20.315 11:43:49 -- nvmf/common.sh@410 -- # return 0 00:20:20.315 11:43:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:20.315 11:43:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:20.315 11:43:49 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:20:20.315 11:43:49 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:20:20.315 11:43:49 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:20:20.315 11:43:49 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:20.315 11:43:49 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:20.315 11:43:49 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:20.315 11:43:49 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:20.315 11:43:49 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:20.315 11:43:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:20.315 11:43:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:20.315 11:43:49 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:20.315 11:43:49 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:20.315 11:43:49 -- nvmf/common.sh@104 -- # continue 2 00:20:20.315 11:43:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:20.315 11:43:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:20.315 11:43:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:20.315 11:43:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:20.315 11:43:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:20.315 11:43:49 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:20.315 11:43:49 -- 
nvmf/common.sh@104 -- # continue 2 00:20:20.315 11:43:49 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:20.315 11:43:49 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:20:20.315 11:43:49 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:20.315 11:43:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:20.315 11:43:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:20.315 11:43:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:20.315 11:43:49 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:20.315 11:43:49 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:20:20.315 11:43:49 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:20.315 11:43:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:20.315 11:43:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:20.315 11:43:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:20.315 11:43:49 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:20:20.315 192.168.100.9' 00:20:20.315 11:43:49 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:20:20.315 192.168.100.9' 00:20:20.315 11:43:49 -- nvmf/common.sh@445 -- # head -n 1 00:20:20.315 11:43:49 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:20.315 11:43:49 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:20.315 192.168.100.9' 00:20:20.315 11:43:49 -- nvmf/common.sh@446 -- # tail -n +2 00:20:20.315 11:43:49 -- nvmf/common.sh@446 -- # head -n 1 00:20:20.315 11:43:49 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:20.315 11:43:49 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:20:20.315 11:43:49 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:20.315 11:43:49 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:20:20.315 11:43:49 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:20:20.315 11:43:49 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:20:20.315 11:43:49 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:20.315 11:43:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:20.315 11:43:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:20.315 11:43:49 -- common/autotest_common.sh@10 -- # set +x 00:20:20.315 11:43:49 -- nvmf/common.sh@469 -- # nvmfpid=2395430 00:20:20.315 11:43:49 -- nvmf/common.sh@470 -- # waitforlisten 2395430 00:20:20.315 11:43:49 -- common/autotest_common.sh@819 -- # '[' -z 2395430 ']' 00:20:20.315 11:43:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.315 11:43:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:20.315 11:43:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.315 11:43:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:20.315 11:43:49 -- common/autotest_common.sh@10 -- # set +x 00:20:20.315 11:43:49 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:20:20.315 [2024-07-21 11:43:49.257434] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:20:20.315 [2024-07-21 11:43:49.257484] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:20.315 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.316 [2024-07-21 11:43:49.342349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:20.316 [2024-07-21 11:43:49.380067] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:20.316 [2024-07-21 11:43:49.380175] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:20.316 [2024-07-21 11:43:49.380185] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:20.316 [2024-07-21 11:43:49.380194] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:20.316 [2024-07-21 11:43:49.380307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:20.316 [2024-07-21 11:43:49.380416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:20.316 [2024-07-21 11:43:49.380526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:20.316 [2024-07-21 11:43:49.380527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:20.880 11:43:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:20.880 11:43:50 -- common/autotest_common.sh@852 -- # return 0 00:20:20.880 11:43:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:20.880 11:43:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:20.880 11:43:50 -- common/autotest_common.sh@10 -- # set +x 00:20:20.880 11:43:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:20.880 11:43:50 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:20.880 11:43:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:20.880 11:43:50 -- common/autotest_common.sh@10 -- # set +x 00:20:20.880 [2024-07-21 11:43:50.126107] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13b3d90/0x13b8280) succeed. 00:20:20.880 [2024-07-21 11:43:50.136368] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x13b5380/0x13f9910) succeed. 
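[Editor's note] With both IB devices created, the bdevio target is assembled from a short RPC sequence: an RDMA transport, a 64 MiB Malloc bdev, and a subsystem exposing it on the first RDMA IP. Condensed verbatim from the rpc_cmd trace around this point (rpc.py talks to the default /var/tmp/spdk.sock):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB of 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420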
00:20:20.880 11:43:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:20.880 11:43:50 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:20.880 11:43:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:20.880 11:43:50 -- common/autotest_common.sh@10 -- # set +x 00:20:20.880 Malloc0 00:20:20.881 11:43:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:20.881 11:43:50 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:20.881 11:43:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:20.881 11:43:50 -- common/autotest_common.sh@10 -- # set +x 00:20:20.881 11:43:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:20.881 11:43:50 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:20.881 11:43:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:20.881 11:43:50 -- common/autotest_common.sh@10 -- # set +x 00:20:20.881 11:43:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:20.881 11:43:50 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:20.881 11:43:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:20.881 11:43:50 -- common/autotest_common.sh@10 -- # set +x 00:20:20.881 [2024-07-21 11:43:50.294821] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:20.881 11:43:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:20.881 11:43:50 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:20:20.881 11:43:50 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:20.881 11:43:50 -- nvmf/common.sh@520 -- # config=() 00:20:20.881 11:43:50 -- nvmf/common.sh@520 -- # local subsystem config 00:20:20.881 11:43:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:21.138 11:43:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:21.138 { 00:20:21.138 "params": { 00:20:21.138 "name": "Nvme$subsystem", 00:20:21.138 "trtype": "$TEST_TRANSPORT", 00:20:21.138 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:21.138 "adrfam": "ipv4", 00:20:21.138 "trsvcid": "$NVMF_PORT", 00:20:21.138 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:21.138 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:21.138 "hdgst": ${hdgst:-false}, 00:20:21.138 "ddgst": ${ddgst:-false} 00:20:21.138 }, 00:20:21.138 "method": "bdev_nvme_attach_controller" 00:20:21.138 } 00:20:21.138 EOF 00:20:21.138 )") 00:20:21.138 11:43:50 -- nvmf/common.sh@542 -- # cat 00:20:21.138 11:43:50 -- nvmf/common.sh@544 -- # jq . 00:20:21.138 11:43:50 -- nvmf/common.sh@545 -- # IFS=, 00:20:21.138 11:43:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:21.138 "params": { 00:20:21.138 "name": "Nvme1", 00:20:21.138 "trtype": "rdma", 00:20:21.138 "traddr": "192.168.100.8", 00:20:21.138 "adrfam": "ipv4", 00:20:21.138 "trsvcid": "4420", 00:20:21.138 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.138 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:21.138 "hdgst": false, 00:20:21.138 "ddgst": false 00:20:21.138 }, 00:20:21.138 "method": "bdev_nvme_attach_controller" 00:20:21.138 }' 00:20:21.138 [2024-07-21 11:43:50.339554] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:20:21.138 [2024-07-21 11:43:50.339606] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2395714 ] 00:20:21.138 EAL: No free 2048 kB hugepages reported on node 1 00:20:21.138 [2024-07-21 11:43:50.424936] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:21.138 [2024-07-21 11:43:50.463409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:21.138 [2024-07-21 11:43:50.463504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:21.138 [2024-07-21 11:43:50.463506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.395 [2024-07-21 11:43:50.634357] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:20:21.395 [2024-07-21 11:43:50.634387] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:21.395 I/O targets: 00:20:21.395 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:21.395 00:20:21.395 00:20:21.395 CUnit - A unit testing framework for C - Version 2.1-3 00:20:21.395 http://cunit.sourceforge.net/ 00:20:21.395 00:20:21.395 00:20:21.395 Suite: bdevio tests on: Nvme1n1 00:20:21.395 Test: blockdev write read block ...passed 00:20:21.395 Test: blockdev write zeroes read block ...passed 00:20:21.395 Test: blockdev write zeroes read no split ...passed 00:20:21.395 Test: blockdev write zeroes read split ...passed 00:20:21.395 Test: blockdev write zeroes read split partial ...passed 00:20:21.395 Test: blockdev reset ...[2024-07-21 11:43:50.664158] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.395 [2024-07-21 11:43:50.686941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:21.395 [2024-07-21 11:43:50.713456] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:21.395 passed 00:20:21.395 Test: blockdev write read 8 blocks ...passed 00:20:21.395 Test: blockdev write read size > 128k ...passed 00:20:21.395 Test: blockdev write read invalid size ...passed 00:20:21.395 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:21.395 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:21.395 Test: blockdev write read max offset ...passed 00:20:21.395 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:21.395 Test: blockdev writev readv 8 blocks ...passed 00:20:21.395 Test: blockdev writev readv 30 x 1block ...passed 00:20:21.395 Test: blockdev writev readv block ...passed 00:20:21.395 Test: blockdev writev readv size > 128k ...passed 00:20:21.395 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:21.395 Test: blockdev comparev and writev ...[2024-07-21 11:43:50.716352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:21.395 [2024-07-21 11:43:50.716380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.395 [2024-07-21 11:43:50.716393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:21.395 [2024-07-21 11:43:50.716403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.395 [2024-07-21 11:43:50.716580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:21.395 [2024-07-21 11:43:50.716592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:21.395 [2024-07-21 11:43:50.716603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:21.396 [2024-07-21 11:43:50.716612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:21.396 [2024-07-21 11:43:50.716770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:21.396 [2024-07-21 11:43:50.716781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:21.396 [2024-07-21 11:43:50.716792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:21.396 [2024-07-21 11:43:50.716802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:21.396 [2024-07-21 11:43:50.716965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:21.396 [2024-07-21 11:43:50.716976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:21.396 [2024-07-21 11:43:50.716986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:21.396 [2024-07-21 11:43:50.716995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:21.396 passed 00:20:21.396 Test: blockdev nvme passthru rw ...passed 00:20:21.396 Test: blockdev nvme passthru vendor specific ...[2024-07-21 11:43:50.717249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:21.396 [2024-07-21 11:43:50.717262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:21.396 [2024-07-21 11:43:50.717307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:21.396 [2024-07-21 11:43:50.717317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:21.396 [2024-07-21 11:43:50.717363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:21.396 [2024-07-21 11:43:50.717373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:21.396 [2024-07-21 11:43:50.717421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:21.396 [2024-07-21 11:43:50.717432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:21.396 passed 00:20:21.396 Test: blockdev nvme admin passthru ...passed 00:20:21.396 Test: blockdev copy ...passed 00:20:21.396 00:20:21.396 Run Summary: Type Total Ran Passed Failed Inactive 00:20:21.396 suites 1 1 n/a 0 0 00:20:21.396 tests 23 23 23 0 0 00:20:21.396 asserts 152 152 152 0 n/a 00:20:21.396 00:20:21.396 Elapsed time = 0.169 seconds 00:20:21.653 11:43:50 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:21.653 11:43:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:21.653 11:43:50 -- common/autotest_common.sh@10 -- # set +x 00:20:21.653 11:43:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:21.653 11:43:50 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:21.653 11:43:50 -- target/bdevio.sh@30 -- # nvmftestfini 00:20:21.653 11:43:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:21.653 11:43:50 -- nvmf/common.sh@116 -- # sync 00:20:21.653 11:43:50 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:20:21.653 11:43:50 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:20:21.653 11:43:50 -- nvmf/common.sh@119 -- # set +e 00:20:21.653 11:43:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:21.653 11:43:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:20:21.653 rmmod nvme_rdma 00:20:21.653 rmmod nvme_fabrics 00:20:21.653 11:43:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:21.653 11:43:50 -- nvmf/common.sh@123 -- # set -e 00:20:21.653 11:43:50 -- nvmf/common.sh@124 -- # return 0 00:20:21.653 11:43:50 -- nvmf/common.sh@477 -- # '[' -n 2395430 ']' 00:20:21.653 11:43:50 -- nvmf/common.sh@478 -- # killprocess 2395430 00:20:21.653 11:43:50 -- common/autotest_common.sh@926 -- # '[' -z 2395430 ']' 00:20:21.653 11:43:50 -- common/autotest_common.sh@930 -- # kill -0 2395430 00:20:21.653 11:43:50 -- common/autotest_common.sh@931 -- # uname 00:20:21.653 11:43:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:21.653 11:43:50 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2395430 00:20:21.653 11:43:51 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:20:21.653 11:43:51 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:20:21.653 11:43:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2395430' 00:20:21.653 killing process with pid 2395430 00:20:21.653 11:43:51 -- common/autotest_common.sh@945 -- # kill 2395430 00:20:21.653 11:43:51 -- common/autotest_common.sh@950 -- # wait 2395430 00:20:21.910 11:43:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:21.910 11:43:51 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:20:21.910 00:20:21.910 real 0m9.898s 00:20:21.910 user 0m10.578s 00:20:21.910 sys 0m6.581s 00:20:21.910 11:43:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:21.910 11:43:51 -- common/autotest_common.sh@10 -- # set +x 00:20:21.910 ************************************ 00:20:21.910 END TEST nvmf_bdevio 00:20:21.910 ************************************ 00:20:21.910 11:43:51 -- nvmf/nvmf.sh@57 -- # '[' rdma = tcp ']' 00:20:21.910 11:43:51 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:20:21.910 11:43:51 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:20:21.910 11:43:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:21.910 11:43:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:21.910 11:43:51 -- common/autotest_common.sh@10 -- # set +x 00:20:22.166 ************************************ 00:20:22.166 START TEST nvmf_fuzz 00:20:22.166 ************************************ 00:20:22.166 11:43:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:20:22.166 * Looking for test storage... 
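[Editor's note] Both the bdevio stage above and the fuzz stage that starts here resolve the RDMA target IPs the same way: enumerate the mlx interfaces, take the first IPv4 address on each, and strip the prefix length. A standalone version of the pipeline exactly as traced (the interface names are specific to this testbed):

    get_ip_address() {
        local interface=$1
        # "6: mlx_0_0  inet 192.168.100.8/24 scope global ..." -> 192.168.100.8
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 on this rig
    NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 on this rig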
00:20:22.166 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:22.166 11:43:51 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:22.166 11:43:51 -- nvmf/common.sh@7 -- # uname -s 00:20:22.166 11:43:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:22.166 11:43:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:22.166 11:43:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:22.166 11:43:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:22.166 11:43:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:22.166 11:43:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:22.166 11:43:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:22.166 11:43:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:22.166 11:43:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:22.166 11:43:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:22.166 11:43:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:22.166 11:43:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:22.166 11:43:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:22.166 11:43:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:22.166 11:43:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:22.166 11:43:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:22.166 11:43:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:22.166 11:43:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:22.166 11:43:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:22.166 11:43:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.166 11:43:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.166 11:43:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.166 11:43:51 -- paths/export.sh@5 -- # export PATH 00:20:22.167 11:43:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.167 11:43:51 -- nvmf/common.sh@46 -- # : 0 00:20:22.167 11:43:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:22.167 11:43:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:22.167 11:43:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:22.167 11:43:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:22.167 11:43:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:22.167 11:43:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:22.167 11:43:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:22.167 11:43:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:22.167 11:43:51 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:20:22.167 11:43:51 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:20:22.167 11:43:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:22.167 11:43:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:22.167 11:43:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:22.167 11:43:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:22.167 11:43:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.167 11:43:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:22.167 11:43:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.167 11:43:51 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:22.167 11:43:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:22.167 11:43:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:22.167 11:43:51 -- common/autotest_common.sh@10 -- # set +x 00:20:30.262 11:43:59 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:30.262 11:43:59 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:30.262 11:43:59 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:30.262 11:43:59 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:30.262 11:43:59 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:30.262 11:43:59 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:30.262 11:43:59 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:30.262 11:43:59 -- nvmf/common.sh@294 -- # net_devs=() 00:20:30.262 11:43:59 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:30.262 11:43:59 -- nvmf/common.sh@295 -- # e810=() 00:20:30.262 11:43:59 -- nvmf/common.sh@295 -- # local -ga e810 00:20:30.262 11:43:59 -- nvmf/common.sh@296 -- # x722=() 
00:20:30.262 11:43:59 -- nvmf/common.sh@296 -- # local -ga x722 00:20:30.262 11:43:59 -- nvmf/common.sh@297 -- # mlx=() 00:20:30.262 11:43:59 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:30.262 11:43:59 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:30.262 11:43:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:30.262 11:43:59 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:30.262 11:43:59 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:30.262 11:43:59 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:30.262 11:43:59 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:30.262 11:43:59 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:30.262 11:43:59 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:30.263 11:43:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:30.263 11:43:59 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:30.263 11:43:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:30.263 11:43:59 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:30.263 11:43:59 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:20:30.263 11:43:59 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:20:30.263 11:43:59 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:20:30.263 11:43:59 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:20:30.263 11:43:59 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:20:30.263 11:43:59 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:30.263 11:43:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:30.263 11:43:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:30.263 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:30.263 11:43:59 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:30.263 11:43:59 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:30.263 11:43:59 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:30.263 11:43:59 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:30.263 11:43:59 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:30.263 11:43:59 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:30.263 11:43:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:30.263 11:43:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:30.263 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:30.263 11:43:59 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:30.263 11:43:59 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:30.263 11:43:59 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:30.263 11:43:59 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:30.263 11:43:59 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:30.263 11:43:59 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:30.263 11:43:59 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:30.263 11:43:59 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:20:30.263 11:43:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:30.263 11:43:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:30.263 11:43:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:30.263 11:43:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:30.263 11:43:59 -- nvmf/common.sh@388 -- # 
echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:30.263 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:30.263 11:43:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:30.263 11:43:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:30.263 11:43:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:30.263 11:43:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:30.263 11:43:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:30.263 11:43:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:30.263 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:30.263 11:43:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:30.263 11:43:59 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:30.263 11:43:59 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:30.263 11:43:59 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:30.263 11:43:59 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:20:30.263 11:43:59 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:20:30.263 11:43:59 -- nvmf/common.sh@408 -- # rdma_device_init 00:20:30.263 11:43:59 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:20:30.263 11:43:59 -- nvmf/common.sh@57 -- # uname 00:20:30.263 11:43:59 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:20:30.263 11:43:59 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:20:30.263 11:43:59 -- nvmf/common.sh@62 -- # modprobe ib_core 00:20:30.263 11:43:59 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:20:30.263 11:43:59 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:20:30.263 11:43:59 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:20:30.263 11:43:59 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:20:30.263 11:43:59 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:20:30.263 11:43:59 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:20:30.263 11:43:59 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:30.263 11:43:59 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:20:30.263 11:43:59 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:30.263 11:43:59 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:30.263 11:43:59 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:30.263 11:43:59 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:30.263 11:43:59 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:30.263 11:43:59 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:30.263 11:43:59 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:30.263 11:43:59 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:30.263 11:43:59 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:30.263 11:43:59 -- nvmf/common.sh@104 -- # continue 2 00:20:30.263 11:43:59 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:30.263 11:43:59 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:30.263 11:43:59 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:30.263 11:43:59 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:30.263 11:43:59 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:30.263 11:43:59 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:30.263 11:43:59 -- nvmf/common.sh@104 -- # continue 2 00:20:30.263 11:43:59 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:30.263 11:43:59 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:20:30.263 11:43:59 -- nvmf/common.sh@111 -- # 
interface=mlx_0_0 00:20:30.263 11:43:59 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:30.263 11:43:59 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:30.263 11:43:59 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:30.263 11:43:59 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:20:30.263 11:43:59 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:20:30.263 11:43:59 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:20:30.263 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:30.263 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:30.263 altname enp217s0f0np0 00:20:30.263 altname ens818f0np0 00:20:30.263 inet 192.168.100.8/24 scope global mlx_0_0 00:20:30.263 valid_lft forever preferred_lft forever 00:20:30.263 11:43:59 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:30.263 11:43:59 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:20:30.263 11:43:59 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:30.263 11:43:59 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:30.263 11:43:59 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:30.263 11:43:59 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:30.263 11:43:59 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:20:30.263 11:43:59 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:20:30.263 11:43:59 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:20:30.263 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:30.263 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:30.263 altname enp217s0f1np1 00:20:30.263 altname ens818f1np1 00:20:30.263 inet 192.168.100.9/24 scope global mlx_0_1 00:20:30.263 valid_lft forever preferred_lft forever 00:20:30.263 11:43:59 -- nvmf/common.sh@410 -- # return 0 00:20:30.263 11:43:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:30.263 11:43:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:30.263 11:43:59 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:20:30.263 11:43:59 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:20:30.263 11:43:59 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:20:30.263 11:43:59 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:30.263 11:43:59 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:30.263 11:43:59 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:30.263 11:43:59 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:30.263 11:43:59 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:30.263 11:43:59 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:30.263 11:43:59 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:30.263 11:43:59 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:30.263 11:43:59 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:30.263 11:43:59 -- nvmf/common.sh@104 -- # continue 2 00:20:30.263 11:43:59 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:30.263 11:43:59 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:30.263 11:43:59 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:30.263 11:43:59 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:30.263 11:43:59 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:30.263 11:43:59 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:30.263 11:43:59 -- nvmf/common.sh@104 -- # continue 2 00:20:30.263 11:43:59 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:30.263 11:43:59 -- 
nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:20:30.263 11:43:59 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:30.263 11:43:59 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:30.263 11:43:59 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:30.263 11:43:59 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:30.263 11:43:59 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:30.263 11:43:59 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:20:30.263 11:43:59 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:30.263 11:43:59 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:30.263 11:43:59 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:30.263 11:43:59 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:30.263 11:43:59 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:20:30.263 192.168.100.9' 00:20:30.263 11:43:59 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:20:30.263 192.168.100.9' 00:20:30.263 11:43:59 -- nvmf/common.sh@445 -- # head -n 1 00:20:30.521 11:43:59 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:30.521 11:43:59 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:30.521 192.168.100.9' 00:20:30.521 11:43:59 -- nvmf/common.sh@446 -- # tail -n +2 00:20:30.521 11:43:59 -- nvmf/common.sh@446 -- # head -n 1 00:20:30.521 11:43:59 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:30.521 11:43:59 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:20:30.521 11:43:59 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:30.521 11:43:59 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:20:30.521 11:43:59 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:20:30.521 11:43:59 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:20:30.521 11:43:59 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=2399908 00:20:30.521 11:43:59 -- target/fabrics_fuzz.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:30.521 11:43:59 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:20:30.521 11:43:59 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 2399908 00:20:30.521 11:43:59 -- common/autotest_common.sh@819 -- # '[' -z 2399908 ']' 00:20:30.521 11:43:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:30.521 11:43:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:30.521 11:43:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:30.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
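[Editor's note] The fuzz stage below makes two passes against the listener just created: a 30-second run of randomized commands with a fixed seed, so any crash can be replayed, then a pass driven by the canned patterns in example.json. The flags are copied verbatim from the trace; only the comments are editorial:

    FUZZ=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz
    TRID='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420'

    # Pass 1: 30 s (-t 30) of randomized commands, seeded (-S) for reproducibility.
    $FUZZ -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F "$TRID" -N -a

    # Pass 2: driven by the command patterns in example.json (-j) instead of the timer.
    $FUZZ -m 0x2 -r /var/tmp/nvme_fuzz -F "$TRID" \
        -j /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a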
00:20:30.521 11:43:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:30.521 11:43:59 -- common/autotest_common.sh@10 -- # set +x 00:20:31.448 11:44:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:31.448 11:44:00 -- common/autotest_common.sh@852 -- # return 0 00:20:31.448 11:44:00 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:31.448 11:44:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.448 11:44:00 -- common/autotest_common.sh@10 -- # set +x 00:20:31.448 11:44:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.448 11:44:00 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:20:31.448 11:44:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.448 11:44:00 -- common/autotest_common.sh@10 -- # set +x 00:20:31.448 Malloc0 00:20:31.448 11:44:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.448 11:44:00 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:31.448 11:44:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.448 11:44:00 -- common/autotest_common.sh@10 -- # set +x 00:20:31.448 11:44:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.448 11:44:00 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:31.448 11:44:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.448 11:44:00 -- common/autotest_common.sh@10 -- # set +x 00:20:31.448 11:44:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.448 11:44:00 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:31.448 11:44:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.448 11:44:00 -- common/autotest_common.sh@10 -- # set +x 00:20:31.448 11:44:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.448 11:44:00 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' 00:20:31.448 11:44:00 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -N -a 00:21:03.487 Fuzzing completed. Shutting down the fuzz application 00:21:03.487 00:21:03.487 Dumping successful admin opcodes: 00:21:03.487 8, 9, 10, 24, 00:21:03.487 Dumping successful io opcodes: 00:21:03.487 0, 9, 00:21:03.487 NS: 0x200003af1f00 I/O qp, Total commands completed: 1101628, total successful commands: 6468, random_seed: 203987264 00:21:03.487 NS: 0x200003af1f00 admin qp, Total commands completed: 139152, total successful commands: 1127, random_seed: 4262064896 00:21:03.487 11:44:31 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -j /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:21:03.487 Fuzzing completed. 
Shutting down the fuzz application 00:21:03.487 00:21:03.487 Dumping successful admin opcodes: 00:21:03.487 24, 00:21:03.487 Dumping successful io opcodes: 00:21:03.487 00:21:03.487 NS: 0x200003af1f00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1509275666 00:21:03.487 NS: 0x200003af1f00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1509351808 00:21:03.487 11:44:32 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:03.487 11:44:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:03.487 11:44:32 -- common/autotest_common.sh@10 -- # set +x 00:21:03.487 11:44:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:03.487 11:44:32 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:21:03.487 11:44:32 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:21:03.487 11:44:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:03.487 11:44:32 -- nvmf/common.sh@116 -- # sync 00:21:03.487 11:44:32 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:21:03.487 11:44:32 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:21:03.487 11:44:32 -- nvmf/common.sh@119 -- # set +e 00:21:03.487 11:44:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:03.487 11:44:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:21:03.487 rmmod nvme_rdma 00:21:03.487 rmmod nvme_fabrics 00:21:03.487 11:44:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:03.487 11:44:32 -- nvmf/common.sh@123 -- # set -e 00:21:03.487 11:44:32 -- nvmf/common.sh@124 -- # return 0 00:21:03.487 11:44:32 -- nvmf/common.sh@477 -- # '[' -n 2399908 ']' 00:21:03.487 11:44:32 -- nvmf/common.sh@478 -- # killprocess 2399908 00:21:03.487 11:44:32 -- common/autotest_common.sh@926 -- # '[' -z 2399908 ']' 00:21:03.487 11:44:32 -- common/autotest_common.sh@930 -- # kill -0 2399908 00:21:03.487 11:44:32 -- common/autotest_common.sh@931 -- # uname 00:21:03.487 11:44:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:03.487 11:44:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2399908 00:21:03.487 11:44:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:03.487 11:44:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:03.487 11:44:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2399908' 00:21:03.487 killing process with pid 2399908 00:21:03.487 11:44:32 -- common/autotest_common.sh@945 -- # kill 2399908 00:21:03.487 11:44:32 -- common/autotest_common.sh@950 -- # wait 2399908 00:21:03.487 11:44:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:03.487 11:44:32 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:21:03.487 11:44:32 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:21:03.487 00:21:03.487 real 0m41.456s 00:21:03.487 user 0m52.214s 00:21:03.487 sys 0m21.137s 00:21:03.487 11:44:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:03.487 11:44:32 -- common/autotest_common.sh@10 -- # set +x 00:21:03.487 ************************************ 00:21:03.487 END TEST nvmf_fuzz 00:21:03.487 ************************************ 00:21:03.487 11:44:32 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:21:03.487 11:44:32 -- common/autotest_common.sh@1077 -- # '[' 3 
-le 1 ']' 00:21:03.487 11:44:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:03.487 11:44:32 -- common/autotest_common.sh@10 -- # set +x 00:21:03.487 ************************************ 00:21:03.487 START TEST nvmf_multiconnection 00:21:03.487 ************************************ 00:21:03.487 11:44:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:21:03.744 * Looking for test storage... 00:21:03.744 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:03.744 11:44:32 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:03.744 11:44:32 -- nvmf/common.sh@7 -- # uname -s 00:21:03.744 11:44:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:03.744 11:44:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:03.744 11:44:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:03.744 11:44:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:03.744 11:44:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:03.744 11:44:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:03.744 11:44:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:03.744 11:44:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:03.744 11:44:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:03.744 11:44:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:03.744 11:44:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:03.744 11:44:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:21:03.744 11:44:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:03.744 11:44:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:03.744 11:44:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:03.744 11:44:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:03.744 11:44:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:03.744 11:44:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:03.744 11:44:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:03.744 11:44:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.744 11:44:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.744 11:44:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.744 11:44:32 -- paths/export.sh@5 -- # export PATH 00:21:03.744 11:44:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.744 11:44:32 -- nvmf/common.sh@46 -- # : 0 00:21:03.744 11:44:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:03.744 11:44:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:03.744 11:44:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:03.744 11:44:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:03.744 11:44:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:03.744 11:44:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:03.744 11:44:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:03.744 11:44:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:03.744 11:44:32 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:03.744 11:44:32 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:03.744 11:44:32 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:21:03.744 11:44:32 -- target/multiconnection.sh@16 -- # nvmftestinit 00:21:03.744 11:44:32 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:21:03.744 11:44:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:03.744 11:44:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:03.744 11:44:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:03.744 11:44:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:03.744 11:44:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.744 11:44:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:03.744 11:44:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.744 11:44:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:03.744 11:44:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:03.744 11:44:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:03.744 11:44:32 -- common/autotest_common.sh@10 -- # set +x 00:21:11.862 11:44:41 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:11.862 11:44:41 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:11.862 11:44:41 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:11.862 11:44:41 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:11.862 11:44:41 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:11.862 11:44:41 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:11.862 11:44:41 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:11.862 11:44:41 -- nvmf/common.sh@294 -- # net_devs=() 
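Before any device discovery runs, sourcing nvmf/common.sh pins down the host identity used by every later "nvme connect". A hedged sketch of that setup; the commands at @17-20 are from the trace, while the NVME_HOSTID derivation is an assumption (the trace only shows both variables carrying the same UUID):

    # nvme gen-hostnqn derives an NQN from the machine UUID; the NVME_HOST
    # array is expanded into each connect call later in this test.
    NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:8013ee90-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # assumption: strip everything up to "uuid:"
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NVME_CONNECT='nvme connect'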
00:21:11.862 11:44:41 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:11.862 11:44:41 -- nvmf/common.sh@295 -- # e810=() 00:21:11.863 11:44:41 -- nvmf/common.sh@295 -- # local -ga e810 00:21:11.863 11:44:41 -- nvmf/common.sh@296 -- # x722=() 00:21:11.863 11:44:41 -- nvmf/common.sh@296 -- # local -ga x722 00:21:11.863 11:44:41 -- nvmf/common.sh@297 -- # mlx=() 00:21:11.863 11:44:41 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:11.863 11:44:41 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:11.863 11:44:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:11.863 11:44:41 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:11.863 11:44:41 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:11.863 11:44:41 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:11.863 11:44:41 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:11.863 11:44:41 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:11.863 11:44:41 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:11.863 11:44:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:11.863 11:44:41 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:11.863 11:44:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:11.863 11:44:41 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:11.863 11:44:41 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:21:11.863 11:44:41 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:21:11.863 11:44:41 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:21:11.863 11:44:41 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:21:11.863 11:44:41 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:21:11.863 11:44:41 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:11.863 11:44:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:11.863 11:44:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:21:11.863 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:21:11.863 11:44:41 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:21:11.863 11:44:41 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:21:11.863 11:44:41 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:11.863 11:44:41 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:11.863 11:44:41 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:21:11.863 11:44:41 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:21:11.863 11:44:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:11.863 11:44:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:21:11.863 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:21:11.863 11:44:41 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:21:11.863 11:44:41 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:21:11.863 11:44:41 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:11.863 11:44:41 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:11.863 11:44:41 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:21:11.863 11:44:41 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:21:11.863 11:44:41 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:11.863 11:44:41 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:21:11.863 11:44:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:11.863 11:44:41 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.863 11:44:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:11.863 11:44:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.863 11:44:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:21:11.863 Found net devices under 0000:d9:00.0: mlx_0_0 00:21:11.863 11:44:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.863 11:44:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:11.863 11:44:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.863 11:44:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:11.863 11:44:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.863 11:44:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:21:11.863 Found net devices under 0000:d9:00.1: mlx_0_1 00:21:11.863 11:44:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.863 11:44:41 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:11.863 11:44:41 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:11.863 11:44:41 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:11.863 11:44:41 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:21:11.863 11:44:41 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:21:11.863 11:44:41 -- nvmf/common.sh@408 -- # rdma_device_init 00:21:11.863 11:44:41 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:21:11.863 11:44:41 -- nvmf/common.sh@57 -- # uname 00:21:11.863 11:44:41 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:21:11.863 11:44:41 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:21:11.863 11:44:41 -- nvmf/common.sh@62 -- # modprobe ib_core 00:21:11.863 11:44:41 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:21:11.863 11:44:41 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:21:11.863 11:44:41 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:21:11.863 11:44:41 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:21:11.863 11:44:41 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:21:11.863 11:44:41 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:21:11.863 11:44:41 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:11.863 11:44:41 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:21:11.863 11:44:41 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:11.863 11:44:41 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:21:11.863 11:44:41 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:21:11.863 11:44:41 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:12.121 11:44:41 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:21:12.121 11:44:41 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:12.121 11:44:41 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:12.121 11:44:41 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:12.121 11:44:41 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:21:12.121 11:44:41 -- nvmf/common.sh@104 -- # continue 2 00:21:12.121 11:44:41 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:12.121 11:44:41 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:12.121 11:44:41 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:12.121 11:44:41 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:12.121 11:44:41 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:12.121 11:44:41 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:21:12.121 11:44:41 -- 
nvmf/common.sh@104 -- # continue 2 00:21:12.121 11:44:41 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:21:12.121 11:44:41 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:21:12.121 11:44:41 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:21:12.121 11:44:41 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:21:12.121 11:44:41 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:12.121 11:44:41 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:12.121 11:44:41 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:21:12.122 11:44:41 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:21:12.122 11:44:41 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:21:12.122 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:12.122 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:21:12.122 altname enp217s0f0np0 00:21:12.122 altname ens818f0np0 00:21:12.122 inet 192.168.100.8/24 scope global mlx_0_0 00:21:12.122 valid_lft forever preferred_lft forever 00:21:12.122 11:44:41 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:21:12.122 11:44:41 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:21:12.122 11:44:41 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:21:12.122 11:44:41 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:21:12.122 11:44:41 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:12.122 11:44:41 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:12.122 11:44:41 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:21:12.122 11:44:41 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:21:12.122 11:44:41 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:21:12.122 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:12.122 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:21:12.122 altname enp217s0f1np1 00:21:12.122 altname ens818f1np1 00:21:12.122 inet 192.168.100.9/24 scope global mlx_0_1 00:21:12.122 valid_lft forever preferred_lft forever 00:21:12.122 11:44:41 -- nvmf/common.sh@410 -- # return 0 00:21:12.122 11:44:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:12.122 11:44:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:12.122 11:44:41 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:21:12.122 11:44:41 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:21:12.122 11:44:41 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:21:12.122 11:44:41 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:12.122 11:44:41 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:21:12.122 11:44:41 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:21:12.122 11:44:41 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:12.122 11:44:41 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:21:12.122 11:44:41 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:12.122 11:44:41 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:12.122 11:44:41 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:12.122 11:44:41 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:21:12.122 11:44:41 -- nvmf/common.sh@104 -- # continue 2 00:21:12.122 11:44:41 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:12.122 11:44:41 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:12.122 11:44:41 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:12.122 11:44:41 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:12.122 11:44:41 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
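The backslash-heavy comparisons above, such as [[ mlx_0_1 == \m\l\x\_\0\_\1 ]], are just how bash xtrace prints a pattern match, not corruption. Reconstructed from the @100-104 records, the matching loop looks roughly like this; the sample array contents reflect this run:

    net_devs=(mlx_0_0 mlx_0_1)        # from the PCI discovery above
    rxe_net_devs=(mlx_0_0 mlx_0_1)    # from "rxe_cfg rxe-net" via mapfile
    for net_dev in "${net_devs[@]}"; do
        for rxe_net_dev in "${rxe_net_devs[@]}"; do
            if [[ "$net_dev" == "$rxe_net_dev" ]]; then
                echo "$net_dev"
                continue 2    # emit each device once, then move to the next outer item
            fi
        done
    done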
00:21:12.122 11:44:41 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:21:12.122 11:44:41 -- nvmf/common.sh@104 -- # continue 2 00:21:12.122 11:44:41 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:21:12.122 11:44:41 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:21:12.122 11:44:41 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:21:12.122 11:44:41 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:21:12.122 11:44:41 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:12.122 11:44:41 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:12.122 11:44:41 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:21:12.122 11:44:41 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:21:12.122 11:44:41 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:21:12.122 11:44:41 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:21:12.122 11:44:41 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:12.122 11:44:41 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:12.122 11:44:41 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:21:12.122 192.168.100.9' 00:21:12.122 11:44:41 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:21:12.122 192.168.100.9' 00:21:12.122 11:44:41 -- nvmf/common.sh@445 -- # head -n 1 00:21:12.122 11:44:41 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:12.122 11:44:41 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:21:12.122 192.168.100.9' 00:21:12.122 11:44:41 -- nvmf/common.sh@446 -- # tail -n +2 00:21:12.122 11:44:41 -- nvmf/common.sh@446 -- # head -n 1 00:21:12.122 11:44:41 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:12.122 11:44:41 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:21:12.122 11:44:41 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:12.122 11:44:41 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:21:12.122 11:44:41 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:21:12.122 11:44:41 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:21:12.122 11:44:41 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:21:12.122 11:44:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:12.122 11:44:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:12.122 11:44:41 -- common/autotest_common.sh@10 -- # set +x 00:21:12.122 11:44:41 -- nvmf/common.sh@469 -- # nvmfpid=2409481 00:21:12.122 11:44:41 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:12.122 11:44:41 -- nvmf/common.sh@470 -- # waitforlisten 2409481 00:21:12.122 11:44:41 -- common/autotest_common.sh@819 -- # '[' -z 2409481 ']' 00:21:12.122 11:44:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.122 11:44:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:12.122 11:44:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.122 11:44:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:12.122 11:44:41 -- common/autotest_common.sh@10 -- # set +x 00:21:12.122 [2024-07-21 11:44:41.519954] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
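The target for the multiconnection phase comes up with instance id 0, tracepoint mask 0xFFFF and core mask 0xF, which is why four reactor notices follow. A hedged launch sketch; the flags and workspace path are from the trace, but the polling loop only paraphrases waitforlisten, whose body is not shown beyond a bounded retry counter:

    rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Paraphrase of waitforlisten: poll the UNIX-domain RPC socket until the
    # app answers; the real helper caps this (max_retries=100 in the trace).
    until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done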
00:21:12.122 [2024-07-21 11:44:41.520006] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.380 EAL: No free 2048 kB hugepages reported on node 1 00:21:12.380 [2024-07-21 11:44:41.602151] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:12.380 [2024-07-21 11:44:41.641078] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:12.380 [2024-07-21 11:44:41.641195] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:12.380 [2024-07-21 11:44:41.641205] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:12.380 [2024-07-21 11:44:41.641214] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:12.380 [2024-07-21 11:44:41.641259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:12.380 [2024-07-21 11:44:41.641356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:12.380 [2024-07-21 11:44:41.641379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:12.380 [2024-07-21 11:44:41.641381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.984 11:44:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:12.984 11:44:42 -- common/autotest_common.sh@852 -- # return 0 00:21:12.984 11:44:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:12.984 11:44:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:12.984 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:12.984 11:44:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:12.984 11:44:42 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:12.985 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:12.985 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:12.985 [2024-07-21 11:44:42.395944] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f084b0/0x1f0c9a0) succeed. 00:21:13.243 [2024-07-21 11:44:42.406208] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f09aa0/0x1f4e030) succeed. 
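The two create_ib_device notices above are the target's reaction to the transport RPC issued at multiconnection.sh@19; once the RDMA transport exists, each mlx5 port is registered as an IB device. rpc_cmd appears to be the harness wrapper around scripts/rpc.py (an inference from its use here), so the direct equivalent would be roughly:

    # -t selects the transport, --num-shared-buffers sizes the shared receive
    # buffer pool, and -u sets the I/O unit size (8 KiB). Flags are copied
    # from the trace; the direct rpc.py spelling is an assumption.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
        nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192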
00:21:13.243 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.243 11:44:42 -- target/multiconnection.sh@21 -- # seq 1 11 00:21:13.243 11:44:42 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:13.243 11:44:42 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:13.243 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.243 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:13.243 Malloc1 00:21:13.243 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.243 11:44:42 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:21:13.243 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.243 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:13.243 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.243 11:44:42 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:13.243 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.243 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:13.243 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.243 11:44:42 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:13.243 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.243 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:13.243 [2024-07-21 11:44:42.581222] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:13.243 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.243 11:44:42 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:13.243 11:44:42 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:21:13.243 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.243 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:13.243 Malloc2 00:21:13.243 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.243 11:44:42 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:21:13.243 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.243 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:13.243 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.243 11:44:42 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:21:13.243 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.243 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:13.243 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.243 11:44:42 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:21:13.243 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.243 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:13.243 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.243 11:44:42 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:13.243 11:44:42 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:21:13.243 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.243 11:44:42 -- 
common/autotest_common.sh@10 -- # set +x 00:21:13.243 Malloc3 00:21:13.243 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.243 11:44:42 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:21:13.243 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.243 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:13.243 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.243 11:44:42 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:21:13.243 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.243 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:13.501 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.501 11:44:42 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:21:13.501 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.501 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:13.501 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.501 11:44:42 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:13.502 11:44:42 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:21:13.502 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.502 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:13.502 Malloc4 00:21:13.502 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.502 11:44:42 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:21:13.502 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.502 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:13.502 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.502 11:44:42 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:21:13.502 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.502 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:13.502 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.502 11:44:42 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:21:13.502 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.502 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:13.502 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.502 11:44:42 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:13.502 11:44:42 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:21:13.502 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.502 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:13.502 Malloc5 00:21:13.502 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.502 11:44:42 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:21:13.502 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.502 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:13.502 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.502 11:44:42 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 
Malloc5 00:21:13.502 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.502 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:13.502 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.502 11:44:42 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:21:13.502 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.502 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:13.502 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.502 11:44:42 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:13.502 11:44:42 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:21:13.502 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.502 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:13.502 Malloc6 00:21:13.502 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.502 11:44:42 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:21:13.502 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.502 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:13.502 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.502 11:44:42 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:21:13.502 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.502 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:13.502 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.502 11:44:42 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t rdma -a 192.168.100.8 -s 4420 00:21:13.502 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.502 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:13.502 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.502 11:44:42 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:13.502 11:44:42 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:21:13.502 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.502 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:13.502 Malloc7 00:21:13.502 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.502 11:44:42 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:21:13.502 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.502 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:13.502 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.502 11:44:42 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:21:13.502 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.502 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:13.502 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.502 11:44:42 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t rdma -a 192.168.100.8 -s 4420 00:21:13.502 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.502 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:13.502 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
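The Malloc1 through Malloc11 blocks running through this stretch are iterations of a single provisioning loop. Condensed, with the sizes and subsystem count taken from the variables traced earlier (multiconnection.sh@11-14):

    MALLOC_BDEV_SIZE=64      # MiB
    MALLOC_BLOCK_SIZE=512    # bytes
    NVMF_SUBSYS=11
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        # one RAM-backed bdev, one subsystem, one namespace, one RDMA listener
        rpc_cmd bdev_malloc_create "$MALLOC_BDEV_SIZE" "$MALLOC_BLOCK_SIZE" -b "Malloc$i"
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t rdma -a 192.168.100.8 -s 4420
    done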
00:21:13.502 11:44:42 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:13.502 11:44:42 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:21:13.502 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.502 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:13.502 Malloc8 00:21:13.502 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.502 11:44:42 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:21:13.502 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.502 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:13.502 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.502 11:44:42 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:21:13.502 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.502 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:13.502 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.502 11:44:42 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t rdma -a 192.168.100.8 -s 4420 00:21:13.502 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.502 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:13.502 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.502 11:44:42 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:13.502 11:44:42 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:21:13.502 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.502 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:13.759 Malloc9 00:21:13.759 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.759 11:44:42 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:21:13.759 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.759 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:13.759 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.759 11:44:42 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:21:13.759 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.759 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:13.759 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.759 11:44:42 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t rdma -a 192.168.100.8 -s 4420 00:21:13.759 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.759 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:13.759 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.759 11:44:42 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:13.759 11:44:42 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:21:13.759 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.759 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:13.759 Malloc10 00:21:13.759 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.759 11:44:42 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:21:13.759 11:44:42 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.759 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:13.759 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.759 11:44:42 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:21:13.759 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.759 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:13.759 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.759 11:44:42 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t rdma -a 192.168.100.8 -s 4420 00:21:13.759 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.759 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:13.759 11:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.759 11:44:42 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:13.759 11:44:42 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:21:13.759 11:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.759 11:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:13.759 Malloc11 00:21:13.759 11:44:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.759 11:44:43 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:21:13.759 11:44:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.759 11:44:43 -- common/autotest_common.sh@10 -- # set +x 00:21:13.759 11:44:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.759 11:44:43 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:21:13.759 11:44:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.759 11:44:43 -- common/autotest_common.sh@10 -- # set +x 00:21:13.759 11:44:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.759 11:44:43 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t rdma -a 192.168.100.8 -s 4420 00:21:13.759 11:44:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.759 11:44:43 -- common/autotest_common.sh@10 -- # set +x 00:21:13.759 11:44:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.759 11:44:43 -- target/multiconnection.sh@28 -- # seq 1 11 00:21:13.759 11:44:43 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:13.759 11:44:43 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:21:14.687 11:44:44 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:21:14.687 11:44:44 -- common/autotest_common.sh@1177 -- # local i=0 00:21:14.687 11:44:44 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:14.687 11:44:44 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:14.687 11:44:44 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:17.204 11:44:46 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:17.204 11:44:46 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:17.204 11:44:46 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:21:17.204 11:44:46 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:17.204 11:44:46 -- 
common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:17.204 11:44:46 -- common/autotest_common.sh@1187 -- # return 0 00:21:17.204 11:44:46 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:17.204 11:44:46 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:21:17.766 11:44:47 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:21:17.766 11:44:47 -- common/autotest_common.sh@1177 -- # local i=0 00:21:17.766 11:44:47 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:17.766 11:44:47 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:17.766 11:44:47 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:19.655 11:44:49 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:19.655 11:44:49 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:19.655 11:44:49 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:21:19.655 11:44:49 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:19.655 11:44:49 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:19.655 11:44:49 -- common/autotest_common.sh@1187 -- # return 0 00:21:19.655 11:44:49 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:19.655 11:44:49 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:21:21.023 11:44:50 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:21:21.023 11:44:50 -- common/autotest_common.sh@1177 -- # local i=0 00:21:21.023 11:44:50 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:21.023 11:44:50 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:21.023 11:44:50 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:22.917 11:44:52 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:22.917 11:44:52 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:22.917 11:44:52 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:21:22.917 11:44:52 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:22.917 11:44:52 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:22.917 11:44:52 -- common/autotest_common.sh@1187 -- # return 0 00:21:22.917 11:44:52 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:22.917 11:44:52 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:21:23.848 11:44:53 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:21:23.848 11:44:53 -- common/autotest_common.sh@1177 -- # local i=0 00:21:23.848 11:44:53 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:23.848 11:44:53 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:23.848 11:44:53 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:25.740 11:44:55 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:25.740 11:44:55 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:25.740 
11:44:55 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:21:25.740 11:44:55 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:25.740 11:44:55 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:25.740 11:44:55 -- common/autotest_common.sh@1187 -- # return 0 00:21:25.740 11:44:55 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:25.740 11:44:55 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:21:26.668 11:44:56 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:21:26.668 11:44:56 -- common/autotest_common.sh@1177 -- # local i=0 00:21:26.668 11:44:56 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:26.668 11:44:56 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:26.668 11:44:56 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:29.183 11:44:58 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:29.183 11:44:58 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:29.183 11:44:58 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:21:29.183 11:44:58 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:29.183 11:44:58 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:29.183 11:44:58 -- common/autotest_common.sh@1187 -- # return 0 00:21:29.183 11:44:58 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:29.183 11:44:58 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode6 -a 192.168.100.8 -s 4420 00:21:29.753 11:44:59 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:21:29.753 11:44:59 -- common/autotest_common.sh@1177 -- # local i=0 00:21:29.753 11:44:59 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:29.753 11:44:59 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:29.753 11:44:59 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:31.680 11:45:01 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:31.680 11:45:01 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:31.680 11:45:01 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:21:31.680 11:45:01 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:31.680 11:45:01 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:31.680 11:45:01 -- common/autotest_common.sh@1187 -- # return 0 00:21:31.680 11:45:01 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:31.680 11:45:01 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode7 -a 192.168.100.8 -s 4420 00:21:32.609 11:45:02 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:21:32.609 11:45:02 -- common/autotest_common.sh@1177 -- # local i=0 00:21:32.609 11:45:02 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:32.609 11:45:02 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:32.609 11:45:02 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:35.124 
11:45:04 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:35.124 11:45:04 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:35.124 11:45:04 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:21:35.124 11:45:04 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:35.124 11:45:04 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:35.124 11:45:04 -- common/autotest_common.sh@1187 -- # return 0 00:21:35.124 11:45:04 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:35.124 11:45:04 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode8 -a 192.168.100.8 -s 4420 00:21:35.686 11:45:05 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:21:35.686 11:45:05 -- common/autotest_common.sh@1177 -- # local i=0 00:21:35.686 11:45:05 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:35.686 11:45:05 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:35.686 11:45:05 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:38.204 11:45:07 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:38.204 11:45:07 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:38.204 11:45:07 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:21:38.204 11:45:07 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:38.204 11:45:07 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:38.204 11:45:07 -- common/autotest_common.sh@1187 -- # return 0 00:21:38.204 11:45:07 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:38.204 11:45:07 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode9 -a 192.168.100.8 -s 4420 00:21:38.767 11:45:08 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:21:38.767 11:45:08 -- common/autotest_common.sh@1177 -- # local i=0 00:21:38.767 11:45:08 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:38.767 11:45:08 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:38.767 11:45:08 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:40.663 11:45:10 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:40.663 11:45:10 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:40.663 11:45:10 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:21:40.921 11:45:10 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:40.921 11:45:10 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:40.921 11:45:10 -- common/autotest_common.sh@1187 -- # return 0 00:21:40.921 11:45:10 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:40.921 11:45:10 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode10 -a 192.168.100.8 -s 4420 00:21:41.853 11:45:11 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:21:41.854 11:45:11 -- common/autotest_common.sh@1177 -- # local i=0 00:21:41.854 11:45:11 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 
nvme_devices=0 00:21:41.854 11:45:11 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:41.854 11:45:11 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:43.746 11:45:13 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:43.746 11:45:13 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:43.746 11:45:13 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:21:43.746 11:45:13 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:43.746 11:45:13 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:43.746 11:45:13 -- common/autotest_common.sh@1187 -- # return 0 00:21:43.746 11:45:13 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:43.746 11:45:13 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode11 -a 192.168.100.8 -s 4420 00:21:44.671 11:45:14 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:21:44.671 11:45:14 -- common/autotest_common.sh@1177 -- # local i=0 00:21:44.671 11:45:14 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:44.671 11:45:14 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:44.671 11:45:14 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:47.188 11:45:16 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:47.188 11:45:16 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:47.188 11:45:16 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:21:47.188 11:45:16 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:47.188 11:45:16 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:47.188 11:45:16 -- common/autotest_common.sh@1187 -- # return 0 00:21:47.188 11:45:16 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:21:47.188 [global] 00:21:47.188 thread=1 00:21:47.188 invalidate=1 00:21:47.188 rw=read 00:21:47.188 time_based=1 00:21:47.188 runtime=10 00:21:47.188 ioengine=libaio 00:21:47.188 direct=1 00:21:47.188 bs=262144 00:21:47.188 iodepth=64 00:21:47.188 norandommap=1 00:21:47.188 numjobs=1 00:21:47.188 00:21:47.188 [job0] 00:21:47.188 filename=/dev/nvme0n1 00:21:47.188 [job1] 00:21:47.188 filename=/dev/nvme10n1 00:21:47.188 [job2] 00:21:47.188 filename=/dev/nvme1n1 00:21:47.188 [job3] 00:21:47.188 filename=/dev/nvme2n1 00:21:47.188 [job4] 00:21:47.188 filename=/dev/nvme3n1 00:21:47.188 [job5] 00:21:47.188 filename=/dev/nvme4n1 00:21:47.188 [job6] 00:21:47.188 filename=/dev/nvme5n1 00:21:47.188 [job7] 00:21:47.188 filename=/dev/nvme6n1 00:21:47.188 [job8] 00:21:47.188 filename=/dev/nvme7n1 00:21:47.188 [job9] 00:21:47.188 filename=/dev/nvme8n1 00:21:47.188 [job10] 00:21:47.188 filename=/dev/nvme9n1 00:21:47.188 Could not set queue depth (nvme0n1) 00:21:47.188 Could not set queue depth (nvme10n1) 00:21:47.188 Could not set queue depth (nvme1n1) 00:21:47.188 Could not set queue depth (nvme2n1) 00:21:47.188 Could not set queue depth (nvme3n1) 00:21:47.188 Could not set queue depth (nvme4n1) 00:21:47.188 Could not set queue depth (nvme5n1) 00:21:47.188 Could not set queue depth (nvme6n1) 00:21:47.188 Could not set queue depth (nvme7n1) 00:21:47.188 Could not set queue depth (nvme8n1) 00:21:47.188 Could not set queue depth (nvme9n1) 00:21:47.446 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, 
(W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:47.446 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:47.446 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:47.446 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:47.446 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:47.446 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:47.446 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:47.446 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:47.446 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:47.446 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:47.446 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:47.446 fio-3.35 00:21:47.446 Starting 11 threads 00:21:59.718 00:21:59.718 job0: (groupid=0, jobs=1): err= 0: pid=2416342: Sun Jul 21 11:45:27 2024 00:21:59.718 read: IOPS=1402, BW=351MiB/s (368MB/s)(3515MiB/10027msec) 00:21:59.718 slat (usec): min=13, max=18957, avg=707.91, stdev=1863.49 00:21:59.718 clat (usec): min=7511, max=75425, avg=44888.63, stdev=9916.54 00:21:59.718 lat (usec): min=7753, max=77716, avg=45596.55, stdev=10185.74 00:21:59.718 clat percentiles (usec): 00:21:59.718 | 1.00th=[28443], 5.00th=[29492], 10.00th=[30278], 20.00th=[31851], 00:21:59.718 | 30.00th=[44303], 40.00th=[45351], 50.00th=[45876], 60.00th=[46924], 00:21:59.718 | 70.00th=[48497], 80.00th=[55313], 90.00th=[56886], 95.00th=[58459], 00:21:59.718 | 99.00th=[63177], 99.50th=[63701], 99.90th=[67634], 99.95th=[69731], 00:21:59.718 | 99.99th=[74974] 00:21:59.718 bw ( KiB/s): min=280064, max=512000, per=9.12%, avg=358348.80, stdev=68820.82, samples=20 00:21:59.718 iops : min= 1094, max= 2000, avg=1399.80, stdev=268.83, samples=20 00:21:59.718 lat (msec) : 10=0.18%, 20=0.38%, 50=72.82%, 100=26.63% 00:21:59.718 cpu : usr=0.40%, sys=5.90%, ctx=2623, majf=0, minf=3222 00:21:59.718 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:21:59.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:59.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:59.718 issued rwts: total=14061,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:59.718 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:59.718 job1: (groupid=0, jobs=1): err= 0: pid=2416343: Sun Jul 21 11:45:27 2024 00:21:59.718 read: IOPS=1721, BW=430MiB/s (451MB/s)(4316MiB/10028msec) 00:21:59.718 slat (usec): min=13, max=17576, avg=575.89, stdev=1441.52 00:21:59.718 clat (usec): min=11043, max=63368, avg=36565.60, stdev=8819.73 00:21:59.718 lat (usec): min=11288, max=69155, avg=37141.49, stdev=9018.48 00:21:59.718 clat percentiles (usec): 00:21:59.718 | 1.00th=[26608], 5.00th=[27395], 10.00th=[27919], 20.00th=[28967], 00:21:59.718 | 30.00th=[30016], 40.00th=[30802], 50.00th=[32113], 60.00th=[33162], 00:21:59.718 | 70.00th=[45876], 80.00th=[46924], 90.00th=[48497], 95.00th=[50070], 00:21:59.718 | 
99.00th=[53740], 99.50th=[55313], 99.90th=[60556], 99.95th=[62653], 00:21:59.718 | 99.99th=[63177] 00:21:59.718 bw ( KiB/s): min=327680, max=567296, per=11.21%, avg=440327.80, stdev=100447.92, samples=20 00:21:59.718 iops : min= 1280, max= 2216, avg=1720.00, stdev=392.41, samples=20 00:21:59.718 lat (msec) : 20=0.23%, 50=94.23%, 100=5.54% 00:21:59.718 cpu : usr=0.55%, sys=7.10%, ctx=3204, majf=0, minf=4097 00:21:59.718 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:21:59.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:59.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:59.718 issued rwts: total=17262,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:59.718 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:59.718 job2: (groupid=0, jobs=1): err= 0: pid=2416345: Sun Jul 21 11:45:27 2024 00:21:59.718 read: IOPS=896, BW=224MiB/s (235MB/s)(2256MiB/10061msec) 00:21:59.718 slat (usec): min=17, max=21820, avg=1103.79, stdev=2640.69 00:21:59.718 clat (msec): min=13, max=150, avg=70.19, stdev=12.32 00:21:59.718 lat (msec): min=13, max=150, avg=71.29, stdev=12.70 00:21:59.718 clat percentiles (msec): 00:21:59.718 | 1.00th=[ 46], 5.00th=[ 53], 10.00th=[ 62], 20.00th=[ 63], 00:21:59.718 | 30.00th=[ 64], 40.00th=[ 65], 50.00th=[ 69], 60.00th=[ 71], 00:21:59.718 | 70.00th=[ 73], 80.00th=[ 79], 90.00th=[ 93], 95.00th=[ 95], 00:21:59.718 | 99.00th=[ 101], 99.50th=[ 107], 99.90th=[ 127], 99.95th=[ 131], 00:21:59.718 | 99.99th=[ 150] 00:21:59.718 bw ( KiB/s): min=167424, max=272384, per=5.84%, avg=229373.90, stdev=31607.24, samples=20 00:21:59.718 iops : min= 654, max= 1064, avg=895.95, stdev=123.46, samples=20 00:21:59.718 lat (msec) : 20=0.21%, 50=4.52%, 100=94.04%, 250=1.23% 00:21:59.718 cpu : usr=0.47%, sys=4.65%, ctx=1824, majf=0, minf=4097 00:21:59.718 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:21:59.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:59.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:59.718 issued rwts: total=9022,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:59.718 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:59.718 job3: (groupid=0, jobs=1): err= 0: pid=2416347: Sun Jul 21 11:45:27 2024 00:21:59.718 read: IOPS=1559, BW=390MiB/s (409MB/s)(3910MiB/10027msec) 00:21:59.718 slat (usec): min=12, max=32316, avg=625.50, stdev=1789.35 00:21:59.718 clat (usec): min=12921, max=93550, avg=40370.76, stdev=15454.26 00:21:59.718 lat (usec): min=13152, max=93593, avg=40996.26, stdev=15757.27 00:21:59.718 clat percentiles (usec): 00:21:59.718 | 1.00th=[26608], 5.00th=[27395], 10.00th=[27919], 20.00th=[28967], 00:21:59.718 | 30.00th=[29754], 40.00th=[30540], 50.00th=[31589], 60.00th=[32637], 00:21:59.718 | 70.00th=[48497], 80.00th=[62129], 90.00th=[63701], 95.00th=[65274], 00:21:59.719 | 99.00th=[71828], 99.50th=[77071], 99.90th=[83362], 99.95th=[86508], 00:21:59.719 | 99.99th=[93848] 00:21:59.719 bw ( KiB/s): min=238557, max=566784, per=10.15%, avg=398743.85, stdev=140452.54, samples=20 00:21:59.719 iops : min= 931, max= 2214, avg=1557.55, stdev=548.69, samples=20 00:21:59.719 lat (msec) : 20=0.34%, 50=70.01%, 100=29.65% 00:21:59.719 cpu : usr=0.49%, sys=6.35%, ctx=3047, majf=0, minf=4097 00:21:59.719 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:21:59.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:59.719 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:59.719 issued rwts: total=15638,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:59.719 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:59.719 job4: (groupid=0, jobs=1): err= 0: pid=2416348: Sun Jul 21 11:45:27 2024 00:21:59.719 read: IOPS=1175, BW=294MiB/s (308MB/s)(2957MiB/10061msec) 00:21:59.719 slat (usec): min=13, max=44362, avg=827.13, stdev=2429.70 00:21:59.719 clat (msec): min=11, max=151, avg=53.56, stdev=14.92 00:21:59.719 lat (msec): min=11, max=151, avg=54.39, stdev=15.27 00:21:59.719 clat percentiles (msec): 00:21:59.719 | 1.00th=[ 31], 5.00th=[ 42], 10.00th=[ 46], 20.00th=[ 47], 00:21:59.719 | 30.00th=[ 47], 40.00th=[ 48], 50.00th=[ 49], 60.00th=[ 51], 00:21:59.719 | 70.00th=[ 56], 80.00th=[ 57], 90.00th=[ 62], 95.00th=[ 94], 00:21:59.719 | 99.00th=[ 99], 99.50th=[ 104], 99.90th=[ 144], 99.95th=[ 148], 00:21:59.719 | 99.99th=[ 153] 00:21:59.719 bw ( KiB/s): min=165376, max=372224, per=7.67%, avg=301158.40, stdev=54072.99, samples=20 00:21:59.719 iops : min= 646, max= 1454, avg=1176.40, stdev=211.22, samples=20 00:21:59.719 lat (msec) : 20=0.35%, 50=57.50%, 100=41.52%, 250=0.63% 00:21:59.719 cpu : usr=0.47%, sys=5.62%, ctx=2400, majf=0, minf=4097 00:21:59.719 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:21:59.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:59.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:59.719 issued rwts: total=11827,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:59.719 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:59.719 job5: (groupid=0, jobs=1): err= 0: pid=2416357: Sun Jul 21 11:45:27 2024 00:21:59.719 read: IOPS=1053, BW=263MiB/s (276MB/s)(2651MiB/10063msec) 00:21:59.719 slat (usec): min=14, max=34030, avg=934.36, stdev=2808.15 00:21:59.719 clat (msec): min=8, max=152, avg=59.73, stdev=18.32 00:21:59.719 lat (msec): min=9, max=152, avg=60.67, stdev=18.76 00:21:59.719 clat percentiles (msec): 00:21:59.719 | 1.00th=[ 31], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 46], 00:21:59.719 | 30.00th=[ 47], 40.00th=[ 47], 50.00th=[ 49], 60.00th=[ 65], 00:21:59.719 | 70.00th=[ 71], 80.00th=[ 75], 90.00th=[ 89], 95.00th=[ 94], 00:21:59.719 | 99.00th=[ 101], 99.50th=[ 110], 99.90th=[ 144], 99.95th=[ 150], 00:21:59.719 | 99.99th=[ 153] 00:21:59.719 bw ( KiB/s): min=161792, max=384512, per=6.87%, avg=269824.00, stdev=75795.12, samples=20 00:21:59.719 iops : min= 632, max= 1502, avg=1054.00, stdev=296.07, samples=20 00:21:59.719 lat (msec) : 10=0.08%, 20=0.36%, 50=53.15%, 100=45.42%, 250=0.99% 00:21:59.719 cpu : usr=0.40%, sys=4.79%, ctx=2091, majf=0, minf=4097 00:21:59.719 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:21:59.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:59.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:59.719 issued rwts: total=10603,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:59.719 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:59.719 job6: (groupid=0, jobs=1): err= 0: pid=2416364: Sun Jul 21 11:45:27 2024 00:21:59.719 read: IOPS=3021, BW=755MiB/s (792MB/s)(7564MiB/10014msec) 00:21:59.719 slat (usec): min=12, max=20550, avg=325.39, stdev=1032.29 00:21:59.719 clat (usec): min=1388, max=76777, avg=20837.61, stdev=13188.71 00:21:59.719 lat (usec): min=1437, max=76823, avg=21162.99, stdev=13415.10 00:21:59.719 clat percentiles (usec): 00:21:59.719 | 
1.00th=[12387], 5.00th=[13435], 10.00th=[13829], 20.00th=[14353], 00:21:59.719 | 30.00th=[14746], 40.00th=[15008], 50.00th=[15401], 60.00th=[15664], 00:21:59.719 | 70.00th=[16057], 80.00th=[19006], 90.00th=[46924], 95.00th=[56361], 00:21:59.719 | 99.00th=[58459], 99.50th=[60031], 99.90th=[70779], 99.95th=[71828], 00:21:59.719 | 99.99th=[74974] 00:21:59.719 bw ( KiB/s): min=283136, max=1091072, per=19.67%, avg=772940.80, stdev=355484.15, samples=20 00:21:59.719 iops : min= 1106, max= 4262, avg=3019.30, stdev=1388.61, samples=20 00:21:59.719 lat (msec) : 2=0.02%, 4=0.11%, 10=0.53%, 20=79.49%, 50=10.16% 00:21:59.719 lat (msec) : 100=9.68% 00:21:59.719 cpu : usr=0.66%, sys=8.52%, ctx=5657, majf=0, minf=4097 00:21:59.719 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:59.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:59.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:59.719 issued rwts: total=30256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:59.719 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:59.719 job7: (groupid=0, jobs=1): err= 0: pid=2416369: Sun Jul 21 11:45:27 2024 00:21:59.719 read: IOPS=898, BW=225MiB/s (235MB/s)(2259MiB/10059msec) 00:21:59.719 slat (usec): min=12, max=34147, avg=1098.05, stdev=3133.01 00:21:59.719 clat (msec): min=13, max=154, avg=70.08, stdev=12.69 00:21:59.719 lat (msec): min=13, max=154, avg=71.18, stdev=13.18 00:21:59.719 clat percentiles (msec): 00:21:59.719 | 1.00th=[ 46], 5.00th=[ 51], 10.00th=[ 62], 20.00th=[ 63], 00:21:59.719 | 30.00th=[ 63], 40.00th=[ 65], 50.00th=[ 69], 60.00th=[ 71], 00:21:59.719 | 70.00th=[ 73], 80.00th=[ 79], 90.00th=[ 93], 95.00th=[ 95], 00:21:59.719 | 99.00th=[ 101], 99.50th=[ 107], 99.90th=[ 146], 99.95th=[ 148], 00:21:59.719 | 99.99th=[ 155] 00:21:59.719 bw ( KiB/s): min=163328, max=271872, per=5.85%, avg=229732.50, stdev=31952.39, samples=20 00:21:59.719 iops : min= 638, max= 1062, avg=897.35, stdev=124.80, samples=20 00:21:59.719 lat (msec) : 20=0.38%, 50=4.57%, 100=94.02%, 250=1.03% 00:21:59.719 cpu : usr=0.31%, sys=4.24%, ctx=1767, majf=0, minf=4097 00:21:59.719 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:21:59.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:59.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:59.719 issued rwts: total=9036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:59.719 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:59.719 job8: (groupid=0, jobs=1): err= 0: pid=2416385: Sun Jul 21 11:45:27 2024 00:21:59.719 read: IOPS=1039, BW=260MiB/s (273MB/s)(2616MiB/10061msec) 00:21:59.719 slat (usec): min=13, max=28472, avg=935.73, stdev=2725.22 00:21:59.719 clat (msec): min=12, max=141, avg=60.54, stdev=17.67 00:21:59.719 lat (msec): min=12, max=141, avg=61.48, stdev=18.09 00:21:59.719 clat percentiles (msec): 00:21:59.719 | 1.00th=[ 43], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 46], 00:21:59.719 | 30.00th=[ 47], 40.00th=[ 48], 50.00th=[ 50], 60.00th=[ 70], 00:21:59.719 | 70.00th=[ 72], 80.00th=[ 77], 90.00th=[ 91], 95.00th=[ 94], 00:21:59.719 | 99.00th=[ 101], 99.50th=[ 113], 99.90th=[ 138], 99.95th=[ 142], 00:21:59.719 | 99.99th=[ 142] 00:21:59.719 bw ( KiB/s): min=165888, max=350208, per=6.78%, avg=266271.60, stdev=70599.20, samples=20 00:21:59.719 iops : min= 648, max= 1368, avg=1040.10, stdev=275.76, samples=20 00:21:59.719 lat (msec) : 20=0.20%, 50=51.10%, 100=47.62%, 250=1.07% 
00:21:59.719 cpu : usr=0.44%, sys=4.85%, ctx=2188, majf=0, minf=4097 00:21:59.719 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:21:59.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:59.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:59.719 issued rwts: total=10463,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:59.719 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:59.719 job9: (groupid=0, jobs=1): err= 0: pid=2416394: Sun Jul 21 11:45:27 2024 00:21:59.719 read: IOPS=896, BW=224MiB/s (235MB/s)(2254MiB/10060msec) 00:21:59.719 slat (usec): min=13, max=66994, avg=1109.62, stdev=3488.52 00:21:59.719 clat (msec): min=13, max=177, avg=70.24, stdev=12.70 00:21:59.719 lat (msec): min=13, max=177, avg=71.35, stdev=13.24 00:21:59.719 clat percentiles (msec): 00:21:59.719 | 1.00th=[ 46], 5.00th=[ 51], 10.00th=[ 62], 20.00th=[ 63], 00:21:59.719 | 30.00th=[ 63], 40.00th=[ 65], 50.00th=[ 68], 60.00th=[ 71], 00:21:59.719 | 70.00th=[ 73], 80.00th=[ 79], 90.00th=[ 93], 95.00th=[ 95], 00:21:59.719 | 99.00th=[ 102], 99.50th=[ 108], 99.90th=[ 148], 99.95th=[ 157], 00:21:59.719 | 99.99th=[ 178] 00:21:59.719 bw ( KiB/s): min=158720, max=277504, per=5.83%, avg=229145.60, stdev=32512.47, samples=20 00:21:59.719 iops : min= 620, max= 1084, avg=895.10, stdev=127.00, samples=20 00:21:59.719 lat (msec) : 20=0.21%, 50=4.73%, 100=93.80%, 250=1.26% 00:21:59.719 cpu : usr=0.28%, sys=4.42%, ctx=1777, majf=0, minf=4097 00:21:59.719 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:21:59.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:59.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:59.719 issued rwts: total=9015,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:59.719 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:59.719 job10: (groupid=0, jobs=1): err= 0: pid=2416404: Sun Jul 21 11:45:27 2024 00:21:59.719 read: IOPS=1721, BW=430MiB/s (451MB/s)(4314MiB/10025msec) 00:21:59.719 slat (usec): min=11, max=37012, avg=572.18, stdev=2212.67 00:21:59.719 clat (usec): min=7823, max=95104, avg=36575.43, stdev=16892.27 00:21:59.719 lat (usec): min=7883, max=95148, avg=37147.61, stdev=17268.27 00:21:59.719 clat percentiles (usec): 00:21:59.719 | 1.00th=[12518], 5.00th=[13698], 10.00th=[14353], 20.00th=[15270], 00:21:59.719 | 30.00th=[16319], 40.00th=[31065], 50.00th=[45876], 60.00th=[46924], 00:21:59.719 | 70.00th=[47973], 80.00th=[51119], 90.00th=[55837], 95.00th=[56886], 00:21:59.719 | 99.00th=[59507], 99.50th=[62653], 99.90th=[88605], 99.95th=[90702], 00:21:59.719 | 99.99th=[93848] 00:21:59.719 bw ( KiB/s): min=279552, max=1100288, per=11.20%, avg=440115.20, stdev=252500.32, samples=20 00:21:59.719 iops : min= 1092, max= 4298, avg=1719.20, stdev=986.33, samples=20 00:21:59.719 lat (msec) : 10=0.54%, 20=32.03%, 50=45.22%, 100=22.21% 00:21:59.719 cpu : usr=0.53%, sys=6.23%, ctx=3239, majf=0, minf=4097 00:21:59.719 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:21:59.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:59.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:59.719 issued rwts: total=17255,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:59.719 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:59.719 00:21:59.719 Run status group 0 (all jobs): 00:21:59.719 READ: bw=3837MiB/s (4023MB/s), 224MiB/s-755MiB/s 
(235MB/s-792MB/s), io=37.7GiB (40.5GB), run=10014-10063msec 00:21:59.719 00:21:59.719 Disk stats (read/write): 00:21:59.719 nvme0n1: ios=27562/0, merge=0/0, ticks=1221263/0, in_queue=1221263, util=96.77% 00:21:59.719 nvme10n1: ios=33932/0, merge=0/0, ticks=1219821/0, in_queue=1219821, util=96.98% 00:21:59.719 nvme1n1: ios=17755/0, merge=0/0, ticks=1220038/0, in_queue=1220038, util=97.30% 00:21:59.719 nvme2n1: ios=30710/0, merge=0/0, ticks=1222463/0, in_queue=1222463, util=97.49% 00:21:59.719 nvme3n1: ios=23370/0, merge=0/0, ticks=1218248/0, in_queue=1218248, util=97.58% 00:21:59.720 nvme4n1: ios=20953/0, merge=0/0, ticks=1220719/0, in_queue=1220719, util=98.01% 00:21:59.720 nvme5n1: ios=59448/0, merge=0/0, ticks=1217788/0, in_queue=1217788, util=98.19% 00:21:59.720 nvme6n1: ios=17765/0, merge=0/0, ticks=1217929/0, in_queue=1217929, util=98.33% 00:21:59.720 nvme7n1: ios=20638/0, merge=0/0, ticks=1219442/0, in_queue=1219442, util=98.85% 00:21:59.720 nvme8n1: ios=17786/0, merge=0/0, ticks=1222655/0, in_queue=1222655, util=99.11% 00:21:59.720 nvme9n1: ios=33902/0, merge=0/0, ticks=1220598/0, in_queue=1220598, util=99.26% 00:21:59.720 11:45:27 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:21:59.720 [global] 00:21:59.720 thread=1 00:21:59.720 invalidate=1 00:21:59.720 rw=randwrite 00:21:59.720 time_based=1 00:21:59.720 runtime=10 00:21:59.720 ioengine=libaio 00:21:59.720 direct=1 00:21:59.720 bs=262144 00:21:59.720 iodepth=64 00:21:59.720 norandommap=1 00:21:59.720 numjobs=1 00:21:59.720 00:21:59.720 [job0] 00:21:59.720 filename=/dev/nvme0n1 00:21:59.720 [job1] 00:21:59.720 filename=/dev/nvme10n1 00:21:59.720 [job2] 00:21:59.720 filename=/dev/nvme1n1 00:21:59.720 [job3] 00:21:59.720 filename=/dev/nvme2n1 00:21:59.720 [job4] 00:21:59.720 filename=/dev/nvme3n1 00:21:59.720 [job5] 00:21:59.720 filename=/dev/nvme4n1 00:21:59.720 [job6] 00:21:59.720 filename=/dev/nvme5n1 00:21:59.720 [job7] 00:21:59.720 filename=/dev/nvme6n1 00:21:59.720 [job8] 00:21:59.720 filename=/dev/nvme7n1 00:21:59.720 [job9] 00:21:59.720 filename=/dev/nvme8n1 00:21:59.720 [job10] 00:21:59.720 filename=/dev/nvme9n1 00:21:59.720 Could not set queue depth (nvme0n1) 00:21:59.720 Could not set queue depth (nvme10n1) 00:21:59.720 Could not set queue depth (nvme1n1) 00:21:59.720 Could not set queue depth (nvme2n1) 00:21:59.720 Could not set queue depth (nvme3n1) 00:21:59.720 Could not set queue depth (nvme4n1) 00:21:59.720 Could not set queue depth (nvme5n1) 00:21:59.720 Could not set queue depth (nvme6n1) 00:21:59.720 Could not set queue depth (nvme7n1) 00:21:59.720 Could not set queue depth (nvme8n1) 00:21:59.720 Could not set queue depth (nvme9n1) 00:21:59.720 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:59.720 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:59.720 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:59.720 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:59.720 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:59.720 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:59.720 job6: (g=0): 
rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:59.720 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:59.720 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:59.720 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:59.720 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:59.720 fio-3.35 00:21:59.720 Starting 11 threads 00:22:09.690 00:22:09.690 job0: (groupid=0, jobs=1): err= 0: pid=2418235: Sun Jul 21 11:45:38 2024 00:22:09.690 write: IOPS=2081, BW=520MiB/s (546MB/s)(5245MiB/10078msec); 0 zone resets 00:22:09.690 slat (usec): min=15, max=83435, avg=473.86, stdev=1909.70 00:22:09.690 clat (msec): min=11, max=195, avg=30.26, stdev=23.78 00:22:09.690 lat (msec): min=11, max=195, avg=30.73, stdev=24.18 00:22:09.690 clat percentiles (msec): 00:22:09.690 | 1.00th=[ 17], 5.00th=[ 18], 10.00th=[ 18], 20.00th=[ 18], 00:22:09.690 | 30.00th=[ 19], 40.00th=[ 19], 50.00th=[ 20], 60.00th=[ 21], 00:22:09.690 | 70.00th=[ 36], 80.00th=[ 37], 90.00th=[ 39], 95.00th=[ 111], 00:22:09.690 | 99.00th=[ 117], 99.50th=[ 121], 99.90th=[ 171], 99.95th=[ 180], 00:22:09.690 | 99.99th=[ 188] 00:22:09.690 bw ( KiB/s): min=129024, max=885760, per=15.73%, avg=535449.60, stdev=287376.65, samples=20 00:22:09.690 iops : min= 504, max= 3460, avg=2091.60, stdev=1122.57, samples=20 00:22:09.690 lat (msec) : 20=59.39%, 50=33.59%, 100=1.18%, 250=5.84% 00:22:09.690 cpu : usr=3.42%, sys=5.27%, ctx=4527, majf=0, minf=137 00:22:09.690 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:22:09.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.690 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:09.690 issued rwts: total=0,20979,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.690 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:09.690 job1: (groupid=0, jobs=1): err= 0: pid=2418263: Sun Jul 21 11:45:38 2024 00:22:09.690 write: IOPS=696, BW=174MiB/s (183MB/s)(1756MiB/10079msec); 0 zone resets 00:22:09.690 slat (usec): min=22, max=47243, avg=1406.22, stdev=3758.09 00:22:09.690 clat (msec): min=14, max=192, avg=90.38, stdev=20.35 00:22:09.690 lat (msec): min=14, max=192, avg=91.79, stdev=20.86 00:22:09.690 clat percentiles (msec): 00:22:09.690 | 1.00th=[ 27], 5.00th=[ 55], 10.00th=[ 59], 20.00th=[ 74], 00:22:09.690 | 30.00th=[ 86], 40.00th=[ 88], 50.00th=[ 90], 60.00th=[ 96], 00:22:09.690 | 70.00th=[ 105], 80.00th=[ 109], 90.00th=[ 113], 95.00th=[ 116], 00:22:09.690 | 99.00th=[ 132], 99.50th=[ 142], 99.90th=[ 165], 99.95th=[ 171], 00:22:09.690 | 99.99th=[ 192] 00:22:09.690 bw ( KiB/s): min=139264, max=302080, per=5.23%, avg=178201.60, stdev=41631.04, samples=20 00:22:09.690 iops : min= 544, max= 1180, avg=696.10, stdev=162.62, samples=20 00:22:09.690 lat (msec) : 20=0.41%, 50=1.20%, 100=60.20%, 250=38.19% 00:22:09.690 cpu : usr=1.76%, sys=3.08%, ctx=1777, majf=0, minf=144 00:22:09.690 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:22:09.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.690 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:09.690 issued rwts: total=0,7025,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:22:09.690 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:09.690 job2: (groupid=0, jobs=1): err= 0: pid=2418282: Sun Jul 21 11:45:38 2024 00:22:09.690 write: IOPS=1141, BW=285MiB/s (299MB/s)(2866MiB/10039msec); 0 zone resets 00:22:09.690 slat (usec): min=24, max=8485, avg=867.69, stdev=1586.17 00:22:09.690 clat (usec): min=2430, max=83882, avg=55174.56, stdev=14814.37 00:22:09.690 lat (usec): min=2459, max=87847, avg=56042.25, stdev=15002.23 00:22:09.690 clat percentiles (usec): 00:22:09.690 | 1.00th=[33817], 5.00th=[35390], 10.00th=[36439], 20.00th=[38011], 00:22:09.690 | 30.00th=[47973], 40.00th=[50594], 50.00th=[52691], 60.00th=[55313], 00:22:09.690 | 70.00th=[69731], 80.00th=[73925], 90.00th=[74974], 95.00th=[76022], 00:22:09.690 | 99.00th=[78119], 99.50th=[78119], 99.90th=[81265], 99.95th=[81265], 00:22:09.690 | 99.99th=[83362] 00:22:09.690 bw ( KiB/s): min=215040, max=440320, per=8.57%, avg=291814.40, stdev=78457.60, samples=20 00:22:09.690 iops : min= 840, max= 1720, avg=1139.90, stdev=306.48, samples=20 00:22:09.690 lat (msec) : 4=0.03%, 10=0.14%, 20=0.07%, 50=37.32%, 100=62.44% 00:22:09.690 cpu : usr=2.78%, sys=4.99%, ctx=2799, majf=0, minf=18 00:22:09.690 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:22:09.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.690 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:09.690 issued rwts: total=0,11462,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.690 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:09.690 job3: (groupid=0, jobs=1): err= 0: pid=2418293: Sun Jul 21 11:45:38 2024 00:22:09.690 write: IOPS=713, BW=178MiB/s (187MB/s)(1798MiB/10076msec); 0 zone resets 00:22:09.690 slat (usec): min=25, max=52483, avg=1386.01, stdev=4008.94 00:22:09.690 clat (msec): min=11, max=180, avg=88.27, stdev=22.54 00:22:09.690 lat (msec): min=11, max=180, avg=89.66, stdev=23.10 00:22:09.691 clat percentiles (msec): 00:22:09.691 | 1.00th=[ 51], 5.00th=[ 53], 10.00th=[ 55], 20.00th=[ 65], 00:22:09.691 | 30.00th=[ 83], 40.00th=[ 88], 50.00th=[ 89], 60.00th=[ 94], 00:22:09.691 | 70.00th=[ 105], 80.00th=[ 109], 90.00th=[ 113], 95.00th=[ 116], 00:22:09.691 | 99.00th=[ 142], 99.50th=[ 157], 99.90th=[ 167], 99.95th=[ 180], 00:22:09.691 | 99.99th=[ 180] 00:22:09.691 bw ( KiB/s): min=131584, max=299520, per=5.36%, avg=182451.20, stdev=49077.63, samples=20 00:22:09.691 iops : min= 514, max= 1170, avg=712.70, stdev=191.71, samples=20 00:22:09.691 lat (msec) : 20=0.17%, 50=0.74%, 100=61.60%, 250=37.50% 00:22:09.691 cpu : usr=1.72%, sys=3.23%, ctx=1785, majf=0, minf=74 00:22:09.691 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:22:09.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:09.691 issued rwts: total=0,7190,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.691 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:09.691 job4: (groupid=0, jobs=1): err= 0: pid=2418298: Sun Jul 21 11:45:38 2024 00:22:09.691 write: IOPS=762, BW=191MiB/s (200MB/s)(1920MiB/10076msec); 0 zone resets 00:22:09.691 slat (usec): min=23, max=28168, avg=1295.30, stdev=3122.31 00:22:09.691 clat (msec): min=12, max=200, avg=82.66, stdev=27.99 00:22:09.691 lat (msec): min=12, max=200, avg=83.96, stdev=28.51 00:22:09.691 clat percentiles (msec): 00:22:09.691 | 1.00th=[ 34], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 40], 
00:22:09.691 | 30.00th=[ 72], 40.00th=[ 86], 50.00th=[ 89], 60.00th=[ 92], 00:22:09.691 | 70.00th=[ 104], 80.00th=[ 108], 90.00th=[ 113], 95.00th=[ 116], 00:22:09.691 | 99.00th=[ 127], 99.50th=[ 138], 99.90th=[ 174], 99.95th=[ 176], 00:22:09.691 | 99.99th=[ 201] 00:22:09.691 bw ( KiB/s): min=133632, max=444928, per=5.73%, avg=194944.00, stdev=78418.10, samples=20 00:22:09.691 iops : min= 522, max= 1738, avg=761.50, stdev=306.32, samples=20 00:22:09.691 lat (msec) : 20=0.10%, 50=21.26%, 100=44.03%, 250=34.61% 00:22:09.691 cpu : usr=1.58%, sys=3.24%, ctx=1869, majf=0, minf=203 00:22:09.691 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:22:09.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:09.691 issued rwts: total=0,7678,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.691 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:09.691 job5: (groupid=0, jobs=1): err= 0: pid=2418319: Sun Jul 21 11:45:38 2024 00:22:09.691 write: IOPS=896, BW=224MiB/s (235MB/s)(2258MiB/10075msec); 0 zone resets 00:22:09.691 slat (usec): min=19, max=50990, avg=1100.79, stdev=3467.99 00:22:09.691 clat (msec): min=4, max=184, avg=70.26, stdev=35.78 00:22:09.691 lat (msec): min=4, max=184, avg=71.36, stdev=36.43 00:22:09.691 clat percentiles (msec): 00:22:09.691 | 1.00th=[ 17], 5.00th=[ 19], 10.00th=[ 30], 20.00th=[ 35], 00:22:09.691 | 30.00th=[ 36], 40.00th=[ 38], 50.00th=[ 87], 60.00th=[ 89], 00:22:09.691 | 70.00th=[ 100], 80.00th=[ 107], 90.00th=[ 112], 95.00th=[ 115], 00:22:09.691 | 99.00th=[ 130], 99.50th=[ 144], 99.90th=[ 171], 99.95th=[ 180], 00:22:09.691 | 99.99th=[ 184] 00:22:09.691 bw ( KiB/s): min=134144, max=657699, per=6.75%, avg=229672.15, stdev=148210.32, samples=20 00:22:09.691 iops : min= 524, max= 2569, avg=897.15, stdev=578.93, samples=20 00:22:09.691 lat (msec) : 10=0.27%, 20=8.49%, 50=34.98%, 100=26.42%, 250=29.85% 00:22:09.691 cpu : usr=1.94%, sys=3.34%, ctx=2163, majf=0, minf=202 00:22:09.691 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:22:09.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:09.691 issued rwts: total=0,9032,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.691 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:09.691 job6: (groupid=0, jobs=1): err= 0: pid=2418329: Sun Jul 21 11:45:38 2024 00:22:09.691 write: IOPS=1110, BW=278MiB/s (291MB/s)(2786MiB/10036msec); 0 zone resets 00:22:09.691 slat (usec): min=18, max=21128, avg=869.75, stdev=1694.01 00:22:09.691 clat (usec): min=839, max=123179, avg=56759.40, stdev=17941.49 00:22:09.691 lat (usec): min=904, max=126816, avg=57629.15, stdev=18171.35 00:22:09.691 clat percentiles (msec): 00:22:09.691 | 1.00th=[ 33], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 38], 00:22:09.691 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 53], 60.00th=[ 58], 00:22:09.691 | 70.00th=[ 72], 80.00th=[ 74], 90.00th=[ 77], 95.00th=[ 79], 00:22:09.691 | 99.00th=[ 107], 99.50th=[ 109], 99.90th=[ 111], 99.95th=[ 121], 00:22:09.691 | 99.99th=[ 124] 00:22:09.691 bw ( KiB/s): min=155648, max=439808, per=8.33%, avg=283622.40, stdev=87484.84, samples=20 00:22:09.691 iops : min= 608, max= 1718, avg=1107.90, stdev=341.74, samples=20 00:22:09.691 lat (usec) : 1000=0.02% 00:22:09.691 lat (msec) : 2=0.08%, 4=0.29%, 10=0.39%, 20=0.05%, 50=36.87% 00:22:09.691 lat (msec) : 
100=59.15%, 250=3.15% 00:22:09.691 cpu : usr=2.51%, sys=4.58%, ctx=2829, majf=0, minf=337 00:22:09.691 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:22:09.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:09.691 issued rwts: total=0,11142,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.691 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:09.691 job7: (groupid=0, jobs=1): err= 0: pid=2418337: Sun Jul 21 11:45:38 2024 00:22:09.691 write: IOPS=3190, BW=798MiB/s (836MB/s)(7986MiB/10011msec); 0 zone resets 00:22:09.691 slat (usec): min=15, max=8219, avg=306.77, stdev=612.00 00:22:09.691 clat (usec): min=6747, max=75860, avg=19745.93, stdev=6880.88 00:22:09.691 lat (usec): min=6801, max=76810, avg=20052.70, stdev=6974.31 00:22:09.691 clat percentiles (usec): 00:22:09.691 | 1.00th=[15008], 5.00th=[15664], 10.00th=[15926], 20.00th=[16450], 00:22:09.691 | 30.00th=[17171], 40.00th=[17695], 50.00th=[17957], 60.00th=[18220], 00:22:09.691 | 70.00th=[18744], 80.00th=[19268], 90.00th=[32637], 95.00th=[35914], 00:22:09.691 | 99.00th=[39060], 99.50th=[65799], 99.90th=[69731], 99.95th=[71828], 00:22:09.691 | 99.99th=[73925] 00:22:09.691 bw ( KiB/s): min=358912, max=996864, per=23.74%, avg=808313.26, stdev=207355.70, samples=19 00:22:09.691 iops : min= 1402, max= 3894, avg=3157.47, stdev=809.98, samples=19 00:22:09.691 lat (msec) : 10=0.15%, 20=87.55%, 50=11.43%, 100=0.87% 00:22:09.691 cpu : usr=4.55%, sys=7.31%, ctx=6632, majf=0, minf=80 00:22:09.691 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:09.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:09.691 issued rwts: total=0,31942,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.691 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:09.691 job8: (groupid=0, jobs=1): err= 0: pid=2418360: Sun Jul 21 11:45:38 2024 00:22:09.691 write: IOPS=811, BW=203MiB/s (213MB/s)(2044MiB/10080msec); 0 zone resets 00:22:09.691 slat (usec): min=23, max=50168, avg=1204.05, stdev=3431.16 00:22:09.691 clat (msec): min=11, max=189, avg=77.67, stdev=23.03 00:22:09.691 lat (msec): min=11, max=189, avg=78.87, stdev=23.53 00:22:09.691 clat percentiles (msec): 00:22:09.691 | 1.00th=[ 39], 5.00th=[ 52], 10.00th=[ 53], 20.00th=[ 56], 00:22:09.691 | 30.00th=[ 59], 40.00th=[ 71], 50.00th=[ 74], 60.00th=[ 75], 00:22:09.691 | 70.00th=[ 79], 80.00th=[ 107], 90.00th=[ 112], 95.00th=[ 114], 00:22:09.691 | 99.00th=[ 124], 99.50th=[ 146], 99.90th=[ 182], 99.95th=[ 188], 00:22:09.691 | 99.99th=[ 190] 00:22:09.691 bw ( KiB/s): min=136192, max=310784, per=6.10%, avg=207692.80, stdev=58479.77, samples=20 00:22:09.691 iops : min= 532, max= 1214, avg=811.30, stdev=228.44, samples=20 00:22:09.691 lat (msec) : 20=0.16%, 50=2.34%, 100=69.64%, 250=27.86% 00:22:09.691 cpu : usr=1.94%, sys=3.24%, ctx=2047, majf=0, minf=13 00:22:09.691 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:22:09.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:09.691 issued rwts: total=0,8176,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.691 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:09.692 job9: (groupid=0, jobs=1): err= 0: pid=2418372: Sun Jul 21 
11:45:38 2024 00:22:09.692 write: IOPS=715, BW=179MiB/s (188MB/s)(1803MiB/10080msec); 0 zone resets 00:22:09.692 slat (usec): min=22, max=32497, avg=1381.31, stdev=3505.49 00:22:09.692 clat (msec): min=10, max=189, avg=88.02, stdev=22.03 00:22:09.692 lat (msec): min=10, max=189, avg=89.40, stdev=22.54 00:22:09.692 clat percentiles (msec): 00:22:09.692 | 1.00th=[ 51], 5.00th=[ 53], 10.00th=[ 54], 20.00th=[ 66], 00:22:09.692 | 30.00th=[ 84], 40.00th=[ 87], 50.00th=[ 89], 60.00th=[ 94], 00:22:09.692 | 70.00th=[ 105], 80.00th=[ 109], 90.00th=[ 113], 95.00th=[ 115], 00:22:09.692 | 99.00th=[ 132], 99.50th=[ 136], 99.90th=[ 180], 99.95th=[ 180], 00:22:09.692 | 99.99th=[ 190] 00:22:09.692 bw ( KiB/s): min=138752, max=301056, per=5.38%, avg=183040.00, stdev=48913.24, samples=20 00:22:09.692 iops : min= 542, max= 1176, avg=715.00, stdev=191.07, samples=20 00:22:09.692 lat (msec) : 20=0.18%, 50=0.87%, 100=61.57%, 250=37.38% 00:22:09.692 cpu : usr=1.85%, sys=3.08%, ctx=1784, majf=0, minf=203 00:22:09.692 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:22:09.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.692 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:09.692 issued rwts: total=0,7213,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.692 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:09.692 job10: (groupid=0, jobs=1): err= 0: pid=2418381: Sun Jul 21 11:45:38 2024 00:22:09.692 write: IOPS=1216, BW=304MiB/s (319MB/s)(3052MiB/10036msec); 0 zone resets 00:22:09.692 slat (usec): min=23, max=12431, avg=793.86, stdev=1483.90 00:22:09.692 clat (usec): min=5721, max=81902, avg=51803.59, stdev=16255.46 00:22:09.692 lat (usec): min=5788, max=85820, avg=52597.45, stdev=16485.74 00:22:09.692 clat percentiles (usec): 00:22:09.692 | 1.00th=[20317], 5.00th=[33817], 10.00th=[35390], 20.00th=[36439], 00:22:09.692 | 30.00th=[37487], 40.00th=[40109], 50.00th=[50070], 60.00th=[52167], 00:22:09.692 | 70.00th=[66323], 80.00th=[72877], 90.00th=[74974], 95.00th=[76022], 00:22:09.692 | 99.00th=[78119], 99.50th=[78119], 99.90th=[79168], 99.95th=[80217], 00:22:09.692 | 99.99th=[82314] 00:22:09.692 bw ( KiB/s): min=215040, max=458240, per=9.13%, avg=310912.00, stdev=90447.28, samples=20 00:22:09.692 iops : min= 840, max= 1790, avg=1214.50, stdev=353.31, samples=20 00:22:09.692 lat (msec) : 10=0.07%, 20=0.90%, 50=48.74%, 100=50.29% 00:22:09.692 cpu : usr=2.90%, sys=4.34%, ctx=3161, majf=0, minf=281 00:22:09.692 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:22:09.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.692 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:09.692 issued rwts: total=0,12208,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.692 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:09.692 00:22:09.692 Run status group 0 (all jobs): 00:22:09.692 WRITE: bw=3325MiB/s (3486MB/s), 174MiB/s-798MiB/s (183MB/s-836MB/s), io=32.7GiB (35.1GB), run=10011-10080msec 00:22:09.692 00:22:09.692 Disk stats (read/write): 00:22:09.692 nvme0n1: ios=49/41643, merge=0/0, ticks=17/1212399, in_queue=1212416, util=96.44% 00:22:09.692 nvme10n1: ios=0/13746, merge=0/0, ticks=0/1207118, in_queue=1207118, util=96.60% 00:22:09.692 nvme1n1: ios=0/22415, merge=0/0, ticks=0/1212784, in_queue=1212784, util=97.00% 00:22:09.692 nvme2n1: ios=0/14109, merge=0/0, ticks=0/1208911, in_queue=1208911, util=97.31% 00:22:09.692 nvme3n1: 
ios=0/15068, merge=0/0, ticks=0/1209728, in_queue=1209728, util=97.40% 00:22:09.692 nvme4n1: ios=0/17771, merge=0/0, ticks=0/1208686, in_queue=1208686, util=97.81% 00:22:09.692 nvme5n1: ios=0/21771, merge=0/0, ticks=0/1213737, in_queue=1213737, util=98.02% 00:22:09.692 nvme6n1: ios=0/62551, merge=0/0, ticks=0/1227133, in_queue=1227133, util=98.16% 00:22:09.692 nvme7n1: ios=0/16030, merge=0/0, ticks=0/1208446, in_queue=1208446, util=98.65% 00:22:09.692 nvme8n1: ios=0/14120, merge=0/0, ticks=0/1208149, in_queue=1208149, util=98.88% 00:22:09.692 nvme9n1: ios=0/23899, merge=0/0, ticks=0/1212093, in_queue=1212093, util=99.04% 00:22:09.692 11:45:38 -- target/multiconnection.sh@36 -- # sync 00:22:09.692 11:45:38 -- target/multiconnection.sh@37 -- # seq 1 11 00:22:09.692 11:45:38 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:09.692 11:45:38 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:09.950 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:09.950 11:45:39 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:22:09.950 11:45:39 -- common/autotest_common.sh@1198 -- # local i=0 00:22:09.950 11:45:39 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:09.950 11:45:39 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:22:09.950 11:45:39 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:09.950 11:45:39 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:22:09.950 11:45:39 -- common/autotest_common.sh@1210 -- # return 0 00:22:09.950 11:45:39 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:09.950 11:45:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:09.950 11:45:39 -- common/autotest_common.sh@10 -- # set +x 00:22:09.950 11:45:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:09.950 11:45:39 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:09.950 11:45:39 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:22:10.888 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:22:10.888 11:45:40 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:22:10.888 11:45:40 -- common/autotest_common.sh@1198 -- # local i=0 00:22:10.888 11:45:40 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:10.888 11:45:40 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:22:10.888 11:45:40 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:10.888 11:45:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:22:10.888 11:45:40 -- common/autotest_common.sh@1210 -- # return 0 00:22:10.888 11:45:40 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:10.888 11:45:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:10.888 11:45:40 -- common/autotest_common.sh@10 -- # set +x 00:22:10.888 11:45:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:10.888 11:45:40 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:10.888 11:45:40 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:22:12.265 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:22:12.265 11:45:41 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:22:12.265 11:45:41 -- common/autotest_common.sh@1198 -- # local i=0 00:22:12.265 11:45:41 -- common/autotest_common.sh@1199 -- # 
lsblk -o NAME,SERIAL 00:22:12.265 11:45:41 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:22:12.265 11:45:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:22:12.265 11:45:41 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:12.265 11:45:41 -- common/autotest_common.sh@1210 -- # return 0 00:22:12.265 11:45:41 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:22:12.265 11:45:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:12.265 11:45:41 -- common/autotest_common.sh@10 -- # set +x 00:22:12.265 11:45:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:12.265 11:45:41 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:12.265 11:45:41 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:22:13.201 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:22:13.201 11:45:42 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:22:13.201 11:45:42 -- common/autotest_common.sh@1198 -- # local i=0 00:22:13.201 11:45:42 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:13.201 11:45:42 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:22:13.201 11:45:42 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:13.201 11:45:42 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:22:13.201 11:45:42 -- common/autotest_common.sh@1210 -- # return 0 00:22:13.201 11:45:42 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:22:13.201 11:45:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:13.201 11:45:42 -- common/autotest_common.sh@10 -- # set +x 00:22:13.201 11:45:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:13.201 11:45:42 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:13.201 11:45:42 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:22:14.141 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:22:14.141 11:45:43 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:22:14.141 11:45:43 -- common/autotest_common.sh@1198 -- # local i=0 00:22:14.141 11:45:43 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:14.141 11:45:43 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:22:14.141 11:45:43 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:14.141 11:45:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:22:14.141 11:45:43 -- common/autotest_common.sh@1210 -- # return 0 00:22:14.141 11:45:43 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:22:14.141 11:45:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.141 11:45:43 -- common/autotest_common.sh@10 -- # set +x 00:22:14.141 11:45:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.141 11:45:43 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:14.141 11:45:43 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:22:15.077 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:22:15.077 11:45:44 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:22:15.077 11:45:44 -- common/autotest_common.sh@1198 -- # local i=0 00:22:15.077 11:45:44 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:15.077 11:45:44 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 
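The records above show waitforserial_disconnect polling lsblk until the SPDK6 serial stops appearing. A minimal sketch of that polling helper, paraphrased from the surrounding xtrace records; the retry cap and sleep interval are assumptions carried over from the connect-side waitforserial, since every namespace here detaches before a retry becomes visible:

```bash
# Hypothetical reconstruction of waitforserial_disconnect from the xtrace;
# only the lsblk/grep checks are traced, the loop bound and sleep are assumed.
waitforserial_disconnect() {
    local serial=$1 i=0
    # Poll while any block device still reports the given SPDK serial.
    while lsblk -o NAME,SERIAL | grep -q -w "$serial"; do
        (( i++ > 15 )) && return 1   # assumed retry cap, mirroring waitforserial
        sleep 2                      # assumed interval
    done
    # Double-check the flat listing before declaring the namespace gone.
    lsblk -l -o NAME,SERIAL | grep -q -w "$serial" && return 1
    return 0
}
```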
00:22:15.077 11:45:44 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:15.077 11:45:44 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:22:15.077 11:45:44 -- common/autotest_common.sh@1210 -- # return 0 00:22:15.077 11:45:44 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:22:15.077 11:45:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:15.077 11:45:44 -- common/autotest_common.sh@10 -- # set +x 00:22:15.077 11:45:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:15.077 11:45:44 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:15.077 11:45:44 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:22:16.015 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:22:16.015 11:45:45 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:22:16.015 11:45:45 -- common/autotest_common.sh@1198 -- # local i=0 00:22:16.015 11:45:45 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:16.015 11:45:45 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:22:16.015 11:45:45 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:16.015 11:45:45 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:22:16.015 11:45:45 -- common/autotest_common.sh@1210 -- # return 0 00:22:16.015 11:45:45 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:22:16.015 11:45:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:16.015 11:45:45 -- common/autotest_common.sh@10 -- # set +x 00:22:16.015 11:45:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:16.015 11:45:45 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:16.015 11:45:45 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:22:16.950 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:22:16.950 11:45:46 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:22:16.950 11:45:46 -- common/autotest_common.sh@1198 -- # local i=0 00:22:16.950 11:45:46 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:16.950 11:45:46 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:22:16.950 11:45:46 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:16.950 11:45:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:22:16.950 11:45:46 -- common/autotest_common.sh@1210 -- # return 0 00:22:16.950 11:45:46 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:22:16.950 11:45:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:16.950 11:45:46 -- common/autotest_common.sh@10 -- # set +x 00:22:16.950 11:45:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:16.950 11:45:46 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:16.950 11:45:46 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:22:17.885 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:22:17.885 11:45:47 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:22:17.885 11:45:47 -- common/autotest_common.sh@1198 -- # local i=0 00:22:17.885 11:45:47 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:17.885 11:45:47 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:22:17.885 11:45:47 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:22:17.885 
11:45:47 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:17.885 11:45:47 -- common/autotest_common.sh@1210 -- # return 0 00:22:17.885 11:45:47 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:22:17.885 11:45:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:17.885 11:45:47 -- common/autotest_common.sh@10 -- # set +x 00:22:18.143 11:45:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:18.143 11:45:47 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:18.143 11:45:47 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:22:19.095 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:22:19.095 11:45:48 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:22:19.095 11:45:48 -- common/autotest_common.sh@1198 -- # local i=0 00:22:19.095 11:45:48 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:19.095 11:45:48 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:22:19.095 11:45:48 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:19.095 11:45:48 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:22:19.095 11:45:48 -- common/autotest_common.sh@1210 -- # return 0 00:22:19.095 11:45:48 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:22:19.095 11:45:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:19.095 11:45:48 -- common/autotest_common.sh@10 -- # set +x 00:22:19.095 11:45:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:19.095 11:45:48 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:19.095 11:45:48 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:22:20.031 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:22:20.031 11:45:49 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:22:20.031 11:45:49 -- common/autotest_common.sh@1198 -- # local i=0 00:22:20.031 11:45:49 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:20.031 11:45:49 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:22:20.031 11:45:49 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:20.031 11:45:49 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:22:20.031 11:45:49 -- common/autotest_common.sh@1210 -- # return 0 00:22:20.031 11:45:49 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:22:20.031 11:45:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:20.031 11:45:49 -- common/autotest_common.sh@10 -- # set +x 00:22:20.031 11:45:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:20.031 11:45:49 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:22:20.031 11:45:49 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:22:20.031 11:45:49 -- target/multiconnection.sh@47 -- # nvmftestfini 00:22:20.031 11:45:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:20.031 11:45:49 -- nvmf/common.sh@116 -- # sync 00:22:20.031 11:45:49 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:22:20.031 11:45:49 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:22:20.031 11:45:49 -- nvmf/common.sh@119 -- # set +e 00:22:20.031 11:45:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:20.031 11:45:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:22:20.031 rmmod nvme_rdma 00:22:20.031 rmmod nvme_fabrics 
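Before this teardown, each subsystem earlier in the run was attached with the same connect-then-poll pattern. A minimal sketch of that pattern, assembled from the nvme connect invocations and the (( i++ <= 15 )) / lsblk / grep -c records in the trace; the helper name and argument plumbing are illustrative, not the verbatim common/autotest_common.sh code:

```bash
# Sketch of the connect-and-wait pattern from the trace: attach one RDMA
# subsystem, then poll until a namespace with the expected serial appears.
connect_and_wait() {
    local cnode=$1 serial=$2
    local nvme_device_counter=1 nvme_devices=0 i=0

    nvme connect -i 15 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        --hostid=8013ee90-59d8-e711-906e-00163566263e \
        -t rdma -n "$cnode" -a 192.168.100.8 -s 4420

    while (( i++ <= 15 )); do
        sleep 2
        # Count block devices whose SERIAL matches, e.g. SPDK5 for cnode5.
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
    done
    return 1
}

# Usage as exercised above, e.g.: connect_and_wait nqn.2016-06.io.spdk:cnode5 SPDK5
```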
00:22:20.031 11:45:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:20.031 11:45:49 -- nvmf/common.sh@123 -- # set -e 00:22:20.031 11:45:49 -- nvmf/common.sh@124 -- # return 0 00:22:20.031 11:45:49 -- nvmf/common.sh@477 -- # '[' -n 2409481 ']' 00:22:20.031 11:45:49 -- nvmf/common.sh@478 -- # killprocess 2409481 00:22:20.031 11:45:49 -- common/autotest_common.sh@926 -- # '[' -z 2409481 ']' 00:22:20.031 11:45:49 -- common/autotest_common.sh@930 -- # kill -0 2409481 00:22:20.031 11:45:49 -- common/autotest_common.sh@931 -- # uname 00:22:20.031 11:45:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:20.031 11:45:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2409481 00:22:20.031 11:45:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:20.031 11:45:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:20.031 11:45:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2409481' 00:22:20.031 killing process with pid 2409481 00:22:20.031 11:45:49 -- common/autotest_common.sh@945 -- # kill 2409481 00:22:20.031 11:45:49 -- common/autotest_common.sh@950 -- # wait 2409481 00:22:20.598 11:45:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:20.598 11:45:49 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:22:20.598 00:22:20.598 real 1m17.065s 00:22:20.598 user 4m54.318s 00:22:20.598 sys 0m20.960s 00:22:20.598 11:45:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:20.598 11:45:49 -- common/autotest_common.sh@10 -- # set +x 00:22:20.598 ************************************ 00:22:20.598 END TEST nvmf_multiconnection 00:22:20.598 ************************************ 00:22:20.598 11:45:49 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:22:20.598 11:45:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:20.598 11:45:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:20.598 11:45:49 -- common/autotest_common.sh@10 -- # set +x 00:22:20.598 ************************************ 00:22:20.598 START TEST nvmf_initiator_timeout 00:22:20.598 ************************************ 00:22:20.598 11:45:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:22:20.857 * Looking for test storage... 
00:22:20.857 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:22:20.857 11:45:50 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:20.857 11:45:50 -- nvmf/common.sh@7 -- # uname -s 00:22:20.857 11:45:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:20.857 11:45:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:20.857 11:45:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:20.857 11:45:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:20.857 11:45:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:20.857 11:45:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:20.857 11:45:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:20.857 11:45:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:20.857 11:45:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:20.857 11:45:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:20.857 11:45:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:20.857 11:45:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:22:20.857 11:45:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:20.857 11:45:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:20.857 11:45:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:20.857 11:45:50 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:20.857 11:45:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:20.858 11:45:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:20.858 11:45:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:20.858 11:45:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.858 11:45:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.858 11:45:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.858 11:45:50 -- paths/export.sh@5 -- # export PATH 00:22:20.858 11:45:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.858 11:45:50 -- nvmf/common.sh@46 -- # : 0 00:22:20.858 11:45:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:20.858 11:45:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:20.858 11:45:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:20.858 11:45:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:20.858 11:45:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:20.858 11:45:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:20.858 11:45:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:20.858 11:45:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:20.858 11:45:50 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:20.858 11:45:50 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:20.858 11:45:50 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:22:20.858 11:45:50 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:22:20.858 11:45:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:20.858 11:45:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:20.858 11:45:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:20.858 11:45:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:20.858 11:45:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.858 11:45:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:20.858 11:45:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.858 11:45:50 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:20.858 11:45:50 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:20.858 11:45:50 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:20.858 11:45:50 -- common/autotest_common.sh@10 -- # set +x 00:22:29.018 11:45:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:29.018 11:45:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:29.018 11:45:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:29.018 11:45:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:29.018 11:45:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:29.018 11:45:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:29.018 11:45:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:29.018 11:45:58 -- nvmf/common.sh@294 -- # net_devs=() 00:22:29.018 11:45:58 -- nvmf/common.sh@294 -- # local -ga net_devs 
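Before the PCI scan below, common.sh pins the fixture constants visible in the trace. Collected in one place as a sketch (values copied from the traced assignments; the HOSTID derivation is an assumption, not shown in the log):

    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NVMF_IP_PREFIX=192.168.100
    NVMF_IP_LEAST_ADDR=8               # first host octet handed to the NICs
    NVMF_SERIAL=SPDKISFASTANDAWESOME
    NVME_HOSTNQN=$(nvme gen-hostnqn)   # uuid-based NQN, stable per host
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumed: bare UUID taken from the NQN
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")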
00:22:29.018 11:45:58 -- nvmf/common.sh@295 -- # e810=() 00:22:29.018 11:45:58 -- nvmf/common.sh@295 -- # local -ga e810 00:22:29.018 11:45:58 -- nvmf/common.sh@296 -- # x722=() 00:22:29.018 11:45:58 -- nvmf/common.sh@296 -- # local -ga x722 00:22:29.018 11:45:58 -- nvmf/common.sh@297 -- # mlx=() 00:22:29.018 11:45:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:29.018 11:45:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:29.018 11:45:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:29.018 11:45:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:29.018 11:45:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:29.018 11:45:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:29.018 11:45:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:29.018 11:45:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:29.018 11:45:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:29.018 11:45:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:29.018 11:45:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:29.018 11:45:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:29.018 11:45:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:29.018 11:45:58 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:22:29.018 11:45:58 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:22:29.018 11:45:58 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:22:29.018 11:45:58 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:22:29.018 11:45:58 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:22:29.018 11:45:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:29.018 11:45:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:29.018 11:45:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:22:29.018 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:22:29.018 11:45:58 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:22:29.018 11:45:58 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:22:29.018 11:45:58 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:29.018 11:45:58 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:29.018 11:45:58 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:22:29.018 11:45:58 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:22:29.018 11:45:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:29.018 11:45:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:22:29.018 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:22:29.018 11:45:58 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:22:29.018 11:45:58 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:22:29.018 11:45:58 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:29.018 11:45:58 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:29.018 11:45:58 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:22:29.018 11:45:58 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:22:29.018 11:45:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:29.018 11:45:58 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:22:29.018 11:45:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:29.018 11:45:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.018 11:45:58 
-- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:29.018 11:45:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.018 11:45:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:22:29.018 Found net devices under 0000:d9:00.0: mlx_0_0 00:22:29.018 11:45:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.018 11:45:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:29.018 11:45:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.018 11:45:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:29.018 11:45:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.018 11:45:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:22:29.018 Found net devices under 0000:d9:00.1: mlx_0_1 00:22:29.018 11:45:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.018 11:45:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:29.018 11:45:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:29.018 11:45:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:29.018 11:45:58 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:22:29.018 11:45:58 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:22:29.018 11:45:58 -- nvmf/common.sh@408 -- # rdma_device_init 00:22:29.018 11:45:58 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:22:29.018 11:45:58 -- nvmf/common.sh@57 -- # uname 00:22:29.018 11:45:58 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:22:29.018 11:45:58 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:22:29.018 11:45:58 -- nvmf/common.sh@62 -- # modprobe ib_core 00:22:29.018 11:45:58 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:22:29.018 11:45:58 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:22:29.018 11:45:58 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:22:29.018 11:45:58 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:22:29.018 11:45:58 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:22:29.018 11:45:58 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:22:29.018 11:45:58 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:29.018 11:45:58 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:22:29.018 11:45:58 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:29.018 11:45:58 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:22:29.018 11:45:58 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:22:29.018 11:45:58 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:29.018 11:45:58 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:22:29.018 11:45:58 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:29.018 11:45:58 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:29.018 11:45:58 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:29.018 11:45:58 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:22:29.018 11:45:58 -- nvmf/common.sh@104 -- # continue 2 00:22:29.018 11:45:58 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:29.018 11:45:58 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:29.018 11:45:58 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:29.018 11:45:58 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:29.018 11:45:58 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:29.018 11:45:58 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:22:29.018 11:45:58 -- nvmf/common.sh@104 -- # continue 2 00:22:29.018 11:45:58 -- 
nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:22:29.018 11:45:58 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:22:29.018 11:45:58 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:22:29.018 11:45:58 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:22:29.018 11:45:58 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:29.018 11:45:58 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:29.018 11:45:58 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:22:29.018 11:45:58 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:22:29.018 11:45:58 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:22:29.018 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:29.018 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:22:29.018 altname enp217s0f0np0 00:22:29.018 altname ens818f0np0 00:22:29.018 inet 192.168.100.8/24 scope global mlx_0_0 00:22:29.018 valid_lft forever preferred_lft forever 00:22:29.018 11:45:58 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:22:29.018 11:45:58 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:22:29.018 11:45:58 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:22:29.018 11:45:58 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:22:29.018 11:45:58 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:29.018 11:45:58 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:29.018 11:45:58 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:22:29.018 11:45:58 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:22:29.018 11:45:58 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:22:29.018 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:29.018 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:22:29.018 altname enp217s0f1np1 00:22:29.018 altname ens818f1np1 00:22:29.018 inet 192.168.100.9/24 scope global mlx_0_1 00:22:29.018 valid_lft forever preferred_lft forever 00:22:29.018 11:45:58 -- nvmf/common.sh@410 -- # return 0 00:22:29.018 11:45:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:29.018 11:45:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:29.018 11:45:58 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:22:29.018 11:45:58 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:22:29.018 11:45:58 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:22:29.018 11:45:58 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:29.018 11:45:58 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:22:29.018 11:45:58 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:22:29.018 11:45:58 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:29.018 11:45:58 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:22:29.018 11:45:58 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:29.018 11:45:58 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:29.018 11:45:58 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:29.018 11:45:58 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:22:29.018 11:45:58 -- nvmf/common.sh@104 -- # continue 2 00:22:29.018 11:45:58 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:29.018 11:45:58 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:29.018 11:45:58 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:29.019 11:45:58 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:29.019 11:45:58 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:29.019 11:45:58 -- nvmf/common.sh@103 -- # echo mlx_0_1 
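The per-interface lookup traced above reduces to a single pipeline: field 4 of `ip -o -4 addr show` is the CIDR address, and cut strips the prefix length. As a standalone sketch:

    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig
    get_ip_address mlx_0_1   # -> 192.168.100.9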
00:22:29.019 11:45:58 -- nvmf/common.sh@104 -- # continue 2 00:22:29.019 11:45:58 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:22:29.019 11:45:58 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:22:29.019 11:45:58 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:22:29.019 11:45:58 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:29.019 11:45:58 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:22:29.019 11:45:58 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:29.019 11:45:58 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:22:29.019 11:45:58 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:22:29.019 11:45:58 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:22:29.019 11:45:58 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:22:29.019 11:45:58 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:29.019 11:45:58 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:29.019 11:45:58 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:22:29.019 192.168.100.9' 00:22:29.019 11:45:58 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:22:29.019 192.168.100.9' 00:22:29.019 11:45:58 -- nvmf/common.sh@445 -- # head -n 1 00:22:29.019 11:45:58 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:29.019 11:45:58 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:22:29.019 192.168.100.9' 00:22:29.019 11:45:58 -- nvmf/common.sh@446 -- # tail -n +2 00:22:29.019 11:45:58 -- nvmf/common.sh@446 -- # head -n 1 00:22:29.019 11:45:58 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:29.019 11:45:58 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:22:29.019 11:45:58 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:29.019 11:45:58 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:22:29.019 11:45:58 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:22:29.019 11:45:58 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:22:29.019 11:45:58 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:22:29.019 11:45:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:29.019 11:45:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:29.019 11:45:58 -- common/autotest_common.sh@10 -- # set +x 00:22:29.019 11:45:58 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:29.019 11:45:58 -- nvmf/common.sh@469 -- # nvmfpid=2425958 00:22:29.019 11:45:58 -- nvmf/common.sh@470 -- # waitforlisten 2425958 00:22:29.019 11:45:58 -- common/autotest_common.sh@819 -- # '[' -z 2425958 ']' 00:22:29.019 11:45:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.019 11:45:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:29.019 11:45:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:29.019 11:45:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:29.019 11:45:58 -- common/autotest_common.sh@10 -- # set +x 00:22:29.019 [2024-07-21 11:45:58.380465] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
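The two target IPs fall out of RDMA_IP_LIST with nothing more than head and tail, exactly as traced above; in isolation:

    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)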
00:22:29.019 [2024-07-21 11:45:58.380514] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:29.019 EAL: No free 2048 kB hugepages reported on node 1 00:22:29.277 [2024-07-21 11:45:58.465185] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:29.277 [2024-07-21 11:45:58.504185] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:29.277 [2024-07-21 11:45:58.504291] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:29.277 [2024-07-21 11:45:58.504301] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:29.277 [2024-07-21 11:45:58.504310] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:29.277 [2024-07-21 11:45:58.504355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:29.277 [2024-07-21 11:45:58.504464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:29.277 [2024-07-21 11:45:58.504485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:29.277 [2024-07-21 11:45:58.504487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.841 11:45:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:29.841 11:45:59 -- common/autotest_common.sh@852 -- # return 0 00:22:29.841 11:45:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:29.841 11:45:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:29.841 11:45:59 -- common/autotest_common.sh@10 -- # set +x 00:22:29.841 11:45:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:29.841 11:45:59 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:29.841 11:45:59 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:29.841 11:45:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.841 11:45:59 -- common/autotest_common.sh@10 -- # set +x 00:22:29.841 Malloc0 00:22:29.841 11:45:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.841 11:45:59 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:22:29.841 11:45:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.841 11:45:59 -- common/autotest_common.sh@10 -- # set +x 00:22:29.841 Delay0 00:22:29.841 11:45:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:30.098 11:45:59 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:30.098 11:45:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:30.098 11:45:59 -- common/autotest_common.sh@10 -- # set +x 00:22:30.098 [2024-07-21 11:45:59.292419] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x11d1580/0x130a080) succeed. 00:22:30.098 [2024-07-21 11:45:59.303606] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x12826f0/0x11e9f80) succeed. 
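The bdev stack just created, restated as direct rpc.py calls (the trace drives the same RPCs through the rpc_cmd wrapper over /var/tmp/spdk.sock; arguments copied from the trace). Delay0 wraps Malloc0 with 30 µs average and p99 latencies on both read and write, which the test later raises far past the initiator's 15 s I/O timeout:

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192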
00:22:30.098 11:45:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:30.098 11:45:59 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:30.098 11:45:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:30.098 11:45:59 -- common/autotest_common.sh@10 -- # set +x 00:22:30.098 11:45:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:30.098 11:45:59 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:30.098 11:45:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:30.098 11:45:59 -- common/autotest_common.sh@10 -- # set +x 00:22:30.098 11:45:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:30.098 11:45:59 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:30.098 11:45:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:30.098 11:45:59 -- common/autotest_common.sh@10 -- # set +x 00:22:30.098 [2024-07-21 11:45:59.449178] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:30.098 11:45:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:30.098 11:45:59 -- target/initiator_timeout.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:22:31.031 11:46:00 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:22:31.031 11:46:00 -- common/autotest_common.sh@1177 -- # local i=0 00:22:31.031 11:46:00 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:31.031 11:46:00 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:31.031 11:46:00 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:33.560 11:46:02 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:33.560 11:46:02 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:33.560 11:46:02 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:22:33.560 11:46:02 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:33.560 11:46:02 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:33.560 11:46:02 -- common/autotest_common.sh@1187 -- # return 0 00:22:33.560 11:46:02 -- target/initiator_timeout.sh@35 -- # fio_pid=2426538 00:22:33.560 11:46:02 -- target/initiator_timeout.sh@37 -- # sleep 3 00:22:33.560 11:46:02 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:22:33.560 [global] 00:22:33.560 thread=1 00:22:33.560 invalidate=1 00:22:33.560 rw=write 00:22:33.560 time_based=1 00:22:33.560 runtime=60 00:22:33.560 ioengine=libaio 00:22:33.560 direct=1 00:22:33.560 bs=4096 00:22:33.560 iodepth=1 00:22:33.560 norandommap=0 00:22:33.560 numjobs=1 00:22:33.560 00:22:33.560 verify_dump=1 00:22:33.560 verify_backlog=512 00:22:33.560 verify_state_save=0 00:22:33.560 do_verify=1 00:22:33.560 verify=crc32c-intel 00:22:33.560 [job0] 00:22:33.560 filename=/dev/nvme0n1 00:22:33.560 Could not set queue depth (nvme0n1) 00:22:33.560 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:33.560 fio-3.35 00:22:33.560 Starting 1 thread 00:22:36.088 11:46:05 -- 
target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:22:36.088 11:46:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:36.088 11:46:05 -- common/autotest_common.sh@10 -- # set +x 00:22:36.088 true 00:22:36.088 11:46:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:36.088 11:46:05 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:22:36.088 11:46:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:36.088 11:46:05 -- common/autotest_common.sh@10 -- # set +x 00:22:36.088 true 00:22:36.088 11:46:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:36.088 11:46:05 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:22:36.088 11:46:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:36.088 11:46:05 -- common/autotest_common.sh@10 -- # set +x 00:22:36.088 true 00:22:36.088 11:46:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:36.088 11:46:05 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:22:36.088 11:46:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:36.088 11:46:05 -- common/autotest_common.sh@10 -- # set +x 00:22:36.088 true 00:22:36.088 11:46:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:36.088 11:46:05 -- target/initiator_timeout.sh@45 -- # sleep 3 00:22:39.360 11:46:08 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:22:39.360 11:46:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:39.360 11:46:08 -- common/autotest_common.sh@10 -- # set +x 00:22:39.360 true 00:22:39.360 11:46:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:39.360 11:46:08 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:22:39.360 11:46:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:39.360 11:46:08 -- common/autotest_common.sh@10 -- # set +x 00:22:39.360 true 00:22:39.360 11:46:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:39.360 11:46:08 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:22:39.360 11:46:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:39.360 11:46:08 -- common/autotest_common.sh@10 -- # set +x 00:22:39.360 true 00:22:39.360 11:46:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:39.360 11:46:08 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:22:39.360 11:46:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:39.360 11:46:08 -- common/autotest_common.sh@10 -- # set +x 00:22:39.360 true 00:22:39.360 11:46:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:39.360 11:46:08 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:22:39.360 11:46:08 -- target/initiator_timeout.sh@54 -- # wait 2426538 00:23:35.593 00:23:35.593 job0: (groupid=0, jobs=1): err= 0: pid=2426786: Sun Jul 21 11:47:02 2024 00:23:35.593 read: IOPS=1254, BW=5017KiB/s (5137kB/s)(294MiB/60000msec) 00:23:35.593 slat (usec): min=8, max=11792, avg= 9.40, stdev=42.97 00:23:35.593 clat (usec): min=76, max=42610k, avg=670.56, stdev=155330.21 00:23:35.593 lat (usec): min=95, max=42610k, avg=679.95, stdev=155330.22 00:23:35.593 clat percentiles (usec): 00:23:35.593 | 1.00th=[ 92], 5.00th=[ 95], 10.00th=[ 96], 20.00th=[ 99], 00:23:35.593 | 30.00th=[ 101], 40.00th=[ 102], 50.00th=[ 104], 
60.00th=[ 106], 00:23:35.593 | 70.00th=[ 108], 80.00th=[ 111], 90.00th=[ 114], 95.00th=[ 117], 00:23:35.593 | 99.00th=[ 122], 99.50th=[ 123], 99.90th=[ 129], 99.95th=[ 135], 00:23:35.593 | 99.99th=[ 204] 00:23:35.593 write: IOPS=1254, BW=5018KiB/s (5138kB/s)(294MiB/60000msec); 0 zone resets 00:23:35.593 slat (usec): min=9, max=1036, avg=11.43, stdev= 4.37 00:23:35.593 clat (usec): min=72, max=303, avg=101.37, stdev= 6.85 00:23:35.593 lat (usec): min=94, max=1135, avg=112.79, stdev= 8.19 00:23:35.593 clat percentiles (usec): 00:23:35.593 | 1.00th=[ 89], 5.00th=[ 92], 10.00th=[ 93], 20.00th=[ 96], 00:23:35.593 | 30.00th=[ 98], 40.00th=[ 100], 50.00th=[ 101], 60.00th=[ 103], 00:23:35.593 | 70.00th=[ 105], 80.00th=[ 108], 90.00th=[ 111], 95.00th=[ 114], 00:23:35.593 | 99.00th=[ 119], 99.50th=[ 121], 99.90th=[ 128], 99.95th=[ 133], 00:23:35.593 | 99.99th=[ 198] 00:23:35.593 bw ( KiB/s): min= 4096, max=19408, per=100.00%, avg=16807.54, stdev=2543.52, samples=35 00:23:35.593 iops : min= 1024, max= 4854, avg=4201.89, stdev=635.92, samples=35 00:23:35.593 lat (usec) : 100=35.41%, 250=64.59%, 500=0.01% 00:23:35.593 lat (msec) : >=2000=0.01% 00:23:35.593 cpu : usr=1.75%, sys=3.41%, ctx=150520, majf=0, minf=144 00:23:35.593 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:35.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.593 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.593 issued rwts: total=75250,75264,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:35.593 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:35.593 00:23:35.593 Run status group 0 (all jobs): 00:23:35.593 READ: bw=5017KiB/s (5137kB/s), 5017KiB/s-5017KiB/s (5137kB/s-5137kB/s), io=294MiB (308MB), run=60000-60000msec 00:23:35.593 WRITE: bw=5018KiB/s (5138kB/s), 5018KiB/s-5018KiB/s (5138kB/s-5138kB/s), io=294MiB (308MB), run=60000-60000msec 00:23:35.593 00:23:35.593 Disk stats (read/write): 00:23:35.593 nvme0n1: ios=75094/74902, merge=0/0, ticks=7137/7144, in_queue=14281, util=99.86% 00:23:35.593 11:47:02 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:35.593 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:35.593 11:47:03 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:35.593 11:47:03 -- common/autotest_common.sh@1198 -- # local i=0 00:23:35.593 11:47:03 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:35.593 11:47:03 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:35.593 11:47:03 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:35.593 11:47:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:35.593 11:47:03 -- common/autotest_common.sh@1210 -- # return 0 00:23:35.593 11:47:03 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:23:35.593 11:47:03 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:23:35.593 nvmf hotplug test: fio successful as expected 00:23:35.593 11:47:03 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:35.593 11:47:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:35.593 11:47:03 -- common/autotest_common.sh@10 -- # set +x 00:23:35.593 11:47:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:35.593 11:47:03 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:23:35.593 
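A quick cross-check of the fio summary above: 75250 reads of 4 KiB over the 60 s run reproduce the reported read bandwidth.

    echo $(( 75250 * 4096 / 60 / 1024 ))   # 5016, i.e. fio's ~5017KiB/s after rounding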
11:47:03 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:23:35.593 11:47:03 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:23:35.593 11:47:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:35.593 11:47:03 -- nvmf/common.sh@116 -- # sync 00:23:35.593 11:47:03 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:23:35.593 11:47:03 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:23:35.593 11:47:03 -- nvmf/common.sh@119 -- # set +e 00:23:35.593 11:47:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:35.593 11:47:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:23:35.593 rmmod nvme_rdma 00:23:35.593 rmmod nvme_fabrics 00:23:35.593 11:47:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:35.593 11:47:04 -- nvmf/common.sh@123 -- # set -e 00:23:35.593 11:47:04 -- nvmf/common.sh@124 -- # return 0 00:23:35.594 11:47:04 -- nvmf/common.sh@477 -- # '[' -n 2425958 ']' 00:23:35.594 11:47:04 -- nvmf/common.sh@478 -- # killprocess 2425958 00:23:35.594 11:47:04 -- common/autotest_common.sh@926 -- # '[' -z 2425958 ']' 00:23:35.594 11:47:04 -- common/autotest_common.sh@930 -- # kill -0 2425958 00:23:35.594 11:47:04 -- common/autotest_common.sh@931 -- # uname 00:23:35.594 11:47:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:35.594 11:47:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2425958 00:23:35.594 11:47:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:35.594 11:47:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:35.594 11:47:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2425958' 00:23:35.594 killing process with pid 2425958 00:23:35.594 11:47:04 -- common/autotest_common.sh@945 -- # kill 2425958 00:23:35.594 11:47:04 -- common/autotest_common.sh@950 -- # wait 2425958 00:23:35.594 11:47:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:35.594 11:47:04 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:23:35.594 00:23:35.594 real 1m14.375s 00:23:35.594 user 4m34.045s 00:23:35.594 sys 0m9.058s 00:23:35.594 11:47:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:35.594 11:47:04 -- common/autotest_common.sh@10 -- # set +x 00:23:35.594 ************************************ 00:23:35.594 END TEST nvmf_initiator_timeout 00:23:35.594 ************************************ 00:23:35.594 11:47:04 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:23:35.594 11:47:04 -- nvmf/nvmf.sh@70 -- # '[' rdma = tcp ']' 00:23:35.594 11:47:04 -- nvmf/nvmf.sh@76 -- # [[ '' -eq 1 ]] 00:23:35.594 11:47:04 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:23:35.594 11:47:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:35.594 11:47:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:35.594 11:47:04 -- common/autotest_common.sh@10 -- # set +x 00:23:35.594 ************************************ 00:23:35.594 START TEST nvmf_shutdown 00:23:35.594 ************************************ 00:23:35.594 11:47:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:23:35.594 * Looking for test storage... 
00:23:35.594 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:23:35.594 11:47:04 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:35.594 11:47:04 -- nvmf/common.sh@7 -- # uname -s 00:23:35.594 11:47:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:35.594 11:47:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:35.594 11:47:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:35.594 11:47:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:35.594 11:47:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:35.594 11:47:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:35.594 11:47:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:35.594 11:47:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:35.594 11:47:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:35.594 11:47:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:35.594 11:47:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:35.594 11:47:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:35.594 11:47:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:35.594 11:47:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:35.594 11:47:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:35.594 11:47:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:35.594 11:47:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:35.594 11:47:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:35.594 11:47:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:35.594 11:47:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.594 11:47:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.594 11:47:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.594 11:47:04 -- paths/export.sh@5 -- # export PATH 00:23:35.594 11:47:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.594 11:47:04 -- nvmf/common.sh@46 -- # : 0 00:23:35.594 11:47:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:35.594 11:47:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:35.594 11:47:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:35.594 11:47:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:35.594 11:47:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:35.594 11:47:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:35.594 11:47:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:35.594 11:47:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:35.594 11:47:04 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:35.594 11:47:04 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:35.594 11:47:04 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:35.594 11:47:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:35.594 11:47:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:35.594 11:47:04 -- common/autotest_common.sh@10 -- # set +x 00:23:35.594 ************************************ 00:23:35.594 START TEST nvmf_shutdown_tc1 00:23:35.594 ************************************ 00:23:35.594 11:47:04 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc1 00:23:35.594 11:47:04 -- target/shutdown.sh@74 -- # starttarget 00:23:35.594 11:47:04 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:35.594 11:47:04 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:23:35.594 11:47:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:35.594 11:47:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:35.594 11:47:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:35.594 11:47:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:35.594 11:47:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.594 11:47:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:35.594 11:47:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.594 11:47:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:35.594 11:47:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:35.594 11:47:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:35.594 11:47:04 -- common/autotest_common.sh@10 -- # set +x 00:23:43.697 11:47:12 -- nvmf/common.sh@288 -- # 
local intel=0x8086 mellanox=0x15b3 pci 00:23:43.697 11:47:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:43.697 11:47:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:43.697 11:47:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:43.697 11:47:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:43.697 11:47:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:43.697 11:47:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:43.697 11:47:12 -- nvmf/common.sh@294 -- # net_devs=() 00:23:43.697 11:47:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:43.697 11:47:12 -- nvmf/common.sh@295 -- # e810=() 00:23:43.697 11:47:12 -- nvmf/common.sh@295 -- # local -ga e810 00:23:43.697 11:47:12 -- nvmf/common.sh@296 -- # x722=() 00:23:43.697 11:47:12 -- nvmf/common.sh@296 -- # local -ga x722 00:23:43.697 11:47:12 -- nvmf/common.sh@297 -- # mlx=() 00:23:43.697 11:47:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:43.697 11:47:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:43.697 11:47:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:43.697 11:47:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:43.697 11:47:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:43.697 11:47:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:43.697 11:47:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:43.697 11:47:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:43.697 11:47:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:43.697 11:47:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:43.697 11:47:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:43.697 11:47:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:43.697 11:47:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:43.697 11:47:12 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:23:43.697 11:47:12 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:23:43.697 11:47:12 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:23:43.697 11:47:12 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:23:43.697 11:47:12 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:23:43.697 11:47:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:43.697 11:47:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:43.697 11:47:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:43.697 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:43.697 11:47:12 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:43.697 11:47:12 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:43.698 11:47:12 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:43.698 11:47:12 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:43.698 11:47:12 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:43.698 11:47:12 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:43.698 11:47:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:43.698 11:47:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:43.698 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:43.698 11:47:12 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:43.698 11:47:12 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:43.698 11:47:12 -- nvmf/common.sh@349 -- # [[ 0x1015 == 
\0\x\1\0\1\7 ]] 00:23:43.698 11:47:12 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:43.698 11:47:12 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:43.698 11:47:12 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:43.698 11:47:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:43.698 11:47:12 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:23:43.698 11:47:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:43.698 11:47:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:43.698 11:47:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:43.698 11:47:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:43.698 11:47:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:43.698 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:43.698 11:47:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:43.698 11:47:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:43.698 11:47:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:43.698 11:47:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:43.698 11:47:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:43.698 11:47:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:43.698 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:43.698 11:47:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:43.698 11:47:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:43.698 11:47:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:43.698 11:47:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:43.698 11:47:12 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:23:43.698 11:47:12 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:23:43.698 11:47:12 -- nvmf/common.sh@408 -- # rdma_device_init 00:23:43.698 11:47:12 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:23:43.698 11:47:12 -- nvmf/common.sh@57 -- # uname 00:23:43.698 11:47:12 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:23:43.698 11:47:12 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:23:43.698 11:47:12 -- nvmf/common.sh@62 -- # modprobe ib_core 00:23:43.698 11:47:12 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:23:43.698 11:47:12 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:23:43.698 11:47:12 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:23:43.698 11:47:12 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:23:43.698 11:47:12 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:23:43.698 11:47:12 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:23:43.698 11:47:12 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:43.698 11:47:12 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:23:43.698 11:47:12 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:43.698 11:47:12 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:43.698 11:47:12 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:43.698 11:47:12 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:43.698 11:47:12 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:43.698 11:47:12 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:43.698 11:47:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:43.698 11:47:12 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:43.698 11:47:12 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:43.698 11:47:12 -- nvmf/common.sh@104 -- # continue 2 
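rdma_device_init above brings up the kernel RDMA stack before any interface is probed; the traced module loads, gathered into one loop (a sketch of load_ib_rdma_modules with no error handling):

    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done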
00:23:43.698 11:47:12 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:43.698 11:47:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:43.698 11:47:12 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:43.698 11:47:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:43.698 11:47:12 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:43.698 11:47:12 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:43.698 11:47:12 -- nvmf/common.sh@104 -- # continue 2 00:23:43.698 11:47:12 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:43.698 11:47:12 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:23:43.698 11:47:12 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:43.698 11:47:12 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:43.698 11:47:12 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:43.698 11:47:12 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:43.698 11:47:12 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:23:43.698 11:47:12 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:23:43.698 11:47:12 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:23:43.698 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:43.698 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:43.698 altname enp217s0f0np0 00:23:43.698 altname ens818f0np0 00:23:43.698 inet 192.168.100.8/24 scope global mlx_0_0 00:23:43.698 valid_lft forever preferred_lft forever 00:23:43.698 11:47:12 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:43.698 11:47:12 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:23:43.698 11:47:12 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:43.698 11:47:12 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:43.698 11:47:12 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:43.698 11:47:12 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:43.698 11:47:12 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:23:43.698 11:47:12 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:23:43.698 11:47:12 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:23:43.698 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:43.698 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:43.698 altname enp217s0f1np1 00:23:43.698 altname ens818f1np1 00:23:43.698 inet 192.168.100.9/24 scope global mlx_0_1 00:23:43.698 valid_lft forever preferred_lft forever 00:23:43.698 11:47:12 -- nvmf/common.sh@410 -- # return 0 00:23:43.698 11:47:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:43.698 11:47:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:43.698 11:47:12 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:23:43.698 11:47:12 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:23:43.698 11:47:12 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:23:43.698 11:47:12 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:43.698 11:47:12 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:43.698 11:47:12 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:43.698 11:47:12 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:43.698 11:47:12 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:43.698 11:47:12 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:43.698 11:47:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:43.698 11:47:12 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:43.698 11:47:12 -- 
nvmf/common.sh@103 -- # echo mlx_0_0 00:23:43.698 11:47:12 -- nvmf/common.sh@104 -- # continue 2 00:23:43.698 11:47:12 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:43.698 11:47:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:43.698 11:47:12 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:43.698 11:47:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:43.698 11:47:12 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:43.698 11:47:12 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:43.698 11:47:12 -- nvmf/common.sh@104 -- # continue 2 00:23:43.698 11:47:12 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:43.698 11:47:12 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:23:43.698 11:47:12 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:43.698 11:47:12 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:43.698 11:47:12 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:43.698 11:47:12 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:43.698 11:47:12 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:43.698 11:47:12 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:23:43.698 11:47:12 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:43.698 11:47:12 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:43.698 11:47:12 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:43.698 11:47:12 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:43.698 11:47:12 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:23:43.698 192.168.100.9' 00:23:43.698 11:47:12 -- nvmf/common.sh@445 -- # head -n 1 00:23:43.698 11:47:12 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:23:43.698 192.168.100.9' 00:23:43.698 11:47:12 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:43.698 11:47:12 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:43.698 192.168.100.9' 00:23:43.698 11:47:12 -- nvmf/common.sh@446 -- # tail -n +2 00:23:43.698 11:47:12 -- nvmf/common.sh@446 -- # head -n 1 00:23:43.698 11:47:13 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:43.698 11:47:13 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:23:43.698 11:47:13 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:43.698 11:47:13 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:23:43.698 11:47:13 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:23:43.698 11:47:13 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:23:43.698 11:47:13 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:43.698 11:47:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:43.698 11:47:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:43.698 11:47:13 -- common/autotest_common.sh@10 -- # set +x 00:23:43.698 11:47:13 -- nvmf/common.sh@469 -- # nvmfpid=2441192 00:23:43.698 11:47:13 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:43.698 11:47:13 -- nvmf/common.sh@470 -- # waitforlisten 2441192 00:23:43.698 11:47:13 -- common/autotest_common.sh@819 -- # '[' -z 2441192 ']' 00:23:43.698 11:47:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:43.698 11:47:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:43.698 11:47:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:43.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:43.698 11:47:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:43.698 11:47:13 -- common/autotest_common.sh@10 -- # set +x 00:23:43.698 [2024-07-21 11:47:13.082283] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:23:43.698 [2024-07-21 11:47:13.082336] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:43.955 EAL: No free 2048 kB hugepages reported on node 1 00:23:43.955 [2024-07-21 11:47:13.166611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:43.955 [2024-07-21 11:47:13.204959] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:43.955 [2024-07-21 11:47:13.205064] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:43.955 [2024-07-21 11:47:13.205074] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:43.955 [2024-07-21 11:47:13.205083] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:43.955 [2024-07-21 11:47:13.205189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:43.955 [2024-07-21 11:47:13.205218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:43.955 [2024-07-21 11:47:13.205239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:43.955 [2024-07-21 11:47:13.205240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:44.516 11:47:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:44.516 11:47:13 -- common/autotest_common.sh@852 -- # return 0 00:23:44.516 11:47:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:44.516 11:47:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:44.516 11:47:13 -- common/autotest_common.sh@10 -- # set +x 00:23:44.516 11:47:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:44.516 11:47:13 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:44.516 11:47:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:44.516 11:47:13 -- common/autotest_common.sh@10 -- # set +x 00:23:44.773 [2024-07-21 11:47:13.962236] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf337a0/0xf37c90) succeed. 00:23:44.773 [2024-07-21 11:47:13.972691] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf34d90/0xf79320) succeed. 
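shutdown.sh@18/@20 above start the nvmf target and create the RDMA transport over its RPC socket; the two create_ib_device notices confirm both mlx5 ports were claimed. Issued by hand, the transport call would look roughly like this, assuming a target on the default /var/tmp/spdk.sock:

    # rpc_cmd forwards its arguments to SPDK's scripts/rpc.py; -u caps the
    # I/O unit size at 8 KiB and --num-shared-buffers sizes the shared
    # receive buffer pool, matching the options in the trace.
    ./scripts/rpc.py nvmf_create_transport -t rdma -u 8192 --num-shared-buffers 1024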
00:23:44.773 11:47:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:44.773 11:47:14 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:44.773 11:47:14 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:44.773 11:47:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:44.773 11:47:14 -- common/autotest_common.sh@10 -- # set +x 00:23:44.773 11:47:14 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:44.773 11:47:14 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:44.773 11:47:14 -- target/shutdown.sh@28 -- # cat 00:23:44.773 11:47:14 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:44.773 11:47:14 -- target/shutdown.sh@28 -- # cat 00:23:44.773 11:47:14 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:44.773 11:47:14 -- target/shutdown.sh@28 -- # cat 00:23:44.773 11:47:14 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:44.773 11:47:14 -- target/shutdown.sh@28 -- # cat 00:23:44.773 11:47:14 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:44.773 11:47:14 -- target/shutdown.sh@28 -- # cat 00:23:44.773 11:47:14 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:44.773 11:47:14 -- target/shutdown.sh@28 -- # cat 00:23:44.773 11:47:14 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:44.773 11:47:14 -- target/shutdown.sh@28 -- # cat 00:23:44.773 11:47:14 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:44.773 11:47:14 -- target/shutdown.sh@28 -- # cat 00:23:44.773 11:47:14 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:44.773 11:47:14 -- target/shutdown.sh@28 -- # cat 00:23:44.773 11:47:14 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:44.773 11:47:14 -- target/shutdown.sh@28 -- # cat 00:23:44.773 11:47:14 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:44.773 11:47:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:44.773 11:47:14 -- common/autotest_common.sh@10 -- # set +x 00:23:44.773 Malloc1 00:23:45.029 [2024-07-21 11:47:14.199064] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:45.029 Malloc2 00:23:45.029 Malloc3 00:23:45.029 Malloc4 00:23:45.029 Malloc5 00:23:45.029 Malloc6 00:23:45.029 Malloc7 00:23:45.287 Malloc8 00:23:45.287 Malloc9 00:23:45.287 Malloc10 00:23:45.287 11:47:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:45.287 11:47:14 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:45.287 11:47:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:45.287 11:47:14 -- common/autotest_common.sh@10 -- # set +x 00:23:45.287 11:47:14 -- target/shutdown.sh@78 -- # perfpid=2441504 00:23:45.287 11:47:14 -- target/shutdown.sh@79 -- # waitforlisten 2441504 /var/tmp/bdevperf.sock 00:23:45.287 11:47:14 -- common/autotest_common.sh@819 -- # '[' -z 2441504 ']' 00:23:45.287 11:47:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:45.287 11:47:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:45.287 11:47:14 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:45.287 11:47:14 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:45.287 11:47:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:45.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:45.287 11:47:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:45.287 11:47:14 -- nvmf/common.sh@520 -- # config=() 00:23:45.287 11:47:14 -- common/autotest_common.sh@10 -- # set +x 00:23:45.287 11:47:14 -- nvmf/common.sh@520 -- # local subsystem config 00:23:45.287 11:47:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:45.287 11:47:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:45.287 { 00:23:45.287 "params": { 00:23:45.287 "name": "Nvme$subsystem", 00:23:45.287 "trtype": "$TEST_TRANSPORT", 00:23:45.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.287 "adrfam": "ipv4", 00:23:45.287 "trsvcid": "$NVMF_PORT", 00:23:45.287 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.287 "hdgst": ${hdgst:-false}, 00:23:45.287 "ddgst": ${ddgst:-false} 00:23:45.287 }, 00:23:45.287 "method": "bdev_nvme_attach_controller" 00:23:45.287 } 00:23:45.287 EOF 00:23:45.287 )") 00:23:45.287 11:47:14 -- nvmf/common.sh@542 -- # cat 00:23:45.287 11:47:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:45.287 11:47:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:45.287 { 00:23:45.287 "params": { 00:23:45.287 "name": "Nvme$subsystem", 00:23:45.287 "trtype": "$TEST_TRANSPORT", 00:23:45.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.287 "adrfam": "ipv4", 00:23:45.287 "trsvcid": "$NVMF_PORT", 00:23:45.287 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.287 "hdgst": ${hdgst:-false}, 00:23:45.287 "ddgst": ${ddgst:-false} 00:23:45.287 }, 00:23:45.287 "method": "bdev_nvme_attach_controller" 00:23:45.287 } 00:23:45.287 EOF 00:23:45.287 )") 00:23:45.287 11:47:14 -- nvmf/common.sh@542 -- # cat 00:23:45.287 11:47:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:45.287 11:47:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:45.287 { 00:23:45.287 "params": { 00:23:45.287 "name": "Nvme$subsystem", 00:23:45.287 "trtype": "$TEST_TRANSPORT", 00:23:45.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.287 "adrfam": "ipv4", 00:23:45.287 "trsvcid": "$NVMF_PORT", 00:23:45.287 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.287 "hdgst": ${hdgst:-false}, 00:23:45.287 "ddgst": ${ddgst:-false} 00:23:45.287 }, 00:23:45.287 "method": "bdev_nvme_attach_controller" 00:23:45.287 } 00:23:45.287 EOF 00:23:45.287 )") 00:23:45.287 11:47:14 -- nvmf/common.sh@542 -- # cat 00:23:45.287 11:47:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:45.287 11:47:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:45.287 { 00:23:45.287 "params": { 00:23:45.287 "name": "Nvme$subsystem", 00:23:45.287 "trtype": "$TEST_TRANSPORT", 00:23:45.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.287 "adrfam": "ipv4", 00:23:45.287 "trsvcid": "$NVMF_PORT", 00:23:45.287 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.287 "hdgst": ${hdgst:-false}, 00:23:45.287 "ddgst": ${ddgst:-false} 00:23:45.287 }, 00:23:45.287 "method": "bdev_nvme_attach_controller" 00:23:45.287 } 00:23:45.287 EOF 00:23:45.287 )") 00:23:45.287 11:47:14 -- nvmf/common.sh@542 -- # cat 00:23:45.287 11:47:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 
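Each loop pass above (nvmf/common.sh@522/@542) captures one controller stanza into the config array, so $subsystem and the $TEST_TRANSPORT/$NVMF_FIRST_TARGET_IP placeholders expand only when the stanza is generated. A trimmed sketch of the pattern, using printf in place of the traced heredoc (the [] wrapper is added here only so jq can validate the joined output):

    gen_json() {
        local subsystem config=()
        for subsystem in "${@:-1}"; do
            # one JSON stanza per subsystem, as at nvmf/common.sh@542
            config+=("$(printf '{"params": {"name": "Nvme%s", "trsvcid": "4420"}, "method": "bdev_nvme_attach_controller"}' "$subsystem")")
        done
        local IFS=,                      # nvmf/common.sh@545: join with commas
        printf '[%s]\n' "${config[*]}"
    }
    gen_json 1 2 3 | jq .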
00:23:45.287 11:47:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:45.287 { 00:23:45.287 "params": { 00:23:45.287 "name": "Nvme$subsystem", 00:23:45.287 "trtype": "$TEST_TRANSPORT", 00:23:45.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.287 "adrfam": "ipv4", 00:23:45.287 "trsvcid": "$NVMF_PORT", 00:23:45.287 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.287 "hdgst": ${hdgst:-false}, 00:23:45.287 "ddgst": ${ddgst:-false} 00:23:45.287 }, 00:23:45.287 "method": "bdev_nvme_attach_controller" 00:23:45.287 } 00:23:45.287 EOF 00:23:45.287 )") 00:23:45.287 11:47:14 -- nvmf/common.sh@542 -- # cat 00:23:45.287 [2024-07-21 11:47:14.691239] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:23:45.287 [2024-07-21 11:47:14.691289] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:45.287 11:47:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:45.287 11:47:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:45.287 { 00:23:45.287 "params": { 00:23:45.287 "name": "Nvme$subsystem", 00:23:45.287 "trtype": "$TEST_TRANSPORT", 00:23:45.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.287 "adrfam": "ipv4", 00:23:45.287 "trsvcid": "$NVMF_PORT", 00:23:45.287 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.287 "hdgst": ${hdgst:-false}, 00:23:45.287 "ddgst": ${ddgst:-false} 00:23:45.287 }, 00:23:45.287 "method": "bdev_nvme_attach_controller" 00:23:45.287 } 00:23:45.287 EOF 00:23:45.287 )") 00:23:45.287 11:47:14 -- nvmf/common.sh@542 -- # cat 00:23:45.287 11:47:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:45.287 11:47:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:45.287 { 00:23:45.287 "params": { 00:23:45.287 "name": "Nvme$subsystem", 00:23:45.287 "trtype": "$TEST_TRANSPORT", 00:23:45.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.287 "adrfam": "ipv4", 00:23:45.287 "trsvcid": "$NVMF_PORT", 00:23:45.287 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.287 "hdgst": ${hdgst:-false}, 00:23:45.287 "ddgst": ${ddgst:-false} 00:23:45.287 }, 00:23:45.287 "method": "bdev_nvme_attach_controller" 00:23:45.287 } 00:23:45.287 EOF 00:23:45.287 )") 00:23:45.287 11:47:14 -- nvmf/common.sh@542 -- # cat 00:23:45.287 11:47:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:45.544 11:47:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:45.544 { 00:23:45.544 "params": { 00:23:45.544 "name": "Nvme$subsystem", 00:23:45.544 "trtype": "$TEST_TRANSPORT", 00:23:45.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.544 "adrfam": "ipv4", 00:23:45.544 "trsvcid": "$NVMF_PORT", 00:23:45.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.544 "hdgst": ${hdgst:-false}, 00:23:45.544 "ddgst": ${ddgst:-false} 00:23:45.544 }, 00:23:45.544 "method": "bdev_nvme_attach_controller" 00:23:45.544 } 00:23:45.544 EOF 00:23:45.544 )") 00:23:45.544 11:47:14 -- nvmf/common.sh@542 -- # cat 00:23:45.544 11:47:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:45.544 11:47:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:45.544 { 00:23:45.544 "params": { 00:23:45.544 "name": 
"Nvme$subsystem", 00:23:45.544 "trtype": "$TEST_TRANSPORT", 00:23:45.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.544 "adrfam": "ipv4", 00:23:45.544 "trsvcid": "$NVMF_PORT", 00:23:45.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.544 "hdgst": ${hdgst:-false}, 00:23:45.544 "ddgst": ${ddgst:-false} 00:23:45.544 }, 00:23:45.544 "method": "bdev_nvme_attach_controller" 00:23:45.544 } 00:23:45.544 EOF 00:23:45.544 )") 00:23:45.544 11:47:14 -- nvmf/common.sh@542 -- # cat 00:23:45.544 11:47:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:45.545 11:47:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:45.545 { 00:23:45.545 "params": { 00:23:45.545 "name": "Nvme$subsystem", 00:23:45.545 "trtype": "$TEST_TRANSPORT", 00:23:45.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.545 "adrfam": "ipv4", 00:23:45.545 "trsvcid": "$NVMF_PORT", 00:23:45.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.545 "hdgst": ${hdgst:-false}, 00:23:45.545 "ddgst": ${ddgst:-false} 00:23:45.545 }, 00:23:45.545 "method": "bdev_nvme_attach_controller" 00:23:45.545 } 00:23:45.545 EOF 00:23:45.545 )") 00:23:45.545 11:47:14 -- nvmf/common.sh@542 -- # cat 00:23:45.545 11:47:14 -- nvmf/common.sh@544 -- # jq . 00:23:45.545 EAL: No free 2048 kB hugepages reported on node 1 00:23:45.545 11:47:14 -- nvmf/common.sh@545 -- # IFS=, 00:23:45.545 11:47:14 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:45.545 "params": { 00:23:45.545 "name": "Nvme1", 00:23:45.545 "trtype": "rdma", 00:23:45.545 "traddr": "192.168.100.8", 00:23:45.545 "adrfam": "ipv4", 00:23:45.545 "trsvcid": "4420", 00:23:45.545 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:45.545 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:45.545 "hdgst": false, 00:23:45.545 "ddgst": false 00:23:45.545 }, 00:23:45.545 "method": "bdev_nvme_attach_controller" 00:23:45.545 },{ 00:23:45.545 "params": { 00:23:45.545 "name": "Nvme2", 00:23:45.545 "trtype": "rdma", 00:23:45.545 "traddr": "192.168.100.8", 00:23:45.545 "adrfam": "ipv4", 00:23:45.545 "trsvcid": "4420", 00:23:45.545 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:45.545 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:45.545 "hdgst": false, 00:23:45.545 "ddgst": false 00:23:45.545 }, 00:23:45.545 "method": "bdev_nvme_attach_controller" 00:23:45.545 },{ 00:23:45.545 "params": { 00:23:45.545 "name": "Nvme3", 00:23:45.545 "trtype": "rdma", 00:23:45.545 "traddr": "192.168.100.8", 00:23:45.545 "adrfam": "ipv4", 00:23:45.545 "trsvcid": "4420", 00:23:45.545 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:45.545 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:45.545 "hdgst": false, 00:23:45.545 "ddgst": false 00:23:45.545 }, 00:23:45.545 "method": "bdev_nvme_attach_controller" 00:23:45.545 },{ 00:23:45.545 "params": { 00:23:45.545 "name": "Nvme4", 00:23:45.545 "trtype": "rdma", 00:23:45.545 "traddr": "192.168.100.8", 00:23:45.545 "adrfam": "ipv4", 00:23:45.545 "trsvcid": "4420", 00:23:45.545 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:45.545 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:45.545 "hdgst": false, 00:23:45.545 "ddgst": false 00:23:45.545 }, 00:23:45.545 "method": "bdev_nvme_attach_controller" 00:23:45.545 },{ 00:23:45.545 "params": { 00:23:45.545 "name": "Nvme5", 00:23:45.545 "trtype": "rdma", 00:23:45.545 "traddr": "192.168.100.8", 00:23:45.545 "adrfam": "ipv4", 00:23:45.545 "trsvcid": "4420", 00:23:45.545 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:45.545 
"hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:45.545 "hdgst": false, 00:23:45.545 "ddgst": false 00:23:45.545 }, 00:23:45.545 "method": "bdev_nvme_attach_controller" 00:23:45.545 },{ 00:23:45.545 "params": { 00:23:45.545 "name": "Nvme6", 00:23:45.545 "trtype": "rdma", 00:23:45.545 "traddr": "192.168.100.8", 00:23:45.545 "adrfam": "ipv4", 00:23:45.545 "trsvcid": "4420", 00:23:45.545 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:45.545 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:45.545 "hdgst": false, 00:23:45.545 "ddgst": false 00:23:45.545 }, 00:23:45.545 "method": "bdev_nvme_attach_controller" 00:23:45.545 },{ 00:23:45.545 "params": { 00:23:45.545 "name": "Nvme7", 00:23:45.545 "trtype": "rdma", 00:23:45.545 "traddr": "192.168.100.8", 00:23:45.545 "adrfam": "ipv4", 00:23:45.545 "trsvcid": "4420", 00:23:45.545 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:45.545 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:45.545 "hdgst": false, 00:23:45.545 "ddgst": false 00:23:45.545 }, 00:23:45.545 "method": "bdev_nvme_attach_controller" 00:23:45.545 },{ 00:23:45.545 "params": { 00:23:45.545 "name": "Nvme8", 00:23:45.545 "trtype": "rdma", 00:23:45.545 "traddr": "192.168.100.8", 00:23:45.545 "adrfam": "ipv4", 00:23:45.545 "trsvcid": "4420", 00:23:45.545 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:45.545 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:45.545 "hdgst": false, 00:23:45.545 "ddgst": false 00:23:45.545 }, 00:23:45.545 "method": "bdev_nvme_attach_controller" 00:23:45.545 },{ 00:23:45.545 "params": { 00:23:45.545 "name": "Nvme9", 00:23:45.545 "trtype": "rdma", 00:23:45.545 "traddr": "192.168.100.8", 00:23:45.545 "adrfam": "ipv4", 00:23:45.545 "trsvcid": "4420", 00:23:45.545 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:45.545 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:45.545 "hdgst": false, 00:23:45.545 "ddgst": false 00:23:45.545 }, 00:23:45.545 "method": "bdev_nvme_attach_controller" 00:23:45.545 },{ 00:23:45.545 "params": { 00:23:45.545 "name": "Nvme10", 00:23:45.545 "trtype": "rdma", 00:23:45.545 "traddr": "192.168.100.8", 00:23:45.545 "adrfam": "ipv4", 00:23:45.545 "trsvcid": "4420", 00:23:45.545 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:45.545 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:45.545 "hdgst": false, 00:23:45.545 "ddgst": false 00:23:45.545 }, 00:23:45.545 "method": "bdev_nvme_attach_controller" 00:23:45.545 }' 00:23:45.545 [2024-07-21 11:47:14.778818] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.545 [2024-07-21 11:47:14.815114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.911 11:47:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:46.911 11:47:16 -- common/autotest_common.sh@852 -- # return 0 00:23:46.911 11:47:16 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:46.911 11:47:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:46.911 11:47:16 -- common/autotest_common.sh@10 -- # set +x 00:23:46.911 11:47:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:46.911 11:47:16 -- target/shutdown.sh@83 -- # kill -9 2441504 00:23:46.911 11:47:16 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:46.911 11:47:16 -- target/shutdown.sh@87 -- # sleep 1 00:23:47.839 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2441504 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:47.839 11:47:17 -- target/shutdown.sh@88 -- # kill -0 
2441192 00:23:47.839 11:47:17 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:47.839 11:47:17 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:47.839 11:47:17 -- nvmf/common.sh@520 -- # config=() 00:23:47.839 11:47:17 -- nvmf/common.sh@520 -- # local subsystem config 00:23:47.839 11:47:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:47.839 11:47:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:47.839 { 00:23:47.839 "params": { 00:23:47.839 "name": "Nvme$subsystem", 00:23:47.839 "trtype": "$TEST_TRANSPORT", 00:23:47.839 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:47.839 "adrfam": "ipv4", 00:23:47.839 "trsvcid": "$NVMF_PORT", 00:23:47.839 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:47.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:47.839 "hdgst": ${hdgst:-false}, 00:23:47.839 "ddgst": ${ddgst:-false} 00:23:47.839 }, 00:23:47.839 "method": "bdev_nvme_attach_controller" 00:23:47.839 } 00:23:47.839 EOF 00:23:47.839 )") 00:23:47.839 11:47:17 -- nvmf/common.sh@542 -- # cat 00:23:47.839 11:47:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:47.839 11:47:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:47.839 { 00:23:47.839 "params": { 00:23:47.839 "name": "Nvme$subsystem", 00:23:47.839 "trtype": "$TEST_TRANSPORT", 00:23:47.839 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:47.839 "adrfam": "ipv4", 00:23:47.839 "trsvcid": "$NVMF_PORT", 00:23:47.839 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:47.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:47.839 "hdgst": ${hdgst:-false}, 00:23:47.839 "ddgst": ${ddgst:-false} 00:23:47.839 }, 00:23:47.839 "method": "bdev_nvme_attach_controller" 00:23:47.839 } 00:23:47.839 EOF 00:23:47.839 )") 00:23:47.839 11:47:17 -- nvmf/common.sh@542 -- # cat 00:23:47.839 11:47:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:47.839 11:47:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:47.839 { 00:23:47.839 "params": { 00:23:47.839 "name": "Nvme$subsystem", 00:23:47.839 "trtype": "$TEST_TRANSPORT", 00:23:47.839 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:47.839 "adrfam": "ipv4", 00:23:47.839 "trsvcid": "$NVMF_PORT", 00:23:47.839 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:47.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:47.839 "hdgst": ${hdgst:-false}, 00:23:47.839 "ddgst": ${ddgst:-false} 00:23:47.839 }, 00:23:47.839 "method": "bdev_nvme_attach_controller" 00:23:47.839 } 00:23:47.839 EOF 00:23:47.839 )") 00:23:47.839 11:47:17 -- nvmf/common.sh@542 -- # cat 00:23:47.839 11:47:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:47.839 11:47:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:47.839 { 00:23:47.839 "params": { 00:23:47.839 "name": "Nvme$subsystem", 00:23:47.839 "trtype": "$TEST_TRANSPORT", 00:23:47.839 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:47.839 "adrfam": "ipv4", 00:23:47.839 "trsvcid": "$NVMF_PORT", 00:23:47.839 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:47.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:47.839 "hdgst": ${hdgst:-false}, 00:23:47.839 "ddgst": ${ddgst:-false} 00:23:47.839 }, 00:23:47.839 "method": "bdev_nvme_attach_controller" 00:23:47.839 } 00:23:47.839 EOF 00:23:47.839 )") 00:23:47.839 11:47:17 -- nvmf/common.sh@542 -- # cat 00:23:47.839 11:47:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:47.839 
11:47:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:47.839 { 00:23:47.839 "params": { 00:23:47.839 "name": "Nvme$subsystem", 00:23:47.839 "trtype": "$TEST_TRANSPORT", 00:23:47.839 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:47.839 "adrfam": "ipv4", 00:23:47.839 "trsvcid": "$NVMF_PORT", 00:23:47.839 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:47.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:47.839 "hdgst": ${hdgst:-false}, 00:23:47.839 "ddgst": ${ddgst:-false} 00:23:47.839 }, 00:23:47.839 "method": "bdev_nvme_attach_controller" 00:23:47.839 } 00:23:47.839 EOF 00:23:47.839 )") 00:23:47.839 11:47:17 -- nvmf/common.sh@542 -- # cat 00:23:47.839 [2024-07-21 11:47:17.228863] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:23:47.839 [2024-07-21 11:47:17.228912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2441829 ] 00:23:47.839 11:47:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:47.839 11:47:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:47.839 { 00:23:47.839 "params": { 00:23:47.839 "name": "Nvme$subsystem", 00:23:47.839 "trtype": "$TEST_TRANSPORT", 00:23:47.839 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:47.839 "adrfam": "ipv4", 00:23:47.839 "trsvcid": "$NVMF_PORT", 00:23:47.839 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:47.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:47.839 "hdgst": ${hdgst:-false}, 00:23:47.839 "ddgst": ${ddgst:-false} 00:23:47.839 }, 00:23:47.839 "method": "bdev_nvme_attach_controller" 00:23:47.839 } 00:23:47.839 EOF 00:23:47.839 )") 00:23:47.839 11:47:17 -- nvmf/common.sh@542 -- # cat 00:23:47.839 11:47:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:47.839 11:47:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:47.839 { 00:23:47.839 "params": { 00:23:47.839 "name": "Nvme$subsystem", 00:23:47.839 "trtype": "$TEST_TRANSPORT", 00:23:47.839 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:47.839 "adrfam": "ipv4", 00:23:47.839 "trsvcid": "$NVMF_PORT", 00:23:47.839 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:47.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:47.839 "hdgst": ${hdgst:-false}, 00:23:47.839 "ddgst": ${ddgst:-false} 00:23:47.839 }, 00:23:47.839 "method": "bdev_nvme_attach_controller" 00:23:47.839 } 00:23:47.839 EOF 00:23:47.839 )") 00:23:47.839 11:47:17 -- nvmf/common.sh@542 -- # cat 00:23:47.839 11:47:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:47.839 11:47:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:47.839 { 00:23:47.839 "params": { 00:23:47.839 "name": "Nvme$subsystem", 00:23:47.839 "trtype": "$TEST_TRANSPORT", 00:23:47.839 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:47.839 "adrfam": "ipv4", 00:23:47.839 "trsvcid": "$NVMF_PORT", 00:23:47.839 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:47.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:47.839 "hdgst": ${hdgst:-false}, 00:23:47.839 "ddgst": ${ddgst:-false} 00:23:47.839 }, 00:23:47.839 "method": "bdev_nvme_attach_controller" 00:23:47.839 } 00:23:47.839 EOF 00:23:47.839 )") 00:23:47.839 11:47:17 -- nvmf/common.sh@542 -- # cat 00:23:47.839 11:47:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:47.839 11:47:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:47.839 { 00:23:47.839 "params": { 00:23:47.839 "name": 
"Nvme$subsystem", 00:23:47.839 "trtype": "$TEST_TRANSPORT", 00:23:47.839 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:47.839 "adrfam": "ipv4", 00:23:47.839 "trsvcid": "$NVMF_PORT", 00:23:47.839 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:47.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:47.839 "hdgst": ${hdgst:-false}, 00:23:47.839 "ddgst": ${ddgst:-false} 00:23:47.839 }, 00:23:47.839 "method": "bdev_nvme_attach_controller" 00:23:47.839 } 00:23:47.839 EOF 00:23:47.839 )") 00:23:47.839 11:47:17 -- nvmf/common.sh@542 -- # cat 00:23:48.095 11:47:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:48.095 11:47:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:48.095 { 00:23:48.095 "params": { 00:23:48.095 "name": "Nvme$subsystem", 00:23:48.095 "trtype": "$TEST_TRANSPORT", 00:23:48.095 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.095 "adrfam": "ipv4", 00:23:48.095 "trsvcid": "$NVMF_PORT", 00:23:48.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.095 "hdgst": ${hdgst:-false}, 00:23:48.095 "ddgst": ${ddgst:-false} 00:23:48.095 }, 00:23:48.095 "method": "bdev_nvme_attach_controller" 00:23:48.095 } 00:23:48.095 EOF 00:23:48.095 )") 00:23:48.095 11:47:17 -- nvmf/common.sh@542 -- # cat 00:23:48.095 11:47:17 -- nvmf/common.sh@544 -- # jq . 00:23:48.095 11:47:17 -- nvmf/common.sh@545 -- # IFS=, 00:23:48.095 11:47:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:48.095 "params": { 00:23:48.095 "name": "Nvme1", 00:23:48.095 "trtype": "rdma", 00:23:48.095 "traddr": "192.168.100.8", 00:23:48.095 "adrfam": "ipv4", 00:23:48.095 "trsvcid": "4420", 00:23:48.095 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:48.095 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:48.095 "hdgst": false, 00:23:48.095 "ddgst": false 00:23:48.095 }, 00:23:48.095 "method": "bdev_nvme_attach_controller" 00:23:48.095 },{ 00:23:48.095 "params": { 00:23:48.095 "name": "Nvme2", 00:23:48.095 "trtype": "rdma", 00:23:48.095 "traddr": "192.168.100.8", 00:23:48.095 "adrfam": "ipv4", 00:23:48.095 "trsvcid": "4420", 00:23:48.095 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:48.095 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:48.095 "hdgst": false, 00:23:48.095 "ddgst": false 00:23:48.095 }, 00:23:48.095 "method": "bdev_nvme_attach_controller" 00:23:48.095 },{ 00:23:48.095 "params": { 00:23:48.095 "name": "Nvme3", 00:23:48.095 "trtype": "rdma", 00:23:48.095 "traddr": "192.168.100.8", 00:23:48.095 "adrfam": "ipv4", 00:23:48.096 "trsvcid": "4420", 00:23:48.096 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:48.096 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:48.096 "hdgst": false, 00:23:48.096 "ddgst": false 00:23:48.096 }, 00:23:48.096 "method": "bdev_nvme_attach_controller" 00:23:48.096 },{ 00:23:48.096 "params": { 00:23:48.096 "name": "Nvme4", 00:23:48.096 "trtype": "rdma", 00:23:48.096 "traddr": "192.168.100.8", 00:23:48.096 "adrfam": "ipv4", 00:23:48.096 "trsvcid": "4420", 00:23:48.096 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:48.096 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:48.096 "hdgst": false, 00:23:48.096 "ddgst": false 00:23:48.096 }, 00:23:48.096 "method": "bdev_nvme_attach_controller" 00:23:48.096 },{ 00:23:48.096 "params": { 00:23:48.096 "name": "Nvme5", 00:23:48.096 "trtype": "rdma", 00:23:48.096 "traddr": "192.168.100.8", 00:23:48.096 "adrfam": "ipv4", 00:23:48.096 "trsvcid": "4420", 00:23:48.096 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:48.096 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:48.096 "hdgst": 
false, 00:23:48.096 "ddgst": false 00:23:48.096 }, 00:23:48.096 "method": "bdev_nvme_attach_controller" 00:23:48.096 },{ 00:23:48.096 "params": { 00:23:48.096 "name": "Nvme6", 00:23:48.096 "trtype": "rdma", 00:23:48.096 "traddr": "192.168.100.8", 00:23:48.096 "adrfam": "ipv4", 00:23:48.096 "trsvcid": "4420", 00:23:48.096 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:48.096 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:48.096 "hdgst": false, 00:23:48.096 "ddgst": false 00:23:48.096 }, 00:23:48.096 "method": "bdev_nvme_attach_controller" 00:23:48.096 },{ 00:23:48.096 "params": { 00:23:48.096 "name": "Nvme7", 00:23:48.096 "trtype": "rdma", 00:23:48.096 "traddr": "192.168.100.8", 00:23:48.096 "adrfam": "ipv4", 00:23:48.096 "trsvcid": "4420", 00:23:48.096 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:48.096 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:48.096 "hdgst": false, 00:23:48.096 "ddgst": false 00:23:48.096 }, 00:23:48.096 "method": "bdev_nvme_attach_controller" 00:23:48.096 },{ 00:23:48.096 "params": { 00:23:48.096 "name": "Nvme8", 00:23:48.096 "trtype": "rdma", 00:23:48.096 "traddr": "192.168.100.8", 00:23:48.096 "adrfam": "ipv4", 00:23:48.096 "trsvcid": "4420", 00:23:48.096 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:48.096 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:48.096 "hdgst": false, 00:23:48.096 "ddgst": false 00:23:48.096 }, 00:23:48.096 "method": "bdev_nvme_attach_controller" 00:23:48.096 },{ 00:23:48.096 "params": { 00:23:48.096 "name": "Nvme9", 00:23:48.096 "trtype": "rdma", 00:23:48.096 "traddr": "192.168.100.8", 00:23:48.096 "adrfam": "ipv4", 00:23:48.096 "trsvcid": "4420", 00:23:48.096 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:48.096 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:48.096 "hdgst": false, 00:23:48.096 "ddgst": false 00:23:48.096 }, 00:23:48.096 "method": "bdev_nvme_attach_controller" 00:23:48.096 },{ 00:23:48.096 "params": { 00:23:48.096 "name": "Nvme10", 00:23:48.096 "trtype": "rdma", 00:23:48.096 "traddr": "192.168.100.8", 00:23:48.096 "adrfam": "ipv4", 00:23:48.096 "trsvcid": "4420", 00:23:48.096 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:48.096 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:48.096 "hdgst": false, 00:23:48.096 "ddgst": false 00:23:48.096 }, 00:23:48.096 "method": "bdev_nvme_attach_controller" 00:23:48.096 }' 00:23:48.096 EAL: No free 2048 kB hugepages reported on node 1 00:23:48.096 [2024-07-21 11:47:17.318748] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.096 [2024-07-21 11:47:17.355180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.023 Running I/O for 1 seconds... 
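With all ten stanzas resolved, bdevperf starts the timed pass against the attached controllers. The invocation pattern, reduced to its essentials (paths relative to the spdk checkout):

    # --json /dev/fd/62 in the trace is process substitution: the generated
    # config reaches bdevperf on an anonymous fd without touching disk.
    # -q 64: queue depth; -o 65536: 64 KiB I/Os; -w verify: write, read
    # back and compare; -t 1: run for one second.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 1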
00:23:49.978
00:23:49.978                                                                     Latency(us)
00:23:49.978 Device Information           : runtime(s)     IOPS    MiB/s   Fail/s   TO/s     Average        min        max
00:23:49.978 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:49.978      Verification LBA range: start 0x0 length 0x400
00:23:49.978      Nvme1n1                 :       1.10   733.42   45.84     0.00   0.00    86287.90    7444.89  121634.82
00:23:49.978 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:49.978      Verification LBA range: start 0x0 length 0x400
00:23:49.978      Nvme2n1                 :       1.11   746.32   46.65     0.00   0.00    84163.00    7707.03   75497.47
00:23:49.978 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:49.978      Verification LBA range: start 0x0 length 0x400
00:23:49.978      Nvme3n1                 :       1.11   745.65   46.60     0.00   0.00    83700.93    7916.75   74239.18
00:23:49.978 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:49.978      Verification LBA range: start 0x0 length 0x400
00:23:49.978      Nvme4n1                 :       1.11   744.98   46.56     0.00   0.00    83306.15    8074.04   72561.46
00:23:49.978 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:49.978      Verification LBA range: start 0x0 length 0x400
00:23:49.978      Nvme5n1                 :       1.11   744.31   46.52     0.00   0.00    82912.33    8283.75   71303.17
00:23:49.978 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:49.978      Verification LBA range: start 0x0 length 0x400
00:23:49.978      Nvme6n1                 :       1.11   743.64   46.48     0.00   0.00    82478.54    8493.47   70883.74
00:23:49.978 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:49.978      Verification LBA range: start 0x0 length 0x400
00:23:49.978      Nvme7n1                 :       1.11   742.97   46.44     0.00   0.00    82061.59    8703.18   72561.46
00:23:49.978 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:49.978      Verification LBA range: start 0x0 length 0x400
00:23:49.978      Nvme8n1                 :       1.11   742.29   46.39     0.00   0.00    81636.12    8912.90   74239.18
00:23:49.978 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:49.978      Verification LBA range: start 0x0 length 0x400
00:23:49.978      Nvme9n1                 :       1.11   741.63   46.35     0.00   0.00    81220.12    9122.61   75497.47
00:23:49.978 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:49.978      Verification LBA range: start 0x0 length 0x400
00:23:49.978      Nvme10n1                :       1.11   547.91   34.24     0.00   0.00   109079.32    7654.60  333866.60
00:23:49.978 ===================================================================================================================
00:23:49.978 Total                       :              7233.14  452.07     0.00   0.00    85056.31    7444.89  333866.60
00:23:50.254 11:47:19 -- target/shutdown.sh@93 -- # stoptarget 00:23:50.254 11:47:19 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:50.254 11:47:19 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:50.254 11:47:19 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:50.254 11:47:19 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:50.254 11:47:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:50.254 11:47:19 -- nvmf/common.sh@116 -- # sync 00:23:50.254 11:47:19 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:23:50.254 11:47:19 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:23:50.254 11:47:19 -- nvmf/common.sh@119 -- # set +e 00:23:50.254 11:47:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:50.254 11:47:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:23:50.254 rmmod nvme_rdma 00:23:50.254 rmmod
nvme_fabrics 00:23:50.254 11:47:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:50.511 11:47:19 -- nvmf/common.sh@123 -- # set -e 00:23:50.511 11:47:19 -- nvmf/common.sh@124 -- # return 0 00:23:50.511 11:47:19 -- nvmf/common.sh@477 -- # '[' -n 2441192 ']' 00:23:50.511 11:47:19 -- nvmf/common.sh@478 -- # killprocess 2441192 00:23:50.511 11:47:19 -- common/autotest_common.sh@926 -- # '[' -z 2441192 ']' 00:23:50.511 11:47:19 -- common/autotest_common.sh@930 -- # kill -0 2441192 00:23:50.511 11:47:19 -- common/autotest_common.sh@931 -- # uname 00:23:50.511 11:47:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:50.511 11:47:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2441192 00:23:50.511 11:47:19 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:23:50.511 11:47:19 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:23:50.511 11:47:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2441192' 00:23:50.511 killing process with pid 2441192 00:23:50.511 11:47:19 -- common/autotest_common.sh@945 -- # kill 2441192 00:23:50.511 11:47:19 -- common/autotest_common.sh@950 -- # wait 2441192 00:23:50.769 11:47:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:50.769 11:47:20 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:23:50.769 00:23:50.769 real 0m15.660s 00:23:50.769 user 0m33.540s 00:23:50.769 sys 0m7.617s 00:23:50.769 11:47:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:50.769 11:47:20 -- common/autotest_common.sh@10 -- # set +x 00:23:50.769 ************************************ 00:23:50.769 END TEST nvmf_shutdown_tc1 00:23:50.769 ************************************ 00:23:51.026 11:47:20 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:51.026 11:47:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:51.026 11:47:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:51.026 11:47:20 -- common/autotest_common.sh@10 -- # set +x 00:23:51.026 ************************************ 00:23:51.026 START TEST nvmf_shutdown_tc2 00:23:51.026 ************************************ 00:23:51.026 11:47:20 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc2 00:23:51.026 11:47:20 -- target/shutdown.sh@98 -- # starttarget 00:23:51.026 11:47:20 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:51.026 11:47:20 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:23:51.026 11:47:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:51.026 11:47:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:51.026 11:47:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:51.026 11:47:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:51.026 11:47:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.026 11:47:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:51.026 11:47:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.026 11:47:20 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:51.026 11:47:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:51.027 11:47:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:51.027 11:47:20 -- common/autotest_common.sh@10 -- # set +x 00:23:51.027 11:47:20 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:51.027 11:47:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:51.027 11:47:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:51.027 11:47:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 
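Before the tc2 setup above got underway, killprocess tore down the tc1 target (autotest_common.sh@926-950). A rough reconstruction of its checks, which keep the helper from signalling a recycled or privileged PID:

    killprocess() {
        local pid=$1 process_name
        kill -0 "$pid" || return 1                           # @930: pid still alive?
        if [[ $(uname) == Linux ]]; then                     # @931
            process_name=$(ps --no-headers -o comm= "$pid")  # @932
        fi
        [[ $process_name == sudo ]] && return 1              # @936: never kill sudo itself
        echo "killing process with pid $pid"                 # @944
        kill "$pid" && wait "$pid"                           # @945/@950
    }

Here ps reports reactor_1, so the PID really is the SPDK app and the kill proceeds.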
00:23:51.027 11:47:20 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:51.027 11:47:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:51.027 11:47:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:51.027 11:47:20 -- nvmf/common.sh@294 -- # net_devs=() 00:23:51.027 11:47:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:51.027 11:47:20 -- nvmf/common.sh@295 -- # e810=() 00:23:51.027 11:47:20 -- nvmf/common.sh@295 -- # local -ga e810 00:23:51.027 11:47:20 -- nvmf/common.sh@296 -- # x722=() 00:23:51.027 11:47:20 -- nvmf/common.sh@296 -- # local -ga x722 00:23:51.027 11:47:20 -- nvmf/common.sh@297 -- # mlx=() 00:23:51.027 11:47:20 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:51.027 11:47:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:51.027 11:47:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:51.027 11:47:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:51.027 11:47:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:51.027 11:47:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:51.027 11:47:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:51.027 11:47:20 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:51.027 11:47:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:51.027 11:47:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:51.027 11:47:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:51.027 11:47:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:51.027 11:47:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:51.027 11:47:20 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:23:51.027 11:47:20 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:23:51.027 11:47:20 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:23:51.027 11:47:20 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:23:51.027 11:47:20 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:23:51.027 11:47:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:51.027 11:47:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:51.027 11:47:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:51.027 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:51.027 11:47:20 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:51.027 11:47:20 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:51.027 11:47:20 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:51.027 11:47:20 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:51.027 11:47:20 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:51.027 11:47:20 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:51.027 11:47:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:51.027 11:47:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:51.027 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:51.027 11:47:20 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:51.027 11:47:20 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:51.027 11:47:20 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:51.027 11:47:20 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:51.027 11:47:20 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:51.027 11:47:20 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect 
-i 15' 00:23:51.027 11:47:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:51.027 11:47:20 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:23:51.027 11:47:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:51.027 11:47:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.027 11:47:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:51.027 11:47:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.027 11:47:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:51.027 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:51.027 11:47:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.027 11:47:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:51.027 11:47:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.027 11:47:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:51.027 11:47:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.027 11:47:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:51.027 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:51.027 11:47:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.027 11:47:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:51.027 11:47:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:51.027 11:47:20 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:51.027 11:47:20 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:23:51.027 11:47:20 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:23:51.027 11:47:20 -- nvmf/common.sh@408 -- # rdma_device_init 00:23:51.027 11:47:20 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:23:51.027 11:47:20 -- nvmf/common.sh@57 -- # uname 00:23:51.027 11:47:20 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:23:51.027 11:47:20 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:23:51.027 11:47:20 -- nvmf/common.sh@62 -- # modprobe ib_core 00:23:51.027 11:47:20 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:23:51.027 11:47:20 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:23:51.027 11:47:20 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:23:51.027 11:47:20 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:23:51.027 11:47:20 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:23:51.027 11:47:20 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:23:51.027 11:47:20 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:51.027 11:47:20 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:23:51.027 11:47:20 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:51.027 11:47:20 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:51.027 11:47:20 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:51.027 11:47:20 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:51.027 11:47:20 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:51.027 11:47:20 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:51.027 11:47:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:51.027 11:47:20 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:51.027 11:47:20 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:51.027 11:47:20 -- nvmf/common.sh@104 -- # continue 2 00:23:51.027 11:47:20 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:51.027 11:47:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:51.027 11:47:20 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\0 ]] 00:23:51.027 11:47:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:51.027 11:47:20 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:51.027 11:47:20 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:51.027 11:47:20 -- nvmf/common.sh@104 -- # continue 2 00:23:51.027 11:47:20 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:51.027 11:47:20 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:23:51.027 11:47:20 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:51.027 11:47:20 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:51.027 11:47:20 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:51.027 11:47:20 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:51.027 11:47:20 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:23:51.027 11:47:20 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:23:51.027 11:47:20 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:23:51.027 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:51.027 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:51.027 altname enp217s0f0np0 00:23:51.027 altname ens818f0np0 00:23:51.027 inet 192.168.100.8/24 scope global mlx_0_0 00:23:51.027 valid_lft forever preferred_lft forever 00:23:51.027 11:47:20 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:51.027 11:47:20 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:23:51.027 11:47:20 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:51.027 11:47:20 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:51.027 11:47:20 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:51.027 11:47:20 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:51.027 11:47:20 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:23:51.027 11:47:20 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:23:51.027 11:47:20 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:23:51.027 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:51.027 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:51.027 altname enp217s0f1np1 00:23:51.027 altname ens818f1np1 00:23:51.027 inet 192.168.100.9/24 scope global mlx_0_1 00:23:51.027 valid_lft forever preferred_lft forever 00:23:51.027 11:47:20 -- nvmf/common.sh@410 -- # return 0 00:23:51.027 11:47:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:51.027 11:47:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:51.027 11:47:20 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:23:51.027 11:47:20 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:23:51.027 11:47:20 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:23:51.027 11:47:20 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:51.027 11:47:20 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:51.027 11:47:20 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:51.027 11:47:20 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:51.027 11:47:20 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:51.027 11:47:20 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:51.027 11:47:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:51.027 11:47:20 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:51.027 11:47:20 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:51.027 11:47:20 -- nvmf/common.sh@104 -- # continue 2 00:23:51.027 11:47:20 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:51.027 11:47:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:23:51.027 11:47:20 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:51.027 11:47:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:51.027 11:47:20 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:51.027 11:47:20 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:51.027 11:47:20 -- nvmf/common.sh@104 -- # continue 2 00:23:51.027 11:47:20 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:51.027 11:47:20 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:23:51.027 11:47:20 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:51.027 11:47:20 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:51.027 11:47:20 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:51.027 11:47:20 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:51.285 11:47:20 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:51.285 11:47:20 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:23:51.285 11:47:20 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:51.285 11:47:20 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:51.285 11:47:20 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:51.285 11:47:20 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:51.285 11:47:20 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:23:51.285 192.168.100.9' 00:23:51.285 11:47:20 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:23:51.285 192.168.100.9' 00:23:51.285 11:47:20 -- nvmf/common.sh@445 -- # head -n 1 00:23:51.285 11:47:20 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:51.285 11:47:20 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:51.285 192.168.100.9' 00:23:51.285 11:47:20 -- nvmf/common.sh@446 -- # tail -n +2 00:23:51.285 11:47:20 -- nvmf/common.sh@446 -- # head -n 1 00:23:51.285 11:47:20 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:51.285 11:47:20 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:23:51.285 11:47:20 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:51.285 11:47:20 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:23:51.285 11:47:20 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:23:51.285 11:47:20 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:23:51.285 11:47:20 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:51.285 11:47:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:51.285 11:47:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:51.285 11:47:20 -- common/autotest_common.sh@10 -- # set +x 00:23:51.285 11:47:20 -- nvmf/common.sh@469 -- # nvmfpid=2442502 00:23:51.285 11:47:20 -- nvmf/common.sh@470 -- # waitforlisten 2442502 00:23:51.285 11:47:20 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:51.285 11:47:20 -- common/autotest_common.sh@819 -- # '[' -z 2442502 ']' 00:23:51.285 11:47:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.285 11:47:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:51.285 11:47:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:51.285 11:47:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:51.285 11:47:20 -- common/autotest_common.sh@10 -- # set +x 00:23:51.285 [2024-07-21 11:47:20.568665] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:23:51.285 [2024-07-21 11:47:20.568720] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.285 EAL: No free 2048 kB hugepages reported on node 1 00:23:51.285 [2024-07-21 11:47:20.658146] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:51.285 [2024-07-21 11:47:20.697299] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:51.285 [2024-07-21 11:47:20.697427] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:51.285 [2024-07-21 11:47:20.697438] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:51.285 [2024-07-21 11:47:20.697447] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:51.285 [2024-07-21 11:47:20.697552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:51.285 [2024-07-21 11:47:20.697579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:51.285 [2024-07-21 11:47:20.697619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.285 [2024-07-21 11:47:20.697620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:52.214 11:47:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:52.214 11:47:21 -- common/autotest_common.sh@852 -- # return 0 00:23:52.214 11:47:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:52.214 11:47:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:52.214 11:47:21 -- common/autotest_common.sh@10 -- # set +x 00:23:52.214 11:47:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:52.214 11:47:21 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:52.214 11:47:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:52.214 11:47:21 -- common/autotest_common.sh@10 -- # set +x 00:23:52.214 [2024-07-21 11:47:21.448862] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x10567a0/0x105ac90) succeed. 00:23:52.214 [2024-07-21 11:47:21.459249] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1057d90/0x109c320) succeed. 
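The tc2 target is again started with -m 0x1E, and the reactor notices above show the effect: 0x1E is binary 11110, so cores 1-4 run the target's reactors while core 0 stays free for the bdevperf side (which runs with -m 0x1). A quick decode of any such mask:

    mask=0x1E
    for core in {0..31}; do
        (( (mask >> core) & 1 )) && echo "reactor on core $core"
    done
    # prints cores 1, 2, 3 and 4 for 0x1E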
00:23:52.214 11:47:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:52.214 11:47:21 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:52.214 11:47:21 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:52.214 11:47:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:52.214 11:47:21 -- common/autotest_common.sh@10 -- # set +x 00:23:52.214 11:47:21 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:52.214 11:47:21 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:52.214 11:47:21 -- target/shutdown.sh@28 -- # cat 00:23:52.214 11:47:21 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:52.214 11:47:21 -- target/shutdown.sh@28 -- # cat 00:23:52.214 11:47:21 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:52.214 11:47:21 -- target/shutdown.sh@28 -- # cat 00:23:52.214 11:47:21 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:52.214 11:47:21 -- target/shutdown.sh@28 -- # cat 00:23:52.214 11:47:21 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:52.214 11:47:21 -- target/shutdown.sh@28 -- # cat 00:23:52.214 11:47:21 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:52.214 11:47:21 -- target/shutdown.sh@28 -- # cat 00:23:52.214 11:47:21 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:52.214 11:47:21 -- target/shutdown.sh@28 -- # cat 00:23:52.214 11:47:21 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:52.214 11:47:21 -- target/shutdown.sh@28 -- # cat 00:23:52.214 11:47:21 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:52.214 11:47:21 -- target/shutdown.sh@28 -- # cat 00:23:52.214 11:47:21 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:52.214 11:47:21 -- target/shutdown.sh@28 -- # cat 00:23:52.469 11:47:21 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:52.469 11:47:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:52.469 11:47:21 -- common/autotest_common.sh@10 -- # set +x 00:23:52.469 Malloc1 00:23:52.469 [2024-07-21 11:47:21.681517] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:52.469 Malloc2 00:23:52.469 Malloc3 00:23:52.469 Malloc4 00:23:52.469 Malloc5 00:23:52.724 Malloc6 00:23:52.724 Malloc7 00:23:52.724 Malloc8 00:23:52.724 Malloc9 00:23:52.724 Malloc10 00:23:52.724 11:47:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:52.724 11:47:22 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:52.724 11:47:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:52.724 11:47:22 -- common/autotest_common.sh@10 -- # set +x 00:23:52.724 11:47:22 -- target/shutdown.sh@102 -- # perfpid=2442822 00:23:52.724 11:47:22 -- target/shutdown.sh@103 -- # waitforlisten 2442822 /var/tmp/bdevperf.sock 00:23:52.724 11:47:22 -- common/autotest_common.sh@819 -- # '[' -z 2442822 ']' 00:23:52.724 11:47:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:52.724 11:47:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:52.724 11:47:22 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:52.724 11:47:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:52.724 11:47:22 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:52.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:52.724 11:47:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:52.724 11:47:22 -- nvmf/common.sh@520 -- # config=() 00:23:52.724 11:47:22 -- common/autotest_common.sh@10 -- # set +x 00:23:52.724 11:47:22 -- nvmf/common.sh@520 -- # local subsystem config 00:23:52.724 11:47:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:52.724 11:47:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:52.724 { 00:23:52.724 "params": { 00:23:52.725 "name": "Nvme$subsystem", 00:23:52.725 "trtype": "$TEST_TRANSPORT", 00:23:52.725 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:52.725 "adrfam": "ipv4", 00:23:52.725 "trsvcid": "$NVMF_PORT", 00:23:52.725 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:52.725 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:52.725 "hdgst": ${hdgst:-false}, 00:23:52.725 "ddgst": ${ddgst:-false} 00:23:52.725 }, 00:23:52.725 "method": "bdev_nvme_attach_controller" 00:23:52.725 } 00:23:52.725 EOF 00:23:52.725 )") 00:23:52.725 11:47:22 -- nvmf/common.sh@542 -- # cat 00:23:52.725 11:47:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:52.725 11:47:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:52.725 { 00:23:52.725 "params": { 00:23:52.725 "name": "Nvme$subsystem", 00:23:52.725 "trtype": "$TEST_TRANSPORT", 00:23:52.725 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:52.725 "adrfam": "ipv4", 00:23:52.725 "trsvcid": "$NVMF_PORT", 00:23:52.725 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:52.725 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:52.725 "hdgst": ${hdgst:-false}, 00:23:52.725 "ddgst": ${ddgst:-false} 00:23:52.725 }, 00:23:52.725 "method": "bdev_nvme_attach_controller" 00:23:52.725 } 00:23:52.725 EOF 00:23:52.725 )") 00:23:52.725 11:47:22 -- nvmf/common.sh@542 -- # cat 00:23:52.981 11:47:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:52.981 11:47:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:52.981 { 00:23:52.981 "params": { 00:23:52.981 "name": "Nvme$subsystem", 00:23:52.981 "trtype": "$TEST_TRANSPORT", 00:23:52.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:52.981 "adrfam": "ipv4", 00:23:52.981 "trsvcid": "$NVMF_PORT", 00:23:52.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:52.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:52.981 "hdgst": ${hdgst:-false}, 00:23:52.981 "ddgst": ${ddgst:-false} 00:23:52.981 }, 00:23:52.981 "method": "bdev_nvme_attach_controller" 00:23:52.981 } 00:23:52.981 EOF 00:23:52.981 )") 00:23:52.981 11:47:22 -- nvmf/common.sh@542 -- # cat 00:23:52.981 11:47:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:52.981 11:47:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:52.981 { 00:23:52.981 "params": { 00:23:52.981 "name": "Nvme$subsystem", 00:23:52.981 "trtype": "$TEST_TRANSPORT", 00:23:52.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:52.981 "adrfam": "ipv4", 00:23:52.981 "trsvcid": "$NVMF_PORT", 00:23:52.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:52.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:52.981 "hdgst": ${hdgst:-false}, 00:23:52.981 "ddgst": ${ddgst:-false} 00:23:52.981 }, 00:23:52.981 "method": "bdev_nvme_attach_controller" 00:23:52.981 } 00:23:52.981 EOF 00:23:52.981 )") 00:23:52.981 11:47:22 -- nvmf/common.sh@542 -- # cat 00:23:52.981 11:47:22 -- nvmf/common.sh@522 -- # 
for subsystem in "${@:-1}" 00:23:52.981 11:47:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:52.981 { 00:23:52.981 "params": { 00:23:52.981 "name": "Nvme$subsystem", 00:23:52.981 "trtype": "$TEST_TRANSPORT", 00:23:52.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:52.981 "adrfam": "ipv4", 00:23:52.981 "trsvcid": "$NVMF_PORT", 00:23:52.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:52.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:52.981 "hdgst": ${hdgst:-false}, 00:23:52.981 "ddgst": ${ddgst:-false} 00:23:52.981 }, 00:23:52.981 "method": "bdev_nvme_attach_controller" 00:23:52.981 } 00:23:52.981 EOF 00:23:52.981 )") 00:23:52.981 11:47:22 -- nvmf/common.sh@542 -- # cat 00:23:52.981 [2024-07-21 11:47:22.175212] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:23:52.981 [2024-07-21 11:47:22.175269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2442822 ] 00:23:52.981 11:47:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:52.981 11:47:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:52.981 { 00:23:52.981 "params": { 00:23:52.981 "name": "Nvme$subsystem", 00:23:52.981 "trtype": "$TEST_TRANSPORT", 00:23:52.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:52.981 "adrfam": "ipv4", 00:23:52.981 "trsvcid": "$NVMF_PORT", 00:23:52.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:52.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:52.981 "hdgst": ${hdgst:-false}, 00:23:52.981 "ddgst": ${ddgst:-false} 00:23:52.981 }, 00:23:52.981 "method": "bdev_nvme_attach_controller" 00:23:52.981 } 00:23:52.981 EOF 00:23:52.981 )") 00:23:52.981 11:47:22 -- nvmf/common.sh@542 -- # cat 00:23:52.981 11:47:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:52.981 11:47:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:52.981 { 00:23:52.981 "params": { 00:23:52.981 "name": "Nvme$subsystem", 00:23:52.981 "trtype": "$TEST_TRANSPORT", 00:23:52.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:52.981 "adrfam": "ipv4", 00:23:52.981 "trsvcid": "$NVMF_PORT", 00:23:52.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:52.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:52.981 "hdgst": ${hdgst:-false}, 00:23:52.981 "ddgst": ${ddgst:-false} 00:23:52.981 }, 00:23:52.981 "method": "bdev_nvme_attach_controller" 00:23:52.981 } 00:23:52.981 EOF 00:23:52.981 )") 00:23:52.981 11:47:22 -- nvmf/common.sh@542 -- # cat 00:23:52.981 11:47:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:52.981 11:47:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:52.981 { 00:23:52.981 "params": { 00:23:52.981 "name": "Nvme$subsystem", 00:23:52.981 "trtype": "$TEST_TRANSPORT", 00:23:52.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:52.981 "adrfam": "ipv4", 00:23:52.981 "trsvcid": "$NVMF_PORT", 00:23:52.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:52.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:52.981 "hdgst": ${hdgst:-false}, 00:23:52.981 "ddgst": ${ddgst:-false} 00:23:52.981 }, 00:23:52.981 "method": "bdev_nvme_attach_controller" 00:23:52.981 } 00:23:52.981 EOF 00:23:52.981 )") 00:23:52.981 11:47:22 -- nvmf/common.sh@542 -- # cat 00:23:52.981 11:47:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:52.981 11:47:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:52.981 { 00:23:52.981 
"params": { 00:23:52.981 "name": "Nvme$subsystem", 00:23:52.981 "trtype": "$TEST_TRANSPORT", 00:23:52.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:52.981 "adrfam": "ipv4", 00:23:52.981 "trsvcid": "$NVMF_PORT", 00:23:52.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:52.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:52.981 "hdgst": ${hdgst:-false}, 00:23:52.981 "ddgst": ${ddgst:-false} 00:23:52.981 }, 00:23:52.981 "method": "bdev_nvme_attach_controller" 00:23:52.981 } 00:23:52.981 EOF 00:23:52.981 )") 00:23:52.981 11:47:22 -- nvmf/common.sh@542 -- # cat 00:23:52.981 11:47:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:52.981 11:47:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:52.981 { 00:23:52.981 "params": { 00:23:52.981 "name": "Nvme$subsystem", 00:23:52.981 "trtype": "$TEST_TRANSPORT", 00:23:52.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:52.981 "adrfam": "ipv4", 00:23:52.981 "trsvcid": "$NVMF_PORT", 00:23:52.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:52.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:52.981 "hdgst": ${hdgst:-false}, 00:23:52.981 "ddgst": ${ddgst:-false} 00:23:52.981 }, 00:23:52.981 "method": "bdev_nvme_attach_controller" 00:23:52.981 } 00:23:52.981 EOF 00:23:52.981 )") 00:23:52.981 11:47:22 -- nvmf/common.sh@542 -- # cat 00:23:52.981 11:47:22 -- nvmf/common.sh@544 -- # jq . 00:23:52.981 EAL: No free 2048 kB hugepages reported on node 1 00:23:52.981 11:47:22 -- nvmf/common.sh@545 -- # IFS=, 00:23:52.981 11:47:22 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:52.981 "params": { 00:23:52.981 "name": "Nvme1", 00:23:52.981 "trtype": "rdma", 00:23:52.981 "traddr": "192.168.100.8", 00:23:52.981 "adrfam": "ipv4", 00:23:52.981 "trsvcid": "4420", 00:23:52.982 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.982 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:52.982 "hdgst": false, 00:23:52.982 "ddgst": false 00:23:52.982 }, 00:23:52.982 "method": "bdev_nvme_attach_controller" 00:23:52.982 },{ 00:23:52.982 "params": { 00:23:52.982 "name": "Nvme2", 00:23:52.982 "trtype": "rdma", 00:23:52.982 "traddr": "192.168.100.8", 00:23:52.982 "adrfam": "ipv4", 00:23:52.982 "trsvcid": "4420", 00:23:52.982 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:52.982 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:52.982 "hdgst": false, 00:23:52.982 "ddgst": false 00:23:52.982 }, 00:23:52.982 "method": "bdev_nvme_attach_controller" 00:23:52.982 },{ 00:23:52.982 "params": { 00:23:52.982 "name": "Nvme3", 00:23:52.982 "trtype": "rdma", 00:23:52.982 "traddr": "192.168.100.8", 00:23:52.982 "adrfam": "ipv4", 00:23:52.982 "trsvcid": "4420", 00:23:52.982 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:52.982 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:52.982 "hdgst": false, 00:23:52.982 "ddgst": false 00:23:52.982 }, 00:23:52.982 "method": "bdev_nvme_attach_controller" 00:23:52.982 },{ 00:23:52.982 "params": { 00:23:52.982 "name": "Nvme4", 00:23:52.982 "trtype": "rdma", 00:23:52.982 "traddr": "192.168.100.8", 00:23:52.982 "adrfam": "ipv4", 00:23:52.982 "trsvcid": "4420", 00:23:52.982 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:52.982 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:52.982 "hdgst": false, 00:23:52.982 "ddgst": false 00:23:52.982 }, 00:23:52.982 "method": "bdev_nvme_attach_controller" 00:23:52.982 },{ 00:23:52.982 "params": { 00:23:52.982 "name": "Nvme5", 00:23:52.982 "trtype": "rdma", 00:23:52.982 "traddr": "192.168.100.8", 00:23:52.982 "adrfam": "ipv4", 00:23:52.982 "trsvcid": "4420", 00:23:52.982 "subnqn": 
"nqn.2016-06.io.spdk:cnode5", 00:23:52.982 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:52.982 "hdgst": false, 00:23:52.982 "ddgst": false 00:23:52.982 }, 00:23:52.982 "method": "bdev_nvme_attach_controller" 00:23:52.982 },{ 00:23:52.982 "params": { 00:23:52.982 "name": "Nvme6", 00:23:52.982 "trtype": "rdma", 00:23:52.982 "traddr": "192.168.100.8", 00:23:52.982 "adrfam": "ipv4", 00:23:52.982 "trsvcid": "4420", 00:23:52.982 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:52.982 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:52.982 "hdgst": false, 00:23:52.982 "ddgst": false 00:23:52.982 }, 00:23:52.982 "method": "bdev_nvme_attach_controller" 00:23:52.982 },{ 00:23:52.982 "params": { 00:23:52.982 "name": "Nvme7", 00:23:52.982 "trtype": "rdma", 00:23:52.982 "traddr": "192.168.100.8", 00:23:52.982 "adrfam": "ipv4", 00:23:52.982 "trsvcid": "4420", 00:23:52.982 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:52.982 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:52.982 "hdgst": false, 00:23:52.982 "ddgst": false 00:23:52.982 }, 00:23:52.982 "method": "bdev_nvme_attach_controller" 00:23:52.982 },{ 00:23:52.982 "params": { 00:23:52.982 "name": "Nvme8", 00:23:52.982 "trtype": "rdma", 00:23:52.982 "traddr": "192.168.100.8", 00:23:52.982 "adrfam": "ipv4", 00:23:52.982 "trsvcid": "4420", 00:23:52.982 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:52.982 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:52.982 "hdgst": false, 00:23:52.982 "ddgst": false 00:23:52.982 }, 00:23:52.982 "method": "bdev_nvme_attach_controller" 00:23:52.982 },{ 00:23:52.982 "params": { 00:23:52.982 "name": "Nvme9", 00:23:52.982 "trtype": "rdma", 00:23:52.982 "traddr": "192.168.100.8", 00:23:52.982 "adrfam": "ipv4", 00:23:52.982 "trsvcid": "4420", 00:23:52.982 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:52.982 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:52.982 "hdgst": false, 00:23:52.982 "ddgst": false 00:23:52.982 }, 00:23:52.982 "method": "bdev_nvme_attach_controller" 00:23:52.982 },{ 00:23:52.982 "params": { 00:23:52.982 "name": "Nvme10", 00:23:52.982 "trtype": "rdma", 00:23:52.982 "traddr": "192.168.100.8", 00:23:52.982 "adrfam": "ipv4", 00:23:52.982 "trsvcid": "4420", 00:23:52.982 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:52.982 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:52.982 "hdgst": false, 00:23:52.982 "ddgst": false 00:23:52.982 }, 00:23:52.982 "method": "bdev_nvme_attach_controller" 00:23:52.982 }' 00:23:52.982 [2024-07-21 11:47:22.263979] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.982 [2024-07-21 11:47:22.300112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.909 Running I/O for 10 seconds... 
00:23:54.472 11:47:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:54.472 11:47:23 -- common/autotest_common.sh@852 -- # return 0 00:23:54.472 11:47:23 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:54.472 11:47:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.472 11:47:23 -- common/autotest_common.sh@10 -- # set +x 00:23:54.472 11:47:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.472 11:47:23 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:54.472 11:47:23 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:54.472 11:47:23 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:54.472 11:47:23 -- target/shutdown.sh@57 -- # local ret=1 00:23:54.472 11:47:23 -- target/shutdown.sh@58 -- # local i 00:23:54.472 11:47:23 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:54.472 11:47:23 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:54.472 11:47:23 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:54.472 11:47:23 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:54.472 11:47:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.472 11:47:23 -- common/autotest_common.sh@10 -- # set +x 00:23:54.728 11:47:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.728 11:47:23 -- target/shutdown.sh@60 -- # read_io_count=446 00:23:54.728 11:47:23 -- target/shutdown.sh@63 -- # '[' 446 -ge 100 ']' 00:23:54.728 11:47:23 -- target/shutdown.sh@64 -- # ret=0 00:23:54.728 11:47:23 -- target/shutdown.sh@65 -- # break 00:23:54.728 11:47:23 -- target/shutdown.sh@69 -- # return 0 00:23:54.728 11:47:23 -- target/shutdown.sh@109 -- # killprocess 2442822 00:23:54.728 11:47:23 -- common/autotest_common.sh@926 -- # '[' -z 2442822 ']' 00:23:54.728 11:47:23 -- common/autotest_common.sh@930 -- # kill -0 2442822 00:23:54.728 11:47:23 -- common/autotest_common.sh@931 -- # uname 00:23:54.728 11:47:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:54.728 11:47:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2442822 00:23:54.728 11:47:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:54.728 11:47:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:54.728 11:47:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2442822' 00:23:54.728 killing process with pid 2442822 00:23:54.728 11:47:23 -- common/autotest_common.sh@945 -- # kill 2442822 00:23:54.728 11:47:23 -- common/autotest_common.sh@950 -- # wait 2442822 00:23:54.728 Received shutdown signal, test time was about 0.886742 seconds 00:23:54.728 00:23:54.728 Latency(us) 00:23:54.728 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.728 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:54.728 Verification LBA range: start 0x0 length 0x400 00:23:54.728 Nvme1n1 : 0.88 724.27 45.27 0.00 0.00 87053.11 7287.60 120795.96 00:23:54.728 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:54.728 Verification LBA range: start 0x0 length 0x400 00:23:54.728 Nvme2n1 : 0.88 747.32 46.71 0.00 0.00 83733.19 7549.75 75078.04 00:23:54.728 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:54.728 Verification LBA range: start 0x0 length 0x400 00:23:54.729 Nvme3n1 : 0.88 746.44 46.65 0.00 0.00 83164.94 7811.89 72142.03 00:23:54.729 Job: Nvme4n1 (Core Mask 0x1, workload: 
verify, depth: 64, IO size: 65536) 00:23:54.729 Verification LBA range: start 0x0 length 0x400 00:23:54.729 Nvme4n1 : 0.88 745.56 46.60 0.00 0.00 82661.84 8074.04 70464.31 00:23:54.729 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:54.729 Verification LBA range: start 0x0 length 0x400 00:23:54.729 Nvme5n1 : 0.88 744.67 46.54 0.00 0.00 82161.20 8336.18 70044.88 00:23:54.729 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:54.729 Verification LBA range: start 0x0 length 0x400 00:23:54.729 Nvme6n1 : 0.88 743.79 46.49 0.00 0.00 81641.95 8545.89 71722.60 00:23:54.729 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:54.729 Verification LBA range: start 0x0 length 0x400 00:23:54.729 Nvme7n1 : 0.88 742.90 46.43 0.00 0.00 81113.83 8860.47 73400.32 00:23:54.729 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:54.729 Verification LBA range: start 0x0 length 0x400 00:23:54.729 Nvme8n1 : 0.88 742.03 46.38 0.00 0.00 80610.40 9070.18 74658.61 00:23:54.729 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:54.729 Verification LBA range: start 0x0 length 0x400 00:23:54.729 Nvme9n1 : 0.89 741.14 46.32 0.00 0.00 80088.19 9332.33 76336.33 00:23:54.729 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:54.729 Verification LBA range: start 0x0 length 0x400 00:23:54.729 Nvme10n1 : 0.89 489.86 30.62 0.00 0.00 119841.42 7654.60 335544.32 00:23:54.729 =================================================================================================================== 00:23:54.729 Total : 7167.98 448.00 0.00 0.00 85022.17 7287.60 335544.32 00:23:54.985 11:47:24 -- target/shutdown.sh@112 -- # sleep 1 00:23:55.924 11:47:25 -- target/shutdown.sh@113 -- # kill -0 2442502 00:23:55.925 11:47:25 -- target/shutdown.sh@115 -- # stoptarget 00:23:55.925 11:47:25 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:55.925 11:47:25 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:55.925 11:47:25 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:55.925 11:47:25 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:55.925 11:47:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:55.925 11:47:25 -- nvmf/common.sh@116 -- # sync 00:23:55.925 11:47:25 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:23:55.925 11:47:25 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:23:55.925 11:47:25 -- nvmf/common.sh@119 -- # set +e 00:23:56.196 11:47:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:56.196 11:47:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:23:56.196 rmmod nvme_rdma 00:23:56.196 rmmod nvme_fabrics 00:23:56.196 11:47:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:56.196 11:47:25 -- nvmf/common.sh@123 -- # set -e 00:23:56.196 11:47:25 -- nvmf/common.sh@124 -- # return 0 00:23:56.196 11:47:25 -- nvmf/common.sh@477 -- # '[' -n 2442502 ']' 00:23:56.196 11:47:25 -- nvmf/common.sh@478 -- # killprocess 2442502 00:23:56.196 11:47:25 -- common/autotest_common.sh@926 -- # '[' -z 2442502 ']' 00:23:56.196 11:47:25 -- common/autotest_common.sh@930 -- # kill -0 2442502 00:23:56.196 11:47:25 -- common/autotest_common.sh@931 -- # uname 00:23:56.196 11:47:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:56.196 11:47:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o 
comm= 2442502 00:23:56.196 11:47:25 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:23:56.196 11:47:25 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:23:56.196 11:47:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2442502' 00:23:56.196 killing process with pid 2442502 00:23:56.196 11:47:25 -- common/autotest_common.sh@945 -- # kill 2442502 00:23:56.196 11:47:25 -- common/autotest_common.sh@950 -- # wait 2442502 00:23:56.763 11:47:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:56.763 11:47:25 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:23:56.763 00:23:56.763 real 0m5.659s 00:23:56.763 user 0m22.751s 00:23:56.763 sys 0m1.271s 00:23:56.763 11:47:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:56.763 11:47:25 -- common/autotest_common.sh@10 -- # set +x 00:23:56.763 ************************************ 00:23:56.763 END TEST nvmf_shutdown_tc2 00:23:56.763 ************************************ 00:23:56.763 11:47:25 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:56.763 11:47:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:56.763 11:47:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:56.763 11:47:25 -- common/autotest_common.sh@10 -- # set +x 00:23:56.763 ************************************ 00:23:56.763 START TEST nvmf_shutdown_tc3 00:23:56.763 ************************************ 00:23:56.763 11:47:25 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc3 00:23:56.763 11:47:25 -- target/shutdown.sh@120 -- # starttarget 00:23:56.763 11:47:25 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:56.763 11:47:25 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:23:56.763 11:47:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:56.763 11:47:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:56.763 11:47:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:56.763 11:47:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:56.763 11:47:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.763 11:47:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:56.763 11:47:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.763 11:47:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:56.763 11:47:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:56.763 11:47:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:56.763 11:47:25 -- common/autotest_common.sh@10 -- # set +x 00:23:56.763 11:47:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:56.763 11:47:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:56.763 11:47:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:56.763 11:47:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:56.763 11:47:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:56.763 11:47:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:56.763 11:47:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:56.763 11:47:25 -- nvmf/common.sh@294 -- # net_devs=() 00:23:56.763 11:47:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:56.763 11:47:25 -- nvmf/common.sh@295 -- # e810=() 00:23:56.763 11:47:25 -- nvmf/common.sh@295 -- # local -ga e810 00:23:56.763 11:47:25 -- nvmf/common.sh@296 -- # x722=() 00:23:56.763 11:47:25 -- nvmf/common.sh@296 -- # local -ga x722 00:23:56.763 11:47:25 -- nvmf/common.sh@297 -- # mlx=() 00:23:56.763 11:47:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:56.763 11:47:25 -- 
nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:56.763 11:47:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:56.763 11:47:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:56.763 11:47:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:56.763 11:47:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:56.763 11:47:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:56.763 11:47:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:56.763 11:47:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:56.763 11:47:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:56.763 11:47:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:56.763 11:47:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:56.763 11:47:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:56.763 11:47:25 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:23:56.763 11:47:25 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:23:56.763 11:47:25 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:23:56.763 11:47:25 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:23:56.763 11:47:25 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:23:56.763 11:47:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:56.763 11:47:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:56.763 11:47:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:56.763 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:56.763 11:47:25 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:56.763 11:47:25 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:56.763 11:47:25 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:56.763 11:47:25 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:56.763 11:47:25 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:56.763 11:47:25 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:56.763 11:47:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:56.763 11:47:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:56.763 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:56.763 11:47:25 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:56.763 11:47:25 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:56.763 11:47:25 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:56.763 11:47:25 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:56.763 11:47:25 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:56.763 11:47:25 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:56.763 11:47:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:56.763 11:47:25 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:23:56.763 11:47:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:56.763 11:47:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.763 11:47:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:56.763 11:47:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.763 11:47:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:56.763 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:56.763 11:47:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.763 
11:47:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:56.763 11:47:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.763 11:47:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:56.763 11:47:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.763 11:47:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:56.763 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:56.763 11:47:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.763 11:47:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:56.763 11:47:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:56.763 11:47:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:56.763 11:47:25 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:23:56.763 11:47:25 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:23:56.763 11:47:25 -- nvmf/common.sh@408 -- # rdma_device_init 00:23:56.763 11:47:25 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:23:56.763 11:47:25 -- nvmf/common.sh@57 -- # uname 00:23:56.763 11:47:25 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:23:56.763 11:47:25 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:23:56.763 11:47:25 -- nvmf/common.sh@62 -- # modprobe ib_core 00:23:56.763 11:47:25 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:23:56.763 11:47:25 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:23:56.763 11:47:25 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:23:56.763 11:47:25 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:23:56.763 11:47:25 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:23:56.763 11:47:26 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:23:56.763 11:47:26 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:56.763 11:47:26 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:23:56.763 11:47:26 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:56.763 11:47:26 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:56.763 11:47:26 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:56.763 11:47:26 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:56.763 11:47:26 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:56.763 11:47:26 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:56.763 11:47:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:56.763 11:47:26 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:56.763 11:47:26 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:56.763 11:47:26 -- nvmf/common.sh@104 -- # continue 2 00:23:56.763 11:47:26 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:56.763 11:47:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:56.763 11:47:26 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:56.763 11:47:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:56.763 11:47:26 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:56.763 11:47:26 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:56.763 11:47:26 -- nvmf/common.sh@104 -- # continue 2 00:23:56.763 11:47:26 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:56.763 11:47:26 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:23:56.763 11:47:26 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:56.763 11:47:26 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:56.763 11:47:26 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:56.763 11:47:26 -- nvmf/common.sh@112 -- # cut -d/ 
-f1 00:23:56.763 11:47:26 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:23:56.763 11:47:26 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:23:56.763 11:47:26 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:23:56.763 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:56.763 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:56.763 altname enp217s0f0np0 00:23:56.763 altname ens818f0np0 00:23:56.763 inet 192.168.100.8/24 scope global mlx_0_0 00:23:56.763 valid_lft forever preferred_lft forever 00:23:56.763 11:47:26 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:56.763 11:47:26 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:23:56.763 11:47:26 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:56.763 11:47:26 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:56.763 11:47:26 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:56.763 11:47:26 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:56.763 11:47:26 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:23:56.763 11:47:26 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:23:56.763 11:47:26 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:23:56.763 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:56.763 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:56.763 altname enp217s0f1np1 00:23:56.763 altname ens818f1np1 00:23:56.763 inet 192.168.100.9/24 scope global mlx_0_1 00:23:56.763 valid_lft forever preferred_lft forever 00:23:56.763 11:47:26 -- nvmf/common.sh@410 -- # return 0 00:23:56.763 11:47:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:56.763 11:47:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:56.763 11:47:26 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:23:56.763 11:47:26 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:23:56.763 11:47:26 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:23:56.763 11:47:26 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:56.763 11:47:26 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:56.763 11:47:26 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:56.763 11:47:26 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:56.763 11:47:26 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:56.763 11:47:26 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:56.763 11:47:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:56.763 11:47:26 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:56.763 11:47:26 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:56.763 11:47:26 -- nvmf/common.sh@104 -- # continue 2 00:23:56.763 11:47:26 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:56.763 11:47:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:56.763 11:47:26 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:56.763 11:47:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:56.763 11:47:26 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:56.763 11:47:26 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:56.763 11:47:26 -- nvmf/common.sh@104 -- # continue 2 00:23:56.763 11:47:26 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:56.763 11:47:26 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:23:56.763 11:47:26 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:56.763 11:47:26 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:56.763 11:47:26 -- 
nvmf/common.sh@112 -- # awk '{print $4}' 00:23:56.763 11:47:26 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:56.763 11:47:26 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:56.763 11:47:26 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:23:56.763 11:47:26 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:56.763 11:47:26 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:56.763 11:47:26 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:56.763 11:47:26 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:56.763 11:47:26 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:23:56.763 192.168.100.9' 00:23:56.763 11:47:26 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:23:56.763 192.168.100.9' 00:23:56.763 11:47:26 -- nvmf/common.sh@445 -- # head -n 1 00:23:56.763 11:47:26 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:56.763 11:47:26 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:56.763 192.168.100.9' 00:23:56.763 11:47:26 -- nvmf/common.sh@446 -- # tail -n +2 00:23:56.763 11:47:26 -- nvmf/common.sh@446 -- # head -n 1 00:23:56.763 11:47:26 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:56.763 11:47:26 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:23:56.763 11:47:26 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:56.763 11:47:26 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:23:56.763 11:47:26 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:23:56.763 11:47:26 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:23:56.763 11:47:26 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:56.763 11:47:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:56.763 11:47:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:56.763 11:47:26 -- common/autotest_common.sh@10 -- # set +x 00:23:56.763 11:47:26 -- nvmf/common.sh@469 -- # nvmfpid=2443706 00:23:56.763 11:47:26 -- nvmf/common.sh@470 -- # waitforlisten 2443706 00:23:56.763 11:47:26 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:57.021 11:47:26 -- common/autotest_common.sh@819 -- # '[' -z 2443706 ']' 00:23:57.021 11:47:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.021 11:47:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:57.021 11:47:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.021 11:47:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:57.021 11:47:26 -- common/autotest_common.sh@10 -- # set +x 00:23:57.021 [2024-07-21 11:47:26.224485] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:23:57.021 [2024-07-21 11:47:26.224534] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:57.021 EAL: No free 2048 kB hugepages reported on node 1 00:23:57.021 [2024-07-21 11:47:26.309967] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:57.021 [2024-07-21 11:47:26.348920] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:57.021 [2024-07-21 11:47:26.349023] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:57.021 [2024-07-21 11:47:26.349034] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:57.021 [2024-07-21 11:47:26.349043] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:57.021 [2024-07-21 11:47:26.349147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:57.021 [2024-07-21 11:47:26.349175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:57.021 [2024-07-21 11:47:26.349215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.021 [2024-07-21 11:47:26.349216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:57.955 11:47:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:57.955 11:47:27 -- common/autotest_common.sh@852 -- # return 0 00:23:57.955 11:47:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:57.955 11:47:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:57.955 11:47:27 -- common/autotest_common.sh@10 -- # set +x 00:23:57.955 11:47:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:57.955 11:47:27 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:57.955 11:47:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:57.955 11:47:27 -- common/autotest_common.sh@10 -- # set +x 00:23:57.955 [2024-07-21 11:47:27.098920] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18957a0/0x1899c90) succeed. 00:23:57.955 [2024-07-21 11:47:27.109213] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1896d90/0x18db320) succeed. 
00:23:57.955 11:47:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:57.955 11:47:27 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:57.955 11:47:27 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:57.955 11:47:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:57.955 11:47:27 -- common/autotest_common.sh@10 -- # set +x 00:23:57.955 11:47:27 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:57.955 11:47:27 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:57.955 11:47:27 -- target/shutdown.sh@28 -- # cat 00:23:57.955 11:47:27 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:57.955 11:47:27 -- target/shutdown.sh@28 -- # cat 00:23:57.955 11:47:27 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:57.955 11:47:27 -- target/shutdown.sh@28 -- # cat 00:23:57.955 11:47:27 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:57.955 11:47:27 -- target/shutdown.sh@28 -- # cat 00:23:57.955 11:47:27 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:57.955 11:47:27 -- target/shutdown.sh@28 -- # cat 00:23:57.955 11:47:27 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:57.955 11:47:27 -- target/shutdown.sh@28 -- # cat 00:23:57.955 11:47:27 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:57.955 11:47:27 -- target/shutdown.sh@28 -- # cat 00:23:57.955 11:47:27 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:57.955 11:47:27 -- target/shutdown.sh@28 -- # cat 00:23:57.955 11:47:27 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:57.955 11:47:27 -- target/shutdown.sh@28 -- # cat 00:23:57.955 11:47:27 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:57.955 11:47:27 -- target/shutdown.sh@28 -- # cat 00:23:57.955 11:47:27 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:57.955 11:47:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:57.955 11:47:27 -- common/autotest_common.sh@10 -- # set +x 00:23:57.955 Malloc1 00:23:57.955 [2024-07-21 11:47:27.331374] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:57.955 Malloc2 00:23:58.214 Malloc3 00:23:58.214 Malloc4 00:23:58.214 Malloc5 00:23:58.214 Malloc6 00:23:58.214 Malloc7 00:23:58.214 Malloc8 00:23:58.491 Malloc9 00:23:58.491 Malloc10 00:23:58.491 11:47:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:58.491 11:47:27 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:58.491 11:47:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:58.491 11:47:27 -- common/autotest_common.sh@10 -- # set +x 00:23:58.491 11:47:27 -- target/shutdown.sh@124 -- # perfpid=2444025 00:23:58.491 11:47:27 -- target/shutdown.sh@125 -- # waitforlisten 2444025 /var/tmp/bdevperf.sock 00:23:58.491 11:47:27 -- common/autotest_common.sh@819 -- # '[' -z 2444025 ']' 00:23:58.491 11:47:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:58.491 11:47:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:58.491 11:47:27 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:58.491 11:47:27 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:58.491 11:47:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:58.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:58.491 11:47:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:58.491 11:47:27 -- nvmf/common.sh@520 -- # config=() 00:23:58.491 11:47:27 -- common/autotest_common.sh@10 -- # set +x 00:23:58.491 11:47:27 -- nvmf/common.sh@520 -- # local subsystem config 00:23:58.491 11:47:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:58.491 11:47:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:58.491 { 00:23:58.491 "params": { 00:23:58.491 "name": "Nvme$subsystem", 00:23:58.491 "trtype": "$TEST_TRANSPORT", 00:23:58.491 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.491 "adrfam": "ipv4", 00:23:58.491 "trsvcid": "$NVMF_PORT", 00:23:58.491 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.491 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.491 "hdgst": ${hdgst:-false}, 00:23:58.491 "ddgst": ${ddgst:-false} 00:23:58.491 }, 00:23:58.491 "method": "bdev_nvme_attach_controller" 00:23:58.491 } 00:23:58.491 EOF 00:23:58.491 )") 00:23:58.491 11:47:27 -- nvmf/common.sh@542 -- # cat 00:23:58.491 11:47:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:58.491 11:47:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:58.491 { 00:23:58.491 "params": { 00:23:58.491 "name": "Nvme$subsystem", 00:23:58.491 "trtype": "$TEST_TRANSPORT", 00:23:58.491 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.491 "adrfam": "ipv4", 00:23:58.491 "trsvcid": "$NVMF_PORT", 00:23:58.491 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.491 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.491 "hdgst": ${hdgst:-false}, 00:23:58.491 "ddgst": ${ddgst:-false} 00:23:58.491 }, 00:23:58.491 "method": "bdev_nvme_attach_controller" 00:23:58.491 } 00:23:58.491 EOF 00:23:58.491 )") 00:23:58.491 11:47:27 -- nvmf/common.sh@542 -- # cat 00:23:58.491 11:47:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:58.491 11:47:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:58.491 { 00:23:58.491 "params": { 00:23:58.491 "name": "Nvme$subsystem", 00:23:58.491 "trtype": "$TEST_TRANSPORT", 00:23:58.491 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.491 "adrfam": "ipv4", 00:23:58.491 "trsvcid": "$NVMF_PORT", 00:23:58.491 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.491 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.491 "hdgst": ${hdgst:-false}, 00:23:58.491 "ddgst": ${ddgst:-false} 00:23:58.491 }, 00:23:58.491 "method": "bdev_nvme_attach_controller" 00:23:58.491 } 00:23:58.491 EOF 00:23:58.491 )") 00:23:58.491 11:47:27 -- nvmf/common.sh@542 -- # cat 00:23:58.491 11:47:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:58.491 11:47:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:58.491 { 00:23:58.491 "params": { 00:23:58.491 "name": "Nvme$subsystem", 00:23:58.491 "trtype": "$TEST_TRANSPORT", 00:23:58.491 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.491 "adrfam": "ipv4", 00:23:58.491 "trsvcid": "$NVMF_PORT", 00:23:58.491 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.491 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.491 "hdgst": ${hdgst:-false}, 00:23:58.491 "ddgst": ${ddgst:-false} 00:23:58.491 }, 00:23:58.491 "method": "bdev_nvme_attach_controller" 00:23:58.491 } 00:23:58.491 EOF 00:23:58.491 )") 00:23:58.491 11:47:27 -- nvmf/common.sh@542 -- # cat 00:23:58.491 11:47:27 -- nvmf/common.sh@522 -- # for subsystem in 
"${@:-1}" 00:23:58.491 11:47:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:58.491 { 00:23:58.491 "params": { 00:23:58.491 "name": "Nvme$subsystem", 00:23:58.491 "trtype": "$TEST_TRANSPORT", 00:23:58.491 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.491 "adrfam": "ipv4", 00:23:58.491 "trsvcid": "$NVMF_PORT", 00:23:58.491 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.491 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.491 "hdgst": ${hdgst:-false}, 00:23:58.491 "ddgst": ${ddgst:-false} 00:23:58.491 }, 00:23:58.491 "method": "bdev_nvme_attach_controller" 00:23:58.491 } 00:23:58.491 EOF 00:23:58.491 )") 00:23:58.491 11:47:27 -- nvmf/common.sh@542 -- # cat 00:23:58.491 11:47:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:58.491 11:47:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:58.491 { 00:23:58.491 "params": { 00:23:58.491 "name": "Nvme$subsystem", 00:23:58.491 "trtype": "$TEST_TRANSPORT", 00:23:58.491 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.491 "adrfam": "ipv4", 00:23:58.491 "trsvcid": "$NVMF_PORT", 00:23:58.491 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.491 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.491 "hdgst": ${hdgst:-false}, 00:23:58.491 "ddgst": ${ddgst:-false} 00:23:58.491 }, 00:23:58.491 "method": "bdev_nvme_attach_controller" 00:23:58.491 } 00:23:58.491 EOF 00:23:58.491 )") 00:23:58.491 11:47:27 -- nvmf/common.sh@542 -- # cat 00:23:58.491 [2024-07-21 11:47:27.821288] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:23:58.492 [2024-07-21 11:47:27.821344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2444025 ] 00:23:58.492 11:47:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:58.492 11:47:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:58.492 { 00:23:58.492 "params": { 00:23:58.492 "name": "Nvme$subsystem", 00:23:58.492 "trtype": "$TEST_TRANSPORT", 00:23:58.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.492 "adrfam": "ipv4", 00:23:58.492 "trsvcid": "$NVMF_PORT", 00:23:58.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.492 "hdgst": ${hdgst:-false}, 00:23:58.492 "ddgst": ${ddgst:-false} 00:23:58.492 }, 00:23:58.492 "method": "bdev_nvme_attach_controller" 00:23:58.492 } 00:23:58.492 EOF 00:23:58.492 )") 00:23:58.492 11:47:27 -- nvmf/common.sh@542 -- # cat 00:23:58.492 11:47:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:58.492 11:47:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:58.492 { 00:23:58.492 "params": { 00:23:58.492 "name": "Nvme$subsystem", 00:23:58.492 "trtype": "$TEST_TRANSPORT", 00:23:58.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.492 "adrfam": "ipv4", 00:23:58.492 "trsvcid": "$NVMF_PORT", 00:23:58.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.492 "hdgst": ${hdgst:-false}, 00:23:58.492 "ddgst": ${ddgst:-false} 00:23:58.492 }, 00:23:58.492 "method": "bdev_nvme_attach_controller" 00:23:58.492 } 00:23:58.492 EOF 00:23:58.492 )") 00:23:58.492 11:47:27 -- nvmf/common.sh@542 -- # cat 00:23:58.492 11:47:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:58.492 11:47:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:58.492 { 00:23:58.492 "params": { 
00:23:58.492 "name": "Nvme$subsystem", 00:23:58.492 "trtype": "$TEST_TRANSPORT", 00:23:58.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.492 "adrfam": "ipv4", 00:23:58.492 "trsvcid": "$NVMF_PORT", 00:23:58.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.492 "hdgst": ${hdgst:-false}, 00:23:58.492 "ddgst": ${ddgst:-false} 00:23:58.492 }, 00:23:58.492 "method": "bdev_nvme_attach_controller" 00:23:58.492 } 00:23:58.492 EOF 00:23:58.492 )") 00:23:58.492 11:47:27 -- nvmf/common.sh@542 -- # cat 00:23:58.492 11:47:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:58.492 11:47:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:58.492 { 00:23:58.492 "params": { 00:23:58.492 "name": "Nvme$subsystem", 00:23:58.492 "trtype": "$TEST_TRANSPORT", 00:23:58.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.492 "adrfam": "ipv4", 00:23:58.492 "trsvcid": "$NVMF_PORT", 00:23:58.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.492 "hdgst": ${hdgst:-false}, 00:23:58.492 "ddgst": ${ddgst:-false} 00:23:58.492 }, 00:23:58.492 "method": "bdev_nvme_attach_controller" 00:23:58.492 } 00:23:58.492 EOF 00:23:58.492 )") 00:23:58.492 11:47:27 -- nvmf/common.sh@542 -- # cat 00:23:58.492 11:47:27 -- nvmf/common.sh@544 -- # jq . 00:23:58.492 11:47:27 -- nvmf/common.sh@545 -- # IFS=, 00:23:58.492 11:47:27 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:58.492 "params": { 00:23:58.492 "name": "Nvme1", 00:23:58.492 "trtype": "rdma", 00:23:58.492 "traddr": "192.168.100.8", 00:23:58.492 "adrfam": "ipv4", 00:23:58.492 "trsvcid": "4420", 00:23:58.492 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.492 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:58.492 "hdgst": false, 00:23:58.492 "ddgst": false 00:23:58.492 }, 00:23:58.492 "method": "bdev_nvme_attach_controller" 00:23:58.492 },{ 00:23:58.492 "params": { 00:23:58.492 "name": "Nvme2", 00:23:58.492 "trtype": "rdma", 00:23:58.492 "traddr": "192.168.100.8", 00:23:58.492 "adrfam": "ipv4", 00:23:58.492 "trsvcid": "4420", 00:23:58.492 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:58.492 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:58.492 "hdgst": false, 00:23:58.492 "ddgst": false 00:23:58.492 }, 00:23:58.492 "method": "bdev_nvme_attach_controller" 00:23:58.492 },{ 00:23:58.492 "params": { 00:23:58.492 "name": "Nvme3", 00:23:58.492 "trtype": "rdma", 00:23:58.492 "traddr": "192.168.100.8", 00:23:58.492 "adrfam": "ipv4", 00:23:58.492 "trsvcid": "4420", 00:23:58.492 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:58.492 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:58.492 "hdgst": false, 00:23:58.492 "ddgst": false 00:23:58.492 }, 00:23:58.492 "method": "bdev_nvme_attach_controller" 00:23:58.492 },{ 00:23:58.492 "params": { 00:23:58.492 "name": "Nvme4", 00:23:58.492 "trtype": "rdma", 00:23:58.492 "traddr": "192.168.100.8", 00:23:58.492 "adrfam": "ipv4", 00:23:58.492 "trsvcid": "4420", 00:23:58.492 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:58.492 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:58.492 "hdgst": false, 00:23:58.492 "ddgst": false 00:23:58.492 }, 00:23:58.492 "method": "bdev_nvme_attach_controller" 00:23:58.492 },{ 00:23:58.492 "params": { 00:23:58.492 "name": "Nvme5", 00:23:58.492 "trtype": "rdma", 00:23:58.492 "traddr": "192.168.100.8", 00:23:58.492 "adrfam": "ipv4", 00:23:58.492 "trsvcid": "4420", 00:23:58.492 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:58.492 "hostnqn": "nqn.2016-06.io.spdk:host5", 
00:23:58.492 "hdgst": false, 00:23:58.492 "ddgst": false 00:23:58.492 }, 00:23:58.492 "method": "bdev_nvme_attach_controller" 00:23:58.492 },{ 00:23:58.492 "params": { 00:23:58.492 "name": "Nvme6", 00:23:58.492 "trtype": "rdma", 00:23:58.492 "traddr": "192.168.100.8", 00:23:58.492 "adrfam": "ipv4", 00:23:58.492 "trsvcid": "4420", 00:23:58.492 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:58.492 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:58.492 "hdgst": false, 00:23:58.492 "ddgst": false 00:23:58.492 }, 00:23:58.492 "method": "bdev_nvme_attach_controller" 00:23:58.492 },{ 00:23:58.492 "params": { 00:23:58.492 "name": "Nvme7", 00:23:58.492 "trtype": "rdma", 00:23:58.492 "traddr": "192.168.100.8", 00:23:58.492 "adrfam": "ipv4", 00:23:58.492 "trsvcid": "4420", 00:23:58.492 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:58.492 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:58.492 "hdgst": false, 00:23:58.492 "ddgst": false 00:23:58.492 }, 00:23:58.492 "method": "bdev_nvme_attach_controller" 00:23:58.492 },{ 00:23:58.492 "params": { 00:23:58.492 "name": "Nvme8", 00:23:58.492 "trtype": "rdma", 00:23:58.492 "traddr": "192.168.100.8", 00:23:58.492 "adrfam": "ipv4", 00:23:58.492 "trsvcid": "4420", 00:23:58.492 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:58.492 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:58.492 "hdgst": false, 00:23:58.492 "ddgst": false 00:23:58.492 }, 00:23:58.492 "method": "bdev_nvme_attach_controller" 00:23:58.492 },{ 00:23:58.492 "params": { 00:23:58.492 "name": "Nvme9", 00:23:58.492 "trtype": "rdma", 00:23:58.492 "traddr": "192.168.100.8", 00:23:58.492 "adrfam": "ipv4", 00:23:58.492 "trsvcid": "4420", 00:23:58.492 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:58.492 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:58.492 "hdgst": false, 00:23:58.492 "ddgst": false 00:23:58.492 }, 00:23:58.492 "method": "bdev_nvme_attach_controller" 00:23:58.492 },{ 00:23:58.492 "params": { 00:23:58.492 "name": "Nvme10", 00:23:58.492 "trtype": "rdma", 00:23:58.492 "traddr": "192.168.100.8", 00:23:58.492 "adrfam": "ipv4", 00:23:58.492 "trsvcid": "4420", 00:23:58.492 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:58.492 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:58.492 "hdgst": false, 00:23:58.492 "ddgst": false 00:23:58.492 }, 00:23:58.492 "method": "bdev_nvme_attach_controller" 00:23:58.492 }' 00:23:58.492 EAL: No free 2048 kB hugepages reported on node 1 00:23:58.492 [2024-07-21 11:47:27.908940] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.748 [2024-07-21 11:47:27.945146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.678 Running I/O for 10 seconds... 
00:24:00.244 11:47:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:00.244 11:47:29 -- common/autotest_common.sh@852 -- # return 0 00:24:00.244 11:47:29 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:00.244 11:47:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:00.244 11:47:29 -- common/autotest_common.sh@10 -- # set +x 00:24:00.244 11:47:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:00.244 11:47:29 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:00.244 11:47:29 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:00.244 11:47:29 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:00.244 11:47:29 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:00.244 11:47:29 -- target/shutdown.sh@57 -- # local ret=1 00:24:00.244 11:47:29 -- target/shutdown.sh@58 -- # local i 00:24:00.244 11:47:29 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:00.244 11:47:29 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:00.244 11:47:29 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:00.244 11:47:29 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:00.244 11:47:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:00.244 11:47:29 -- common/autotest_common.sh@10 -- # set +x 00:24:00.244 11:47:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:00.244 11:47:29 -- target/shutdown.sh@60 -- # read_io_count=461 00:24:00.244 11:47:29 -- target/shutdown.sh@63 -- # '[' 461 -ge 100 ']' 00:24:00.244 11:47:29 -- target/shutdown.sh@64 -- # ret=0 00:24:00.244 11:47:29 -- target/shutdown.sh@65 -- # break 00:24:00.244 11:47:29 -- target/shutdown.sh@69 -- # return 0 00:24:00.244 11:47:29 -- target/shutdown.sh@134 -- # killprocess 2443706 00:24:00.244 11:47:29 -- common/autotest_common.sh@926 -- # '[' -z 2443706 ']' 00:24:00.244 11:47:29 -- common/autotest_common.sh@930 -- # kill -0 2443706 00:24:00.244 11:47:29 -- common/autotest_common.sh@931 -- # uname 00:24:00.244 11:47:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:00.244 11:47:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2443706 00:24:00.244 11:47:29 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:24:00.244 11:47:29 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:24:00.244 11:47:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2443706' 00:24:00.244 killing process with pid 2443706 00:24:00.244 11:47:29 -- common/autotest_common.sh@945 -- # kill 2443706 00:24:00.244 11:47:29 -- common/autotest_common.sh@950 -- # wait 2443706 00:24:00.810 11:47:30 -- target/shutdown.sh@135 -- # nvmfpid= 00:24:00.810 11:47:30 -- target/shutdown.sh@138 -- # sleep 1 00:24:01.385 [2024-07-21 11:47:30.654397] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019257100 was disconnected and freed. reset controller. 
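Note: the waitforio trace above (target/shutdown.sh@50-@69) polls the bdevperf process over its UNIX-domain RPC socket until the first bdev reports enough completed reads, then killprocess tears down the nvmf target (pid 2443706). A hedged reconstruction of the polling loop follows; rpc.py stands in for the test suite's own rpc_cmd wrapper, and the sleep between probes is an assumption (the traced run passed on its first probe with read_io_count=461):

# Hedged reconstruction of the waitforio helper traced above.
waitforio() {
    local rpc_sock=$1 bdev=$2
    [ -z "$rpc_sock" ] && return 1
    [ -z "$bdev" ] && return 1
    local ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        # bdev_get_iostat returns per-bdev counters as JSON; take the read-op count.
        read_io_count=$(rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        # Treat >=100 completed reads as "I/O is flowing", as in the trace.
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 1   # assumption: not visible in the trace
    done
    return $ret
}
# Usage, matching the traced call: waitforio /var/tmp/bdevperf.sock Nvme1n1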
00:24:01.385 [2024-07-21 11:47:30.654575] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.386 [2024-07-21 11:47:30.656881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:01.386 [2024-07-21 11:47:30.659113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.386 [2024-07-21 11:47:30.659171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.386 [2024-07-21 11:47:30.659208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.386 [2024-07-21 11:47:30.659240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.386 [2024-07-21 11:47:30.659273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.386 [2024-07-21 11:47:30.659305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.386 [2024-07-21 11:47:30.659338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.386 [2024-07-21 11:47:30.659369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.386 [2024-07-21 11:47:30.661913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:01.386 [2024-07-21 11:47:30.661956] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:24:01.386 [2024-07-21 11:47:30.662072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.386 [2024-07-21 11:47:30.662108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.386 [2024-07-21 11:47:30.662141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.386 [2024-07-21 11:47:30.662182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.386 [2024-07-21 11:47:30.662216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.386 [2024-07-21 11:47:30.662247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.386 [2024-07-21 11:47:30.662280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.386 [2024-07-21 11:47:30.662311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.386 [2024-07-21 11:47:30.664470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:01.386 [2024-07-21 11:47:30.664509] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:24:01.386 [2024-07-21 11:47:30.664855] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:01.386 [2024-07-21 11:47:30.664870] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:01.386 [2024-07-21 11:47:30.664884] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:24:01.386 [2024-07-21 11:47:30.669074] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:01.386 [2024-07-21 11:47:30.669110] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:01.386 [2024-07-21 11:47:30.679114] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:01.386 [2024-07-21 11:47:30.679152] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:01.386 [2024-07-21 11:47:30.689137] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:01.386 [2024-07-21 11:47:30.689166] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:01.386 [2024-07-21 11:47:30.699162] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:01.386 [2024-07-21 11:47:30.699192] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:01.386 [2024-07-21 11:47:30.704490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:66816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6cf700 len:0x10000 key:0x183200 00:24:01.386 [2024-07-21 11:47:30.704502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.386 [2024-07-21 11:47:30.704545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:66944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b40f700 len:0x10000 key:0x183900 00:24:01.386 [2024-07-21 11:47:30.704555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.386 [2024-07-21 11:47:30.704568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:67072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b70f900 len:0x10000 key:0x183200 00:24:01.386 [2024-07-21 11:47:30.704577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.386 [2024-07-21 11:47:30.704591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b49fb80 len:0x10000 key:0x183900 00:24:01.386 [2024-07-21 11:47:30.704600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.386 [2024-07-21 11:47:30.704615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:67328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b91f980 len:0x10000 key:0x184100 00:24:01.386 [2024-07-21 11:47:30.704628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.386 [2024-07-21 11:47:30.704640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:67456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b99fd80 len:0x10000 key:0x184100 00:24:01.386 [2024-07-21 11:47:30.704649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.386 [2024-07-21 11:47:30.704662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:67584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b94fb00 len:0x10000 key:0x184100 00:24:01.386 [2024-07-21 11:47:30.704671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.386 [2024-07-21 11:47:30.704684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:67712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8df780 len:0x10000 key:0x184100 00:24:01.386 [2024-07-21 11:47:30.704694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.386 [2024-07-21 11:47:30.704706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:67840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b76fc00 len:0x10000 key:0x183200 00:24:01.386 [2024-07-21 11:47:30.704716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.386 [2024-07-21 11:47:30.704728] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:67968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b47fa80 len:0x10000 key:0x183900 00:24:01.386 [2024-07-21 11:47:30.704737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.386 [2024-07-21 11:47:30.704750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:68096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b73fa80 len:0x10000 key:0x183200 00:24:01.386 [2024-07-21 11:47:30.704759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.386 [2024-07-21 11:47:30.704772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:68224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b68f500 len:0x10000 key:0x183200 00:24:01.386 [2024-07-21 11:47:30.704781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.386 [2024-07-21 11:47:30.704794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:68352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7bfe80 len:0x10000 key:0x183200 00:24:01.386 [2024-07-21 11:47:30.704803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.386 [2024-07-21 11:47:30.704815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:68480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7afe00 len:0x10000 key:0x183200 00:24:01.386 [2024-07-21 11:47:30.704824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.386 [2024-07-21 11:47:30.704837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:68608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b63f280 len:0x10000 key:0x183200 00:24:01.386 [2024-07-21 11:47:30.704846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.386 [2024-07-21 11:47:30.704858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:68736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b66f400 len:0x10000 key:0x183200 00:24:01.386 [2024-07-21 11:47:30.704869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.386 [2024-07-21 11:47:30.704881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:68864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b62f200 len:0x10000 key:0x183200 00:24:01.386 [2024-07-21 11:47:30.704891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.386 [2024-07-21 11:47:30.704903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:68992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4bfc80 len:0x10000 key:0x183900 00:24:01.386 [2024-07-21 11:47:30.704912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.386 [2024-07-21 11:47:30.704924] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:69120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b65f380 len:0x10000 key:0x183200 00:24:01.386 [2024-07-21 11:47:30.704933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.386 [2024-07-21 11:47:30.704945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:69248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b72fa00 len:0x10000 key:0x183200 00:24:01.386 [2024-07-21 11:47:30.704955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.386 [2024-07-21 11:47:30.704967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:69376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b92fa00 len:0x10000 key:0x184100 00:24:01.386 [2024-07-21 11:47:30.704976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.386 [2024-07-21 11:47:30.704988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:69504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b61f180 len:0x10000 key:0x183200 00:24:01.386 [2024-07-21 11:47:30.704997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.386 [2024-07-21 11:47:30.705009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:69632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b45f980 len:0x10000 key:0x183900 00:24:01.386 [2024-07-21 11:47:30.705018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.386 [2024-07-21 11:47:30.705030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:69760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9f0000 len:0x10000 key:0x184100 00:24:01.386 [2024-07-21 11:47:30.705039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.386 [2024-07-21 11:47:30.705051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:69888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b60f100 len:0x10000 key:0x183200 00:24:01.387 [2024-07-21 11:47:30.705061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.387 [2024-07-21 11:47:30.705074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:70016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b78fd00 len:0x10000 key:0x183200 00:24:01.387 [2024-07-21 11:47:30.705083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.387 [2024-07-21 11:47:30.705095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:70144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6af600 len:0x10000 key:0x183200 00:24:01.387 [2024-07-21 11:47:30.705105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.387 [2024-07-21 11:47:30.705118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:22 nsid:1 lba:70272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9afe00 len:0x10000 key:0x184100 00:24:01.387 [2024-07-21 11:47:30.705127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.387 [2024-07-21 11:47:30.705139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:70400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b90f900 len:0x10000 key:0x184100 00:24:01.387 [2024-07-21 11:47:30.705148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.387 [2024-07-21 11:47:30.705160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:70528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ef800 len:0x10000 key:0x183200 00:24:01.387 [2024-07-21 11:47:30.705170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.387 [2024-07-21 11:47:30.705182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:70656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b64f300 len:0x10000 key:0x183200 00:24:01.387 [2024-07-21 11:47:30.705191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.387 [2024-07-21 11:47:30.705203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b93fa80 len:0x10000 key:0x184100 00:24:01.387 [2024-07-21 11:47:30.705212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.387 [2024-07-21 11:47:30.705224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6bf680 len:0x10000 key:0x183200 00:24:01.387 [2024-07-21 11:47:30.705233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.387 [2024-07-21 11:47:30.705245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b75fb80 len:0x10000 key:0x183200 00:24:01.387 [2024-07-21 11:47:30.705255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.387 [2024-07-21 11:47:30.705267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b77fc80 len:0x10000 key:0x183200 00:24:01.387 [2024-07-21 11:47:30.705276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.387 [2024-07-21 11:47:30.705289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b69f580 len:0x10000 key:0x183200 00:24:01.387 [2024-07-21 11:47:30.705298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.387 [2024-07-21 11:47:30.705310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71424 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6df780 len:0x10000 key:0x183200 00:24:01.387 [2024-07-21 11:47:30.705319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.387 [2024-07-21 11:47:30.705331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b98fd00 len:0x10000 key:0x184100 00:24:01.387 [2024-07-21 11:47:30.705340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.387 [2024-07-21 11:47:30.705353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b48fb00 len:0x10000 key:0x183900 00:24:01.387 [2024-07-21 11:47:30.705362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.387 [2024-07-21 11:47:30.705374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:71808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b79fd80 len:0x10000 key:0x183200 00:24:01.387 [2024-07-21 11:47:30.705383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.387 [2024-07-21 11:47:30.705396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:71936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b71f980 len:0x10000 key:0x183200 00:24:01.387 [2024-07-21 11:47:30.705405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.387 [2024-07-21 11:47:30.705417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:72064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ff880 len:0x10000 key:0x183200 00:24:01.387 [2024-07-21 11:47:30.705426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.387 [2024-07-21 11:47:30.705438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b41f780 len:0x10000 key:0x183900 00:24:01.387 [2024-07-21 11:47:30.705447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.387 [2024-07-21 11:47:30.705459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000124aa000 len:0x10000 key:0x184400 00:24:01.387 [2024-07-21 11:47:30.705468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.387 [2024-07-21 11:47:30.705483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b7de000 len:0x10000 key:0x184400 00:24:01.387 [2024-07-21 11:47:30.705492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.387 [2024-07-21 11:47:30.705505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62592 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x20000b7ff000 len:0x10000 key:0x184400 00:24:01.387 [2024-07-21 11:47:30.705514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.387 [2024-07-21 11:47:30.705527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012780000 len:0x10000 key:0x184400 00:24:01.387 [2024-07-21 11:47:30.705536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.387 [2024-07-21 11:47:30.705549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d419000 len:0x10000 key:0x184400 00:24:01.387 [2024-07-21 11:47:30.705558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.387 [2024-07-21 11:47:30.705571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:62976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d3f8000 len:0x10000 key:0x184400 00:24:01.387 [2024-07-21 11:47:30.705580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.387 [2024-07-21 11:47:30.705594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e55f000 len:0x10000 key:0x184400 00:24:01.387 [2024-07-21 11:47:30.705603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.387 [2024-07-21 11:47:30.705616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:63616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000112bf000 len:0x10000 key:0x184400 00:24:01.387 [2024-07-21 11:47:30.705629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.387 [2024-07-21 11:47:30.705642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001129e000 len:0x10000 key:0x184400 00:24:01.387 [2024-07-21 11:47:30.705651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.387 [2024-07-21 11:47:30.705663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001231e000 len:0x10000 key:0x184400 00:24:01.387 [2024-07-21 11:47:30.705672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.387 [2024-07-21 11:47:30.705685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:65024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000122fd000 len:0x10000 key:0x184400 00:24:01.387 [2024-07-21 11:47:30.705694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.387 [2024-07-21 11:47:30.705707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:65280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011700000 len:0x10000 key:0x184400 
00:24:01.387 [2024-07-21 11:47:30.705716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.387 [2024-07-21 11:47:30.705729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:65408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000137df000 len:0x10000 key:0x184400 00:24:01.387 [2024-07-21 11:47:30.705738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.387 [2024-07-21 11:47:30.705751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000137be000 len:0x10000 key:0x184400 00:24:01.387 [2024-07-21 11:47:30.705761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.387 [2024-07-21 11:47:30.705774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:65664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001379d000 len:0x10000 key:0x184400 00:24:01.387 [2024-07-21 11:47:30.705784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.387 [2024-07-21 11:47:30.705797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001377c000 len:0x10000 key:0x184400 00:24:01.387 [2024-07-21 11:47:30.705806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.387 [2024-07-21 11:47:30.705819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:66048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013674000 len:0x10000 key:0x184400 00:24:01.387 [2024-07-21 11:47:30.705828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.387 [2024-07-21 11:47:30.705841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013653000 len:0x10000 key:0x184400 00:24:01.387 [2024-07-21 11:47:30.705851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.387 [2024-07-21 11:47:30.705864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:66304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013632000 len:0x10000 key:0x184400 00:24:01.387 [2024-07-21 11:47:30.705873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.387 [2024-07-21 11:47:30.705886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013611000 len:0x10000 key:0x184400 00:24:01.387 [2024-07-21 11:47:30.705895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.387 [2024-07-21 11:47:30.705908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011763000 len:0x10000 key:0x184400 00:24:01.387 [2024-07-21 11:47:30.705917] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.388 [2024-07-21 11:47:30.708553] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:24:01.388 [2024-07-21 11:47:30.710388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.388 [2024-07-21 11:47:30.710408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:16804 cdw0:559a51f0 sqhd:d00c p:1 m:0 dnr:0 00:24:01.388 [2024-07-21 11:47:30.710418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.388 [2024-07-21 11:47:30.710427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:16804 cdw0:559a51f0 sqhd:d00c p:1 m:0 dnr:0 00:24:01.388 [2024-07-21 11:47:30.710453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.388 [2024-07-21 11:47:30.710462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:16804 cdw0:559a51f0 sqhd:d00c p:1 m:0 dnr:0 00:24:01.388 [2024-07-21 11:47:30.710472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.388 [2024-07-21 11:47:30.710481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:16804 cdw0:559a51f0 sqhd:d00c p:1 m:0 dnr:0 00:24:01.388 [2024-07-21 11:47:30.712283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:01.388 [2024-07-21 11:47:30.712326] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
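Note: each aborted-I/O record in the dump above prints the opcode (READ/WRITE), submission queue id (sqid), command id (cid), namespace (nsid), starting LBA and block count, plus the RDMA keyed SGL of the in-flight command: the remote data-block address and the memory key the target used for RDMA access. A small awk sketch for tabulating those fields from a captured log; it assumes one record per line as nvme_qpair.c originally emitted them (the chunked console output above wraps several records per line), and build.log is an illustrative file name:

# Summarize aborted I/O commands from an SPDK log capture.
awk '/nvme_io_qpair_print_command/ {
    for (i = 1; i <= NF; i++) {
        if ($i == "READ" || $i == "WRITE") op = $i
        if ($i ~ /^cid:/)  { split($i, a, ":"); cid = a[2] }
        if ($i ~ /^lba:/)  { split($i, a, ":"); lba = a[2] }
        if ($i == "ADDRESS") addr = $(i + 1)   # keyed SGL data block address
        if ($i ~ /^key:/)  { split($i, a, ":"); key = a[2] }
    }
    printf "%-5s cid:%-3s lba:%-7s addr:%s rkey:%s\n", op, cid, lba, addr, key
}' build.log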
00:24:01.388 [2024-07-21 11:47:30.712378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.388 [2024-07-21 11:47:30.712412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31030 cdw0:559a51f0 sqhd:5600 p:0 m:0 dnr:0 00:24:01.388 [2024-07-21 11:47:30.712444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.388 [2024-07-21 11:47:30.712484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31030 cdw0:559a51f0 sqhd:5600 p:0 m:0 dnr:0 00:24:01.388 [2024-07-21 11:47:30.712493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.388 [2024-07-21 11:47:30.712502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31030 cdw0:559a51f0 sqhd:5600 p:0 m:0 dnr:0 00:24:01.388 [2024-07-21 11:47:30.712514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.388 [2024-07-21 11:47:30.712523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31030 cdw0:559a51f0 sqhd:5600 p:0 m:0 dnr:0 00:24:01.388 [2024-07-21 11:47:30.715098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:01.388 [2024-07-21 11:47:30.715139] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:24:01.388 [2024-07-21 11:47:30.715189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.388 [2024-07-21 11:47:30.715222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31030 cdw0:559a51f0 sqhd:5600 p:0 m:0 dnr:0 00:24:01.388 [2024-07-21 11:47:30.715254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.388 [2024-07-21 11:47:30.715287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31030 cdw0:559a51f0 sqhd:5600 p:0 m:0 dnr:0 00:24:01.388 [2024-07-21 11:47:30.715319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.388 [2024-07-21 11:47:30.715350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31030 cdw0:559a51f0 sqhd:5600 p:0 m:0 dnr:0 00:24:01.388 [2024-07-21 11:47:30.715383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.388 [2024-07-21 11:47:30.715414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31030 cdw0:559a51f0 sqhd:5600 p:0 m:0 dnr:0 00:24:01.388 [2024-07-21 11:47:30.717859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:01.388 [2024-07-21 11:47:30.717900] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
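Note: the "(00/08)" in the completion lines above is the NVMe status printed as status-code-type/status-code in hex: SCT 0x0 selects the generic command status set, where SC 0x08 is "Command Aborted due to SQ Deletion". That is expected here: deleting the queues during controller teardown aborts every pending command, including the admin queue's outstanding ASYNC EVENT REQUESTs. A minimal decoder sketch covering only the codes that appear in this log (the table is deliberately incomplete):

# Decode the sct/sc pair printed as "(SCT/SC)" in the completions above.
decode_nvme_status() {
    local sct=$1 sc=$2
    case "$sct/$sc" in
        00/00) echo "SUCCESS" ;;
        00/08) echo "ABORTED - SQ DELETION" ;;
        *)     echo "unknown (sct=$sct sc=$sc)" ;;
    esac
}
# Example: decode_nvme_status 00 08   ->   ABORTED - SQ DELETION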
00:24:01.388 [2024-07-21 11:47:30.717948] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:01.388 [2024-07-21 11:47:30.717992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.388 [2024-07-21 11:47:30.718023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31030 cdw0:559a51f0 sqhd:5600 p:0 m:0 dnr:0 00:24:01.388 [2024-07-21 11:47:30.718056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.388 [2024-07-21 11:47:30.718088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31030 cdw0:559a51f0 sqhd:5600 p:0 m:0 dnr:0 00:24:01.388 [2024-07-21 11:47:30.718120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.388 [2024-07-21 11:47:30.718151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31030 cdw0:559a51f0 sqhd:5600 p:0 m:0 dnr:0 00:24:01.388 [2024-07-21 11:47:30.718184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.388 [2024-07-21 11:47:30.718214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31030 cdw0:559a51f0 sqhd:5600 p:0 m:0 dnr:0 00:24:01.388 [2024-07-21 11:47:30.720702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:01.388 [2024-07-21 11:47:30.720742] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:24:01.388 [2024-07-21 11:47:30.720793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.388 [2024-07-21 11:47:30.720834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31030 cdw0:559a51f0 sqhd:5600 p:0 m:0 dnr:0 00:24:01.388 [2024-07-21 11:47:30.720868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.388 [2024-07-21 11:47:30.720899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31030 cdw0:559a51f0 sqhd:5600 p:0 m:0 dnr:0 00:24:01.388 [2024-07-21 11:47:30.720931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.388 [2024-07-21 11:47:30.720962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31030 cdw0:559a51f0 sqhd:5600 p:0 m:0 dnr:0 00:24:01.388 [2024-07-21 11:47:30.720995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.388 [2024-07-21 11:47:30.721026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31030 cdw0:559a51f0 sqhd:5600 p:0 m:0 dnr:0 00:24:01.388 [2024-07-21 11:47:30.723257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:01.388 [2024-07-21 11:47:30.723308] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:24:01.388 [2024-07-21 11:47:30.723330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.388 [2024-07-21 11:47:30.723343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31030 cdw0:559a51f0 sqhd:5600 p:0 m:0 dnr:0 00:24:01.388 [2024-07-21 11:47:30.723357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.388 [2024-07-21 11:47:30.723369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31030 cdw0:559a51f0 sqhd:5600 p:0 m:0 dnr:0 00:24:01.388 [2024-07-21 11:47:30.723382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.388 [2024-07-21 11:47:30.723395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31030 cdw0:559a51f0 sqhd:5600 p:0 m:0 dnr:0 00:24:01.388 [2024-07-21 11:47:30.723408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.388 [2024-07-21 11:47:30.723420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31030 cdw0:559a51f0 sqhd:5600 p:0 m:0 dnr:0 00:24:01.388 [2024-07-21 11:47:30.725907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:01.388 [2024-07-21 11:47:30.725947] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:24:01.388 [2024-07-21 11:47:30.725997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.388 [2024-07-21 11:47:30.726030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31030 cdw0:559a51f0 sqhd:5600 p:0 m:0 dnr:0 00:24:01.388 [2024-07-21 11:47:30.726062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.388 [2024-07-21 11:47:30.726093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31030 cdw0:559a51f0 sqhd:5600 p:0 m:0 dnr:0 00:24:01.388 [2024-07-21 11:47:30.726125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.388 [2024-07-21 11:47:30.726157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31030 cdw0:559a51f0 sqhd:5600 p:0 m:0 dnr:0 00:24:01.388 [2024-07-21 11:47:30.726196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.388 [2024-07-21 11:47:30.726228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31030 cdw0:559a51f0 sqhd:5600 p:0 m:0 dnr:0 00:24:01.388 [2024-07-21 11:47:30.728531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:01.388 [2024-07-21 11:47:30.728571] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:24:01.388 [2024-07-21 11:47:30.728869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000702f180 len:0x10000 key:0x183600 00:24:01.388 [2024-07-21 11:47:30.728886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.388 [2024-07-21 11:47:30.728906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000071cfe80 len:0x10000 key:0x183600 00:24:01.388 [2024-07-21 11:47:30.728920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.388 [2024-07-21 11:47:30.728938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:78336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e2eac0 len:0x10000 key:0x183f00 00:24:01.388 [2024-07-21 11:47:30.728951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.388 [2024-07-21 11:47:30.728968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:78464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000717fc00 len:0x10000 key:0x183600 00:24:01.388 [2024-07-21 11:47:30.728982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.388 [2024-07-21 11:47:30.728999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000071afd80 len:0x10000 key:0x183600 00:24:01.388 [2024-07-21 11:47:30.729012] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.388 [2024-07-21 11:47:30.729030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002774c0 len:0x10000 key:0x183d00 00:24:01.388 [2024-07-21 11:47:30.729043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.388 [2024-07-21 11:47:30.729060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000713fa00 len:0x10000 key:0x183600 00:24:01.389 [2024-07-21 11:47:30.729073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.389 [2024-07-21 11:47:30.729091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000714fa80 len:0x10000 key:0x183600 00:24:01.389 [2024-07-21 11:47:30.729104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.389 [2024-07-21 11:47:30.729121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070ff800 len:0x10000 key:0x183600 00:24:01.389 [2024-07-21 11:47:30.729134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.389 [2024-07-21 11:47:30.729152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002372c0 len:0x10000 key:0x183d00 00:24:01.389 [2024-07-21 11:47:30.729171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.389 [2024-07-21 11:47:30.729188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000718fc80 len:0x10000 key:0x183600 00:24:01.389 [2024-07-21 11:47:30.729202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.389 [2024-07-21 11:47:30.729219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000708f480 len:0x10000 key:0x183600 00:24:01.389 [2024-07-21 11:47:30.729232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.389 [2024-07-21 11:47:30.729250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:79616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b22f780 len:0x10000 key:0x184400 00:24:01.389 [2024-07-21 11:47:30.729262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.389 [2024-07-21 11:47:30.729280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b23f800 len:0x10000 key:0x184400 00:24:01.389 [2024-07-21 11:47:30.729293] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.389 [2024-07-21 11:47:30.729310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e4ebc0 len:0x10000 key:0x183f00 00:24:01.389 [2024-07-21 11:47:30.729323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.389 [2024-07-21 11:47:30.729341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000071bfe00 len:0x10000 key:0x183600 00:24:01.389 [2024-07-21 11:47:30.729354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.389 [2024-07-21 11:47:30.729371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b2efd80 len:0x10000 key:0x184400 00:24:01.389 [2024-07-21 11:47:30.729385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.389 [2024-07-21 11:47:30.729402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000710f880 len:0x10000 key:0x183600 00:24:01.389 [2024-07-21 11:47:30.729415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.389 [2024-07-21 11:47:30.729432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:80384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b2dfd00 len:0x10000 key:0x184400 00:24:01.389 [2024-07-21 11:47:30.729445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.389 [2024-07-21 11:47:30.729462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:80512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000703f200 len:0x10000 key:0x183600 00:24:01.389 [2024-07-21 11:47:30.729475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.389 [2024-07-21 11:47:30.729493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b27fa00 len:0x10000 key:0x184400 00:24:01.389 [2024-07-21 11:47:30.729506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.389 [2024-07-21 11:47:30.729525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:80768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000712f980 len:0x10000 key:0x183600 00:24:01.389 [2024-07-21 11:47:30.729538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.389 [2024-07-21 11:47:30.729556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:80896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b24f880 len:0x10000 key:0x184400 00:24:01.389 [2024-07-21 11:47:30.729568] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.389 [2024-07-21 11:47:30.729586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:81024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000701f100 len:0x10000 key:0x183600 00:24:01.389 [2024-07-21 11:47:30.729599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.389 [2024-07-21 11:47:30.729617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070cf680 len:0x10000 key:0x183600 00:24:01.389 [2024-07-21 11:47:30.729690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.389 [2024-07-21 11:47:30.729708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000704f280 len:0x10000 key:0x183600 00:24:01.389 [2024-07-21 11:47:30.729721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.389 [2024-07-21 11:47:30.729738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b2bfc00 len:0x10000 key:0x184400 00:24:01.389 [2024-07-21 11:47:30.729751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.389 [2024-07-21 11:47:30.729769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002573c0 len:0x10000 key:0x183d00 00:24:01.389 [2024-07-21 11:47:30.729781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.389 [2024-07-21 11:47:30.729799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:81664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000706f380 len:0x10000 key:0x183600 00:24:01.389 [2024-07-21 11:47:30.729812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.389 [2024-07-21 11:47:30.729829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:81792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002b76c0 len:0x10000 key:0x183d00 00:24:01.389 [2024-07-21 11:47:30.729842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.389 [2024-07-21 11:47:30.729859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b28fa80 len:0x10000 key:0x184400 00:24:01.389 [2024-07-21 11:47:30.729872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.389 [2024-07-21 11:47:30.729889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:82048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000287540 len:0x10000 key:0x183d00 00:24:01.389 [2024-07-21 11:47:30.729902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.389 [2024-07-21 11:47:30.729921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000716fb80 len:0x10000 key:0x183600 00:24:01.389 [2024-07-21 11:47:30.729935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.389 [2024-07-21 11:47:30.729952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b2cfc80 len:0x10000 key:0x184400 00:24:01.389 [2024-07-21 11:47:30.729965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.389 [2024-07-21 11:47:30.729982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000719fd00 len:0x10000 key:0x183600 00:24:01.389 [2024-07-21 11:47:30.729995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.389 [2024-07-21 11:47:30.730013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000707f400 len:0x10000 key:0x183600 00:24:01.389 [2024-07-21 11:47:30.730025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.389 [2024-07-21 11:47:30.730043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b26f980 len:0x10000 key:0x184400 00:24:01.389 [2024-07-21 11:47:30.730056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.389 [2024-07-21 11:47:30.730073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000071eff80 len:0x10000 key:0x183600 00:24:01.389 [2024-07-21 11:47:30.730086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.389 [2024-07-21 11:47:30.730103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:82944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000207140 len:0x10000 key:0x183d00 00:24:01.389 [2024-07-21 11:47:30.730116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.389 [2024-07-21 11:47:30.730133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002c7740 len:0x10000 key:0x183d00 00:24:01.389 [2024-07-21 11:47:30.730146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.389 [2024-07-21 11:47:30.730164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:83200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000071dff00 len:0x10000 key:0x183600 00:24:01.389 [2024-07-21 11:47:30.730176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 
p:0 m:0 dnr:0 00:24:01.389 [2024-07-21 11:47:30.730194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000227240 len:0x10000 key:0x183d00 00:24:01.389 [2024-07-21 11:47:30.730207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.389 [2024-07-21 11:47:30.730225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:83456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070df700 len:0x10000 key:0x183600 00:24:01.389 [2024-07-21 11:47:30.730238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.389 [2024-07-21 11:47:30.730255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:83584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b29fb00 len:0x10000 key:0x184400 00:24:01.389 [2024-07-21 11:47:30.730270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.389 [2024-07-21 11:47:30.730287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e1ea40 len:0x10000 key:0x183f00 00:24:01.389 [2024-07-21 11:47:30.730300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.389 [2024-07-21 11:47:30.730318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000267440 len:0x10000 key:0x183d00 00:24:01.390 [2024-07-21 11:47:30.730330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.390 [2024-07-21 11:47:30.730348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:72320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001169d000 len:0x10000 key:0x184400 00:24:01.390 [2024-07-21 11:47:30.730361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.390 [2024-07-21 11:47:30.730380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:72448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000116be000 len:0x10000 key:0x184400 00:24:01.390 [2024-07-21 11:47:30.730393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.390 [2024-07-21 11:47:30.730411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:72576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000132f9000 len:0x10000 key:0x184400 00:24:01.390 [2024-07-21 11:47:30.730424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.390 [2024-07-21 11:47:30.730442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:72704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011931000 len:0x10000 key:0x184400 00:24:01.390 [2024-07-21 11:47:30.730455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.390 [2024-07-21 
11:47:30.730473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:72832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011910000 len:0x10000 key:0x184400 00:24:01.390 [2024-07-21 11:47:30.730486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.390 [2024-07-21 11:47:30.730504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:73216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001275f000 len:0x10000 key:0x184400 00:24:01.390 [2024-07-21 11:47:30.730517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.390 [2024-07-21 11:47:30.730535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:73856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001273e000 len:0x10000 key:0x184400 00:24:01.390 [2024-07-21 11:47:30.730548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.390 [2024-07-21 11:47:30.730566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:74240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001271d000 len:0x10000 key:0x184400 00:24:01.390 [2024-07-21 11:47:30.730579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.390 [2024-07-21 11:47:30.730597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:74496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000126fc000 len:0x10000 key:0x184400 00:24:01.390 [2024-07-21 11:47:30.730612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.390 [2024-07-21 11:47:30.730636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:75008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000126db000 len:0x10000 key:0x184400 00:24:01.390 [2024-07-21 11:47:30.730649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.390 [2024-07-21 11:47:30.730668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:75136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000126ba000 len:0x10000 key:0x184400 00:24:01.390 [2024-07-21 11:47:30.730681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.390 [2024-07-21 11:47:30.730698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:75392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012825000 len:0x10000 key:0x184400 00:24:01.390 [2024-07-21 11:47:30.730711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.390 [2024-07-21 11:47:30.730730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:75520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012804000 len:0x10000 key:0x184400 00:24:01.390 [2024-07-21 11:47:30.730743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.390 [2024-07-21 11:47:30.730761] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:75648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000127e3000 len:0x10000 key:0x184400 00:24:01.390 [2024-07-21 11:47:30.730774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.390 [2024-07-21 11:47:30.730792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:75904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000127c2000 len:0x10000 key:0x184400 00:24:01.390 [2024-07-21 11:47:30.730804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.390 [2024-07-21 11:47:30.730822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:76416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000127a1000 len:0x10000 key:0x184400 00:24:01.390 [2024-07-21 11:47:30.730835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.390 [2024-07-21 11:47:30.730854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:77696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012570000 len:0x10000 key:0x184400 00:24:01.390 [2024-07-21 11:47:30.730867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.390 [2024-07-21 11:47:30.730885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:77824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b5ef000 len:0x10000 key:0x184400 00:24:01.390 [2024-07-21 11:47:30.730898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.390 [2024-07-21 11:47:30.734065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:78464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000008df100 len:0x10000 key:0x183c00 00:24:01.390 [2024-07-21 11:47:30.734115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.390 [2024-07-21 11:47:30.734165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:78592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000087ee00 len:0x10000 key:0x183c00 00:24:01.390 [2024-07-21 11:47:30.734197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.390 [2024-07-21 11:47:30.734248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194ef800 len:0x10000 key:0x182a00 00:24:01.390 [2024-07-21 11:47:30.734281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.390 [2024-07-21 11:47:30.734324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001958fd00 len:0x10000 key:0x182a00 00:24:01.390 [2024-07-21 11:47:30.734356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.390 [2024-07-21 11:47:30.734399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:13 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004bf480 len:0x10000 key:0x183b00 00:24:01.390 [2024-07-21 11:47:30.734431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.390 [2024-07-21 11:47:30.734465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000089ef00 len:0x10000 key:0x183c00 00:24:01.390 [2024-07-21 11:47:30.734478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.390 [2024-07-21 11:47:30.734495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004df580 len:0x10000 key:0x183b00 00:24:01.390 [2024-07-21 11:47:30.734508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.390 [2024-07-21 11:47:30.734526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001953fa80 len:0x10000 key:0x182a00 00:24:01.390 [2024-07-21 11:47:30.734538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.390 [2024-07-21 11:47:30.734556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:79488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000057fa80 len:0x10000 key:0x183b00 00:24:01.390 [2024-07-21 11:47:30.734568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.390 [2024-07-21 11:47:30.734586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001954fb00 len:0x10000 key:0x182a00 00:24:01.390 [2024-07-21 11:47:30.734599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.390 [2024-07-21 11:47:30.734616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000049f380 len:0x10000 key:0x183b00 00:24:01.390 [2024-07-21 11:47:30.734636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.391 [2024-07-21 11:47:30.734654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000058fb00 len:0x10000 key:0x183b00 00:24:01.391 [2024-07-21 11:47:30.734667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.391 [2024-07-21 11:47:30.734684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004ef600 len:0x10000 key:0x183b00 00:24:01.391 [2024-07-21 11:47:30.734697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.391 [2024-07-21 11:47:30.734717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80128 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000046f200 len:0x10000 key:0x183b00 00:24:01.391 [2024-07-21 11:47:30.734729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.391 [2024-07-21 11:47:30.734747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000051f780 len:0x10000 key:0x183b00 00:24:01.391 [2024-07-21 11:47:30.734760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.391 [2024-07-21 11:47:30.734777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:80384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000044f100 len:0x10000 key:0x183b00 00:24:01.391 [2024-07-21 11:47:30.734790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.391 [2024-07-21 11:47:30.734807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:80512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000059fb80 len:0x10000 key:0x183b00 00:24:01.391 [2024-07-21 11:47:30.734820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.391 [2024-07-21 11:47:30.734838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000053f880 len:0x10000 key:0x183b00 00:24:01.391 [2024-07-21 11:47:30.734851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.391 [2024-07-21 11:47:30.734868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:80768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000083ec00 len:0x10000 key:0x183c00 00:24:01.391 [2024-07-21 11:47:30.734882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.391 [2024-07-21 11:47:30.734899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000008bf000 len:0x10000 key:0x183c00 00:24:01.391 [2024-07-21 11:47:30.734912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.391 [2024-07-21 11:47:30.734930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:81024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000056fa00 len:0x10000 key:0x183b00 00:24:01.391 [2024-07-21 11:47:30.734943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.391 [2024-07-21 11:47:30.734960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000082eb80 len:0x10000 key:0x183c00 00:24:01.391 [2024-07-21 11:47:30.734973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.391 [2024-07-21 11:47:30.734991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x2000004cf500 len:0x10000 key:0x183b00 00:24:01.391 [2024-07-21 11:47:30.735004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.391 [2024-07-21 11:47:30.735021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000041ef80 len:0x10000 key:0x183b00 00:24:01.391 [2024-07-21 11:47:30.735034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.391 [2024-07-21 11:47:30.735051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000050f700 len:0x10000 key:0x183b00 00:24:01.391 [2024-07-21 11:47:30.735066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.391 [2024-07-21 11:47:30.735083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:81664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004ff680 len:0x10000 key:0x183b00 00:24:01.391 [2024-07-21 11:47:30.735096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.391 [2024-07-21 11:47:30.735114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:81792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000085ed00 len:0x10000 key:0x183c00 00:24:01.391 [2024-07-21 11:47:30.735127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.391 [2024-07-21 11:47:30.735144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001955fb80 len:0x10000 key:0x182a00 00:24:01.391 [2024-07-21 11:47:30.735157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.391 [2024-07-21 11:47:30.735174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:82048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005dfd80 len:0x10000 key:0x183b00 00:24:01.391 [2024-07-21 11:47:30.735187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.391 [2024-07-21 11:47:30.735204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001956fc00 len:0x10000 key:0x182a00 00:24:01.391 [2024-07-21 11:47:30.735217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.391 [2024-07-21 11:47:30.735235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005bfc80 len:0x10000 key:0x183b00 00:24:01.391 [2024-07-21 11:47:30.735247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.391 [2024-07-21 11:47:30.735266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000080ea80 len:0x10000 
key:0x183c00 00:24:01.391 [2024-07-21 11:47:30.735278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.391 [2024-07-21 11:47:30.735296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000040ef00 len:0x10000 key:0x183b00 00:24:01.391 [2024-07-21 11:47:30.735309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.391 [2024-07-21 11:47:30.735326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000086ed80 len:0x10000 key:0x183c00 00:24:01.391 [2024-07-21 11:47:30.735339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.391 [2024-07-21 11:47:30.735357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005cfd00 len:0x10000 key:0x183b00 00:24:01.391 [2024-07-21 11:47:30.735370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.391 [2024-07-21 11:47:30.735387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000043f080 len:0x10000 key:0x183b00 00:24:01.391 [2024-07-21 11:47:30.735401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.391 [2024-07-21 11:47:30.735419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000195dff80 len:0x10000 key:0x182a00 00:24:01.391 [2024-07-21 11:47:30.735432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.391 [2024-07-21 11:47:30.735449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:83200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000088ee80 len:0x10000 key:0x183c00 00:24:01.391 [2024-07-21 11:47:30.735462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.391 [2024-07-21 11:47:30.735479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005afc00 len:0x10000 key:0x183b00 00:24:01.391 [2024-07-21 11:47:30.735492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.391 [2024-07-21 11:47:30.735510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:83456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000054f900 len:0x10000 key:0x183b00 00:24:01.391 [2024-07-21 11:47:30.735523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.391 [2024-07-21 11:47:30.735541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:83584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001957fc80 len:0x10000 key:0x182a00 00:24:01.391 [2024-07-21 
11:47:30.735554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.391 [2024-07-21 11:47:30.735571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004af400 len:0x10000 key:0x183b00 00:24:01.391 [2024-07-21 11:47:30.735584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.391 [2024-07-21 11:47:30.735601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000195f0000 len:0x10000 key:0x182a00 00:24:01.391 [2024-07-21 11:47:30.735614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.391 [2024-07-21 11:47:30.735635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000195afe00 len:0x10000 key:0x182a00 00:24:01.391 [2024-07-21 11:47:30.735648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.391 [2024-07-21 11:47:30.735665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000042f000 len:0x10000 key:0x183b00 00:24:01.391 [2024-07-21 11:47:30.735678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.391 [2024-07-21 11:47:30.735695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000084ec80 len:0x10000 key:0x183c00 00:24:01.391 [2024-07-21 11:47:30.735708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.391 [2024-07-21 11:47:30.735725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000081eb00 len:0x10000 key:0x183c00 00:24:01.391 [2024-07-21 11:47:30.735738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.391 [2024-07-21 11:47:30.735758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000195cff00 len:0x10000 key:0x182a00 00:24:01.391 [2024-07-21 11:47:30.735770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.391 [2024-07-21 11:47:30.735788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001952fa00 len:0x10000 key:0x182a00 00:24:01.391 [2024-07-21 11:47:30.735801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.391 [2024-07-21 11:47:30.735818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:73216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c525000 len:0x10000 key:0x184400 00:24:01.391 [2024-07-21 11:47:30.735831] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.392 [2024-07-21 11:47:30.735849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:73856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c546000 len:0x10000 key:0x184400 00:24:01.392 [2024-07-21 11:47:30.735862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.392 [2024-07-21 11:47:30.735880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:74240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000118ce000 len:0x10000 key:0x184400 00:24:01.392 [2024-07-21 11:47:30.735893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.392 [2024-07-21 11:47:30.735911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:74496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000118ef000 len:0x10000 key:0x184400 00:24:01.392 [2024-07-21 11:47:30.735924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.392 [2024-07-21 11:47:30.735942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000114f0000 len:0x10000 key:0x184400 00:24:01.392 [2024-07-21 11:47:30.735955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.392 [2024-07-21 11:47:30.735973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:75136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000da07000 len:0x10000 key:0x184400 00:24:01.392 [2024-07-21 11:47:30.735986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.392 [2024-07-21 11:47:30.736004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:75392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001294e000 len:0x10000 key:0x184400 00:24:01.392 [2024-07-21 11:47:30.736017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.392 [2024-07-21 11:47:30.736035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001292d000 len:0x10000 key:0x184400 00:24:01.392 [2024-07-21 11:47:30.736048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.392 [2024-07-21 11:47:30.736066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:75648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001290c000 len:0x10000 key:0x184400 00:24:01.392 [2024-07-21 11:47:30.736079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.392 [2024-07-21 11:47:30.736099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000128eb000 len:0x10000 key:0x184400 00:24:01.392 [2024-07-21 11:47:30.736112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.392 [2024-07-21 11:47:30.736130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000128ca000 len:0x10000 key:0x184400 00:24:01.392 [2024-07-21 11:47:30.736142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.392 [2024-07-21 11:47:30.736160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:77696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012a35000 len:0x10000 key:0x184400 00:24:01.392 [2024-07-21 11:47:30.736173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.392 [2024-07-21 11:47:30.736192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:77824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012a14000 len:0x10000 key:0x184400 00:24:01.392 [2024-07-21 11:47:30.736204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.392 [2024-07-21 11:47:30.736223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000129f3000 len:0x10000 key:0x184400 00:24:01.392 [2024-07-21 11:47:30.736236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.392 [2024-07-21 11:47:30.736254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000129d2000 len:0x10000 key:0x184400 00:24:01.392 [2024-07-21 11:47:30.736267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.392 [2024-07-21 11:47:30.739469] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256c80 was disconnected and freed. reset controller. 00:24:01.392 [2024-07-21 11:47:30.739516] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:01.392 [2024-07-21 11:47:30.739559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001997fc80 len:0x10000 key:0x182c00 00:24:01.392 [2024-07-21 11:47:30.739592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.392 [2024-07-21 11:47:30.739653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001969fb80 len:0x10000 key:0x182b00 00:24:01.392 [2024-07-21 11:47:30.739695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.392 [2024-07-21 11:47:30.739713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001989f580 len:0x10000 key:0x182c00 00:24:01.392 [2024-07-21 11:47:30.739726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.392 [2024-07-21 11:47:30.739743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001948f500 len:0x10000 key:0x182a00 00:24:01.392 [2024-07-21 11:47:30.739756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.392 [2024-07-21 11:47:30.739773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:81792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001983f280 len:0x10000 key:0x182c00 00:24:01.392 [2024-07-21 11:47:30.739789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.392 [2024-07-21 11:47:30.739807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198bf680 len:0x10000 key:0x182c00 00:24:01.392 [2024-07-21 11:47:30.739820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.392 [2024-07-21 11:47:30.739838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199afe00 len:0x10000 key:0x182c00 00:24:01.392 [2024-07-21 11:47:30.739851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.392 [2024-07-21 11:47:30.739868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001996fc00 len:0x10000 key:0x182c00 00:24:01.392 [2024-07-21 11:47:30.739881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.392 [2024-07-21 11:47:30.739898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199bfe80 len:0x10000 key:0x182c00 00:24:01.392 [2024-07-21 11:47:30.739911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.392 [2024-07-21 
11:47:30.739928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001964f900 len:0x10000 key:0x182b00 00:24:01.392 [2024-07-21 11:47:30.739941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.392 [2024-07-21 11:47:30.739959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198af600 len:0x10000 key:0x182c00 00:24:01.392 [2024-07-21 11:47:30.739972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.392 [2024-07-21 11:47:30.739989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001966fa00 len:0x10000 key:0x182b00 00:24:01.392 [2024-07-21 11:47:30.740002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.392 [2024-07-21 11:47:30.740020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196cfd00 len:0x10000 key:0x182b00 00:24:01.392 [2024-07-21 11:47:30.740033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.392 [2024-07-21 11:47:30.740050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:82944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001943f280 len:0x10000 key:0x182a00 00:24:01.392 [2024-07-21 11:47:30.740063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.392 [2024-07-21 11:47:30.740080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198cf700 len:0x10000 key:0x182c00 00:24:01.392 [2024-07-21 11:47:30.740093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.392 [2024-07-21 11:47:30.740110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:83200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bf0000 len:0x10000 key:0x182d00 00:24:01.392 [2024-07-21 11:47:30.740123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.392 [2024-07-21 11:47:30.740142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196dfd80 len:0x10000 key:0x182b00 00:24:01.392 [2024-07-21 11:47:30.740155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.392 [2024-07-21 11:47:30.740173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:83456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001947f480 len:0x10000 key:0x182a00 00:24:01.392 [2024-07-21 11:47:30.740186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.392 [2024-07-21 11:47:30.740203] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001995fb80 len:0x10000 key:0x182c00 00:24:01.392 [2024-07-21 11:47:30.740216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.392 [2024-07-21 11:47:30.740233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001981f180 len:0x10000 key:0x182c00 00:24:01.392 [2024-07-21 11:47:30.740246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.392 [2024-07-21 11:47:30.740264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001990f900 len:0x10000 key:0x182c00 00:24:01.392 [2024-07-21 11:47:30.740277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.392 [2024-07-21 11:47:30.740295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bcff00 len:0x10000 key:0x182d00 00:24:01.392 [2024-07-21 11:47:30.740308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.392 [2024-07-21 11:47:30.740325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001941f180 len:0x10000 key:0x182a00 00:24:01.392 [2024-07-21 11:47:30.740338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.392 [2024-07-21 11:47:30.740355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194df780 len:0x10000 key:0x182a00 00:24:01.392 [2024-07-21 11:47:30.740368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.392 [2024-07-21 11:47:30.740385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001965f980 len:0x10000 key:0x182b00 00:24:01.393 [2024-07-21 11:47:30.740398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.393 [2024-07-21 11:47:30.740415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bbfe80 len:0x10000 key:0x182d00 00:24:01.393 [2024-07-21 11:47:30.740428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.393 [2024-07-21 11:47:30.740446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001960f700 len:0x10000 key:0x182b00 00:24:01.393 [2024-07-21 11:47:30.740459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.393 [2024-07-21 11:47:30.740478] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198ef800 len:0x10000 key:0x182c00 00:24:01.393 [2024-07-21 11:47:30.740491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.393 [2024-07-21 11:47:30.740508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001988f500 len:0x10000 key:0x182c00 00:24:01.393 [2024-07-21 11:47:30.740521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.393 [2024-07-21 11:47:30.740538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001946f400 len:0x10000 key:0x182a00 00:24:01.393 [2024-07-21 11:47:30.740551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.393 [2024-07-21 11:47:30.740568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001985f380 len:0x10000 key:0x182c00 00:24:01.393 [2024-07-21 11:47:30.740581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.393 [2024-07-21 11:47:30.740599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001987f480 len:0x10000 key:0x182c00 00:24:01.393 [2024-07-21 11:47:30.740612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.393 [2024-07-21 11:47:30.740632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194bf680 len:0x10000 key:0x182a00 00:24:01.393 [2024-07-21 11:47:30.740646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.393 [2024-07-21 11:47:30.740663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001949f580 len:0x10000 key:0x182a00 00:24:01.393 [2024-07-21 11:47:30.740676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.393 [2024-07-21 11:47:30.740694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199f0000 len:0x10000 key:0x182c00 00:24:01.393 [2024-07-21 11:47:30.740706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.393 [2024-07-21 11:47:30.740724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bdff80 len:0x10000 key:0x182d00 00:24:01.393 [2024-07-21 11:47:30.740737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.393 [2024-07-21 11:47:30.740754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 
lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196afc00 len:0x10000 key:0x182b00 00:24:01.393 [2024-07-21 11:47:30.740767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.393 [2024-07-21 11:47:30.740784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001994fb00 len:0x10000 key:0x182c00 00:24:01.393 [2024-07-21 11:47:30.740797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.393 [2024-07-21 11:47:30.740815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198df780 len:0x10000 key:0x182c00 00:24:01.393 [2024-07-21 11:47:30.740830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.393 [2024-07-21 11:47:30.740849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001993fa80 len:0x10000 key:0x182c00 00:24:01.393 [2024-07-21 11:47:30.740862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.393 [2024-07-21 11:47:30.740880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001968fb00 len:0x10000 key:0x182b00 00:24:01.393 [2024-07-21 11:47:30.740893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.393 [2024-07-21 11:47:30.740910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196bfc80 len:0x10000 key:0x182b00 00:24:01.393 [2024-07-21 11:47:30.740923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.393 [2024-07-21 11:47:30.740940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001963f880 len:0x10000 key:0x182b00 00:24:01.393 [2024-07-21 11:47:30.740954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.393 [2024-07-21 11:47:30.740971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001961f780 len:0x10000 key:0x182b00 00:24:01.393 [2024-07-21 11:47:30.740984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.393 [2024-07-21 11:47:30.741001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001967fa80 len:0x10000 key:0x182b00 00:24:01.393 [2024-07-21 11:47:30.741014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.393 [2024-07-21 11:47:30.741032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x20001986f400 len:0x10000 key:0x182c00 00:24:01.393 [2024-07-21 11:47:30.741045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.393 [2024-07-21 11:47:30.741062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c81c000 len:0x10000 key:0x184400 00:24:01.393 [2024-07-21 11:47:30.741075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.393 [2024-07-21 11:47:30.741094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:76416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c83d000 len:0x10000 key:0x184400 00:24:01.393 [2024-07-21 11:47:30.741107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.393 [2024-07-21 11:47:30.741125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:77696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dbd5000 len:0x10000 key:0x184400 00:24:01.393 [2024-07-21 11:47:30.741138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.393 [2024-07-21 11:47:30.741156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:77824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dbb4000 len:0x10000 key:0x184400 00:24:01.393 [2024-07-21 11:47:30.741172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.393 [2024-07-21 11:47:30.741191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000db93000 len:0x10000 key:0x184400 00:24:01.393 [2024-07-21 11:47:30.741204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.393 [2024-07-21 11:47:30.741222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000db72000 len:0x10000 key:0x184400 00:24:01.393 [2024-07-21 11:47:30.741235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.393 [2024-07-21 11:47:30.741253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012b5e000 len:0x10000 key:0x184400 00:24:01.393 [2024-07-21 11:47:30.741266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.393 [2024-07-21 11:47:30.741285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012b3d000 len:0x10000 key:0x184400 00:24:01.393 [2024-07-21 11:47:30.741298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.393 [2024-07-21 11:47:30.741316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012b1c000 len:0x10000 
key:0x184400 00:24:01.393 [2024-07-21 11:47:30.741329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.393 [2024-07-21 11:47:30.741348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012afb000 len:0x10000 key:0x184400 00:24:01.393 [2024-07-21 11:47:30.741361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.393 [2024-07-21 11:47:30.741379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012ada000 len:0x10000 key:0x184400 00:24:01.393 [2024-07-21 11:47:30.741392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.393 [2024-07-21 11:47:30.741410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012c45000 len:0x10000 key:0x184400 00:24:01.393 [2024-07-21 11:47:30.741423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.393 [2024-07-21 11:47:30.741441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012c24000 len:0x10000 key:0x184400 00:24:01.393 [2024-07-21 11:47:30.741454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.393 [2024-07-21 11:47:30.741472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012c03000 len:0x10000 key:0x184400 00:24:01.393 [2024-07-21 11:47:30.741485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.393 [2024-07-21 11:47:30.741503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012be2000 len:0x10000 key:0x184400 00:24:01.393 [2024-07-21 11:47:30.741516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.393 [2024-07-21 11:47:30.741536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012bc1000 len:0x10000 key:0x184400 00:24:01.393 [2024-07-21 11:47:30.741549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.393 [2024-07-21 11:47:30.741567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b7bd000 len:0x10000 key:0x184400 00:24:01.393 [2024-07-21 11:47:30.741580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.393 [2024-07-21 11:47:30.741598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b79c000 len:0x10000 key:0x184400 00:24:01.394 [2024-07-21 
11:47:30.741611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.394 [2024-07-21 11:47:30.744783] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256a40 was disconnected and freed. reset controller. 00:24:01.394 [2024-07-21 11:47:30.744803] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:01.394 [2024-07-21 11:47:30.744822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d7fc80 len:0x10000 key:0x182e00 00:24:01.394 [2024-07-21 11:47:30.744836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.394 [2024-07-21 11:47:30.744857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c0f100 len:0x10000 key:0x182e00 00:24:01.394 [2024-07-21 11:47:30.744870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.394 [2024-07-21 11:47:30.744888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c3f280 len:0x10000 key:0x182e00 00:24:01.394 [2024-07-21 11:47:30.744901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.394 [2024-07-21 11:47:30.744919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019cff880 len:0x10000 key:0x182e00 00:24:01.394 [2024-07-21 11:47:30.744931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.394 [2024-07-21 11:47:30.744949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f0f900 len:0x10000 key:0x182f00 00:24:01.394 [2024-07-21 11:47:30.744962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.394 [2024-07-21 11:47:30.744980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ddff80 len:0x10000 key:0x182e00 00:24:01.394 [2024-07-21 11:47:30.744993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.394 [2024-07-21 11:47:30.745011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a9fb80 len:0x10000 key:0x182d00 00:24:01.394 [2024-07-21 11:47:30.745023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.394 [2024-07-21 11:47:30.745044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ebf680 len:0x10000 key:0x182f00 00:24:01.394 [2024-07-21 11:47:30.745057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.394 [2024-07-21 11:47:30.745075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019fafe00 len:0x10000 key:0x182f00 00:24:01.394 [2024-07-21 11:47:30.745088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.394 [2024-07-21 11:47:30.745106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a0f700 len:0x10000 key:0x182d00 00:24:01.394 [2024-07-21 11:47:30.745119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.394 [2024-07-21 11:47:30.745136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e9f580 len:0x10000 key:0x182f00 00:24:01.394 [2024-07-21 11:47:30.745149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.394 [2024-07-21 11:47:30.745167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019eef800 len:0x10000 key:0x182f00 00:24:01.394 [2024-07-21 11:47:30.745180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.394 [2024-07-21 11:47:30.745197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d3fa80 len:0x10000 key:0x182e00 00:24:01.394 [2024-07-21 11:47:30.745211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.394 [2024-07-21 11:47:30.745229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d0f900 len:0x10000 key:0x182e00 00:24:01.394 [2024-07-21 11:47:30.745241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.394 [2024-07-21 11:47:30.745259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c8f500 len:0x10000 key:0x182e00 00:24:01.394 [2024-07-21 11:47:30.745272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.394 [2024-07-21 11:47:30.745290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c6f400 len:0x10000 key:0x182e00 00:24:01.394 [2024-07-21 11:47:30.745303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.394 [2024-07-21 11:47:30.745320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f5fb80 len:0x10000 key:0x182f00 00:24:01.394 [2024-07-21 11:47:30.745333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 
dnr:0 00:24:01.394 [2024-07-21 11:47:30.745351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a5f980 len:0x10000 key:0x182d00 00:24:01.394 [2024-07-21 11:47:30.745363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.394 [2024-07-21 11:47:30.745381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c9f580 len:0x10000 key:0x182e00 00:24:01.394 [2024-07-21 11:47:30.745396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.394 [2024-07-21 11:47:30.745414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019dcff00 len:0x10000 key:0x182e00 00:24:01.394 [2024-07-21 11:47:30.745427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.394 [2024-07-21 11:47:30.745444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a6fa00 len:0x10000 key:0x182d00 00:24:01.394 [2024-07-21 11:47:30.745457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.394 [2024-07-21 11:47:30.745475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f6fc00 len:0x10000 key:0x182f00 00:24:01.394 [2024-07-21 11:47:30.745487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.394 [2024-07-21 11:47:30.745505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f1f980 len:0x10000 key:0x182f00 00:24:01.394 [2024-07-21 11:47:30.745518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.394 [2024-07-21 11:47:30.745535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d8fd00 len:0x10000 key:0x182e00 00:24:01.394 [2024-07-21 11:47:30.745548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.394 [2024-07-21 11:47:30.745566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019fdff80 len:0x10000 key:0x182f00 00:24:01.394 [2024-07-21 11:47:30.745579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.394 [2024-07-21 11:47:30.745596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d5fb80 len:0x10000 key:0x182e00 00:24:01.394 [2024-07-21 11:47:30.745609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.394 [2024-07-21 
11:47:30.745642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019edf780 len:0x10000 key:0x182f00 00:24:01.394 [2024-07-21 11:47:30.745656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.394 [2024-07-21 11:47:30.745673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d2fa00 len:0x10000 key:0x182e00 00:24:01.394 [2024-07-21 11:47:30.745687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.394 [2024-07-21 11:47:30.745704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ccf700 len:0x10000 key:0x182e00 00:24:01.394 [2024-07-21 11:47:30.745717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.394 [2024-07-21 11:47:30.745735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f3fa80 len:0x10000 key:0x182f00 00:24:01.394 [2024-07-21 11:47:30.745750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.394 [2024-07-21 11:47:30.745768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f2fa00 len:0x10000 key:0x182f00 00:24:01.394 [2024-07-21 11:47:30.745781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.394 [2024-07-21 11:47:30.745798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a1f780 len:0x10000 key:0x182d00 00:24:01.394 [2024-07-21 11:47:30.745811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.394 [2024-07-21 11:47:30.745829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019dafe00 len:0x10000 key:0x182e00 00:24:01.395 [2024-07-21 11:47:30.745842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.395 [2024-07-21 11:47:30.745860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f9fd80 len:0x10000 key:0x182f00 00:24:01.395 [2024-07-21 11:47:30.745873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.395 [2024-07-21 11:47:30.745890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019dbfe80 len:0x10000 key:0x182e00 00:24:01.395 [2024-07-21 11:47:30.745903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.395 [2024-07-21 11:47:30.745920] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d1f980 len:0x10000 key:0x182e00 00:24:01.395 [2024-07-21 11:47:30.745933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.395 [2024-07-21 11:47:30.745951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f19e000 len:0x10000 key:0x184400 00:24:01.395 [2024-07-21 11:47:30.745963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.395 [2024-07-21 11:47:30.745982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f1bf000 len:0x10000 key:0x184400 00:24:01.395 [2024-07-21 11:47:30.745995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.395 [2024-07-21 11:47:30.746013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d6e000 len:0x10000 key:0x184400 00:24:01.395 [2024-07-21 11:47:30.746026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.395 [2024-07-21 11:47:30.746044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d4d000 len:0x10000 key:0x184400 00:24:01.395 [2024-07-21 11:47:30.746057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.395 [2024-07-21 11:47:30.746075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d2c000 len:0x10000 key:0x184400 00:24:01.395 [2024-07-21 11:47:30.746088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.395 [2024-07-21 11:47:30.746107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d0b000 len:0x10000 key:0x184400 00:24:01.395 [2024-07-21 11:47:30.746120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.395 [2024-07-21 11:47:30.746138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012cea000 len:0x10000 key:0x184400 00:24:01.395 [2024-07-21 11:47:30.746151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.395 [2024-07-21 11:47:30.746169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010179000 len:0x10000 key:0x184400 00:24:01.395 [2024-07-21 11:47:30.746182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.395 [2024-07-21 11:47:30.746200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001019a000 len:0x10000 key:0x184400 00:24:01.395 [2024-07-21 11:47:30.746213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.395 [2024-07-21 11:47:30.746231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000101bb000 len:0x10000 key:0x184400 00:24:01.395 [2024-07-21 11:47:30.746244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.395 [2024-07-21 11:47:30.746262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000101dc000 len:0x10000 key:0x184400 00:24:01.395 [2024-07-21 11:47:30.746275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.395 [2024-07-21 11:47:30.746293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000101fd000 len:0x10000 key:0x184400 00:24:01.395 [2024-07-21 11:47:30.746306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.395 [2024-07-21 11:47:30.746324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001021e000 len:0x10000 key:0x184400 00:24:01.395 [2024-07-21 11:47:30.746337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.395 [2024-07-21 11:47:30.746355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001023f000 len:0x10000 key:0x184400 00:24:01.395 [2024-07-21 11:47:30.746368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.395 [2024-07-21 11:47:30.746386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be2f000 len:0x10000 key:0x184400 00:24:01.395 [2024-07-21 11:47:30.746399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.395 [2024-07-21 11:47:30.746417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be0e000 len:0x10000 key:0x184400 00:24:01.395 [2024-07-21 11:47:30.746430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.395 [2024-07-21 11:47:30.746450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b949000 len:0x10000 key:0x184400 00:24:01.395 [2024-07-21 11:47:30.746463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.395 [2024-07-21 11:47:30.746481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL 
KEYED DATA BLOCK ADDRESS 0x20000b928000 len:0x10000 key:0x184400 00:24:01.395 [2024-07-21 11:47:30.746494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.395 [2024-07-21 11:47:30.746512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b907000 len:0x10000 key:0x184400 00:24:01.395 [2024-07-21 11:47:30.746525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.395 [2024-07-21 11:47:30.746543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b8e6000 len:0x10000 key:0x184400 00:24:01.395 [2024-07-21 11:47:30.746556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.395 [2024-07-21 11:47:30.746574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f8d6000 len:0x10000 key:0x184400 00:24:01.395 [2024-07-21 11:47:30.746587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.395 [2024-07-21 11:47:30.746605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f8f7000 len:0x10000 key:0x184400 00:24:01.395 [2024-07-21 11:47:30.746618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.395 [2024-07-21 11:47:30.746641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f918000 len:0x10000 key:0x184400 00:24:01.395 [2024-07-21 11:47:30.746655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.395 [2024-07-21 11:47:30.746673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011ebc000 len:0x10000 key:0x184400 00:24:01.395 [2024-07-21 11:47:30.746686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.395 [2024-07-21 11:47:30.746704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011e9b000 len:0x10000 key:0x184400 00:24:01.395 [2024-07-21 11:47:30.746717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.395 [2024-07-21 11:47:30.746735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011e7a000 len:0x10000 key:0x184400 00:24:01.395 [2024-07-21 11:47:30.746748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.395 [2024-07-21 11:47:30.746766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b8c5000 
len:0x10000 key:0x184400 00:24:01.395 [2024-07-21 11:47:30.746778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.395 [2024-07-21 11:47:30.746797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b8a4000 len:0x10000 key:0x184400 00:24:01.395 [2024-07-21 11:47:30.746812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.395 [2024-07-21 11:47:30.750013] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256800 was disconnected and freed. reset controller. 00:24:01.395 [2024-07-21 11:47:30.750057] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:01.395 [2024-07-21 11:47:30.750099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e5f380 len:0x10000 key:0x182f00 00:24:01.395 [2024-07-21 11:47:30.750132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.395 [2024-07-21 11:47:30.750194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2df780 len:0x10000 key:0x183300 00:24:01.395 [2024-07-21 11:47:30.750228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.395 [2024-07-21 11:47:30.750271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a30f900 len:0x10000 key:0x183300 00:24:01.395 [2024-07-21 11:47:30.750304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.395 [2024-07-21 11:47:30.750347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3cff00 len:0x10000 key:0x183300 00:24:01.395 [2024-07-21 11:47:30.750379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.395 [2024-07-21 11:47:30.750422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5dff80 len:0x10000 key:0x183100 00:24:01.395 [2024-07-21 11:47:30.750454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.395 [2024-07-21 11:47:30.750497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a02f800 len:0x10000 key:0x183000 00:24:01.395 [2024-07-21 11:47:30.750529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.395 [2024-07-21 11:47:30.750572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a0dfd80 len:0x10000 key:0x183000 00:24:01.395 [2024-07-21 11:47:30.750604] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.396 [2024-07-21 11:47:30.750657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a58fd00 len:0x10000 key:0x183100 00:24:01.396 [2024-07-21 11:47:30.750690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.396 [2024-07-21 11:47:30.750733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a28f500 len:0x10000 key:0x183300 00:24:01.396 [2024-07-21 11:47:30.750763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.396 [2024-07-21 11:47:30.750780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a04f900 len:0x10000 key:0x183000 00:24:01.396 [2024-07-21 11:47:30.750793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.396 [2024-07-21 11:47:30.750813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a56fc00 len:0x10000 key:0x183100 00:24:01.396 [2024-07-21 11:47:30.750826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.396 [2024-07-21 11:47:30.750844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5bfe80 len:0x10000 key:0x183100 00:24:01.396 [2024-07-21 11:47:30.750857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.396 [2024-07-21 11:47:30.750874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e1f180 len:0x10000 key:0x182f00 00:24:01.396 [2024-07-21 11:47:30.750887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.396 [2024-07-21 11:47:30.750905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3dff80 len:0x10000 key:0x183300 00:24:01.396 [2024-07-21 11:47:30.750918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.396 [2024-07-21 11:47:30.750935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a35fb80 len:0x10000 key:0x183300 00:24:01.396 [2024-07-21 11:47:30.750948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.396 [2024-07-21 11:47:30.750966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a33fa80 len:0x10000 key:0x183300 00:24:01.396 [2024-07-21 11:47:30.750979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.396 [2024-07-21 11:47:30.750996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a23f280 len:0x10000 key:0x183300 00:24:01.396 [2024-07-21 11:47:30.751009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.396 [2024-07-21 11:47:30.751027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a09fb80 len:0x10000 key:0x183000 00:24:01.396 [2024-07-21 11:47:30.751040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.396 [2024-07-21 11:47:30.751057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a36fc00 len:0x10000 key:0x183300 00:24:01.396 [2024-07-21 11:47:30.751070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.396 [2024-07-21 11:47:30.751087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a01f780 len:0x10000 key:0x183000 00:24:01.396 [2024-07-21 11:47:30.751100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.396 [2024-07-21 11:47:30.751117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a0afc00 len:0x10000 key:0x183000 00:24:01.396 [2024-07-21 11:47:30.751130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.396 [2024-07-21 11:47:30.751149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a24f300 len:0x10000 key:0x183300 00:24:01.396 [2024-07-21 11:47:30.751162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.396 [2024-07-21 11:47:30.751180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5f0000 len:0x10000 key:0x183100 00:24:01.396 [2024-07-21 11:47:30.751192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.396 [2024-07-21 11:47:30.751210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e6f400 len:0x10000 key:0x182f00 00:24:01.396 [2024-07-21 11:47:30.751223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.396 [2024-07-21 11:47:30.751240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2bf680 len:0x10000 key:0x183300 00:24:01.396 [2024-07-21 11:47:30.751253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.396 [2024-07-21 11:47:30.751270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e3f280 len:0x10000 key:0x182f00 00:24:01.396 [2024-07-21 11:47:30.751283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.396 [2024-07-21 11:47:30.751300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5afe00 len:0x10000 key:0x183100 00:24:01.396 [2024-07-21 11:47:30.751313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.396 [2024-07-21 11:47:30.751331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e0f100 len:0x10000 key:0x182f00 00:24:01.396 [2024-07-21 11:47:30.751343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.396 [2024-07-21 11:47:30.751361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a39fd80 len:0x10000 key:0x183300 00:24:01.396 [2024-07-21 11:47:30.751374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.396 [2024-07-21 11:47:30.751391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a21f180 len:0x10000 key:0x183300 00:24:01.396 [2024-07-21 11:47:30.751404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.396 [2024-07-21 11:47:30.751421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a20f100 len:0x10000 key:0x183300 00:24:01.396 [2024-07-21 11:47:30.751434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.396 [2024-07-21 11:47:30.751451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a05f980 len:0x10000 key:0x183000 00:24:01.396 [2024-07-21 11:47:30.751464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.396 [2024-07-21 11:47:30.751481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e8f500 len:0x10000 key:0x182f00 00:24:01.396 [2024-07-21 11:47:30.751496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.396 [2024-07-21 11:47:30.751513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a27f480 len:0x10000 key:0x183300 00:24:01.396 [2024-07-21 11:47:30.751527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 
00:24:01.396 [2024-07-21 11:47:30.751544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a00f700 len:0x10000 key:0x183000 00:24:01.396 [2024-07-21 11:47:30.751557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.396 [2024-07-21 11:47:30.751574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3f0000 len:0x10000 key:0x183300 00:24:01.396 [2024-07-21 11:47:30.751587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.396 [2024-07-21 11:47:30.751605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f5be000 len:0x10000 key:0x184400 00:24:01.396 [2024-07-21 11:47:30.751618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.396 [2024-07-21 11:47:30.751640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f5df000 len:0x10000 key:0x184400 00:24:01.396 [2024-07-21 11:47:30.751653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.396 [2024-07-21 11:47:30.751672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012f3c000 len:0x10000 key:0x184400 00:24:01.396 [2024-07-21 11:47:30.751685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.396 [2024-07-21 11:47:30.751703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012f1b000 len:0x10000 key:0x184400 00:24:01.396 [2024-07-21 11:47:30.751716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.396 [2024-07-21 11:47:30.751734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012efa000 len:0x10000 key:0x184400 00:24:01.396 [2024-07-21 11:47:30.751747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.396 [2024-07-21 11:47:30.751765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012ed9000 len:0x10000 key:0x184400 00:24:01.396 [2024-07-21 11:47:30.751778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.396 [2024-07-21 11:47:30.751796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012eb8000 len:0x10000 key:0x184400 00:24:01.396 [2024-07-21 11:47:30.751810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.396 [2024-07-21 11:47:30.751828] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e97000 len:0x10000 key:0x184400 00:24:01.396 [2024-07-21 11:47:30.751843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.396 [2024-07-21 11:47:30.751861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010515000 len:0x10000 key:0x184400 00:24:01.396 [2024-07-21 11:47:30.751874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.396 [2024-07-21 11:47:30.751892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000104f4000 len:0x10000 key:0x184400 00:24:01.396 [2024-07-21 11:47:30.751905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.397 [2024-07-21 11:47:30.751924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000104d3000 len:0x10000 key:0x184400 00:24:01.397 [2024-07-21 11:47:30.751937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.397 [2024-07-21 11:47:30.751955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000104b2000 len:0x10000 key:0x184400 00:24:01.397 [2024-07-21 11:47:30.751968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.397 [2024-07-21 11:47:30.751985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010491000 len:0x10000 key:0x184400 00:24:01.397 [2024-07-21 11:47:30.751998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.397 [2024-07-21 11:47:30.752016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010470000 len:0x10000 key:0x184400 00:24:01.397 [2024-07-21 11:47:30.752029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.397 [2024-07-21 11:47:30.752047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e76000 len:0x10000 key:0x184400 00:24:01.397 [2024-07-21 11:47:30.752060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.397 [2024-07-21 11:47:30.752078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e55000 len:0x10000 key:0x184400 00:24:01.397 [2024-07-21 11:47:30.752091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.397 [2024-07-21 11:47:30.752109] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb59000 len:0x10000 key:0x184400
00:24:01.397 [2024-07-21 11:47:30.752122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.397 [2024-07-21 11:47:30.752141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb38000 len:0x10000 key:0x184400
00:24:01.397 [2024-07-21 11:47:30.752153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.397 [2024-07-21 11:47:30.752172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb17000 len:0x10000 key:0x184400
00:24:01.397 [2024-07-21 11:47:30.752185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.397 [2024-07-21 11:47:30.752205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000baf6000 len:0x10000 key:0x184400
00:24:01.397 [2024-07-21 11:47:30.752217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.397 [2024-07-21 11:47:30.752235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fcf6000 len:0x10000 key:0x184400
00:24:01.397 [2024-07-21 11:47:30.752248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.397 [2024-07-21 11:47:30.752266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fd17000 len:0x10000 key:0x184400
00:24:01.397 [2024-07-21 11:47:30.752280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.397 [2024-07-21 11:47:30.752298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fd38000 len:0x10000 key:0x184400
00:24:01.397 [2024-07-21 11:47:30.752311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.397 [2024-07-21 11:47:30.752329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000120cc000 len:0x10000 key:0x184400
00:24:01.397 [2024-07-21 11:47:30.752342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.397 [2024-07-21 11:47:30.752360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000120ab000 len:0x10000 key:0x184400
00:24:01.397 [2024-07-21 11:47:30.752373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.397 [2024-07-21 11:47:30.752391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001208a000 len:0x10000 key:0x184400
00:24:01.397 [2024-07-21 11:47:30.752404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.397 [2024-07-21 11:47:30.752422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bad5000 len:0x10000 key:0x184400
00:24:01.397 [2024-07-21 11:47:30.752435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.397 [2024-07-21 11:47:30.752453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bab4000 len:0x10000 key:0x184400
00:24:01.397 [2024-07-21 11:47:30.752466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.397 [2024-07-21 11:47:30.755569] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192565c0 was disconnected and freed. reset controller.
00:24:01.397 [2024-07-21 11:47:30.755614] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:24:01.397 [2024-07-21 11:47:30.755669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a72fa00 len:0x10000 key:0x183a00
00:24:01.397 [2024-07-21 11:47:30.755702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.397 [2024-07-21 11:47:30.755748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9afe00 len:0x10000 key:0x183500
00:24:01.397 [2024-07-21 11:47:30.755787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.397 [2024-07-21 11:47:30.755831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9dff80 len:0x10000 key:0x183500
00:24:01.397 [2024-07-21 11:47:30.755863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.397 [2024-07-21 11:47:30.755907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6af600 len:0x10000 key:0x183a00
00:24:01.397 [2024-07-21 11:47:30.755939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.397 [2024-07-21 11:47:30.755982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8bf680 len:0x10000 key:0x183500
00:24:01.397 [2024-07-21 11:47:30.756014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.397 [2024-07-21 11:47:30.756058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a78fd00 len:0x10000 key:0x183a00
00:24:01.397 [2024-07-21 11:47:30.756090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.397 [2024-07-21 11:47:30.756133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a44f900 len:0x10000 key:0x183100
00:24:01.397 [2024-07-21 11:47:30.756165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.397 [2024-07-21 11:47:30.756208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a86f400 len:0x10000 key:0x183500
00:24:01.397 [2024-07-21 11:47:30.756240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.397 [2024-07-21 11:47:30.756283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a95fb80 len:0x10000 key:0x183500
00:24:01.397 [2024-07-21 11:47:30.756315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.397 [2024-07-21 11:47:30.756358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7afe00 len:0x10000 key:0x183a00
00:24:01.397 [2024-07-21 11:47:30.756390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.397 [2024-07-21 11:47:30.756433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a84f300 len:0x10000 key:0x183500
00:24:01.397 [2024-07-21 11:47:30.756465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.397 [2024-07-21 11:47:30.756508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a89f580 len:0x10000 key:0x183500
00:24:01.397 [2024-07-21 11:47:30.756540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.397 [2024-07-21 11:47:30.756584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6ef800 len:0x10000 key:0x183a00
00:24:01.397 [2024-07-21 11:47:30.756638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.397 [2024-07-21 11:47:30.756656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6bf680 len:0x10000 key:0x183a00
00:24:01.397 [2024-07-21 11:47:30.756669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.397 [2024-07-21 11:47:30.756686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a63f280 len:0x10000 key:0x183a00
00:24:01.397 [2024-07-21 11:47:30.756699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.397 [2024-07-21 11:47:30.756716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a61f180 len:0x10000 key:0x183a00
00:24:01.397 [2024-07-21 11:47:30.756729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.397 [2024-07-21 11:47:30.756747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a90f900 len:0x10000 key:0x183500
00:24:01.397 [2024-07-21 11:47:30.756760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.397 [2024-07-21 11:47:30.756777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a40f700 len:0x10000 key:0x183100
00:24:01.397 [2024-07-21 11:47:30.756790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.397 [2024-07-21 11:47:30.756807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a64f300 len:0x10000 key:0x183a00
00:24:01.397 [2024-07-21 11:47:30.756820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.397 [2024-07-21 11:47:30.756838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a77fc80 len:0x10000 key:0x183a00
00:24:01.397 [2024-07-21 11:47:30.756851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.398 [2024-07-21 11:47:30.756868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a41f780 len:0x10000 key:0x183100
00:24:01.398 [2024-07-21 11:47:30.756881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.398 [2024-07-21 11:47:30.756899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a91f980 len:0x10000 key:0x183500
00:24:01.398 [2024-07-21 11:47:30.756912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.398 [2024-07-21 11:47:30.756929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8cf700 len:0x10000 key:0x183500
00:24:01.398 [2024-07-21 11:47:30.756942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.398 [2024-07-21 11:47:30.756959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a73fa80 len:0x10000 key:0x183a00
00:24:01.398 [2024-07-21 11:47:30.756973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.398 [2024-07-21 11:47:30.756992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a98fd00 len:0x10000 key:0x183500
00:24:01.398 [2024-07-21 11:47:30.757005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.398 [2024-07-21 11:47:30.757022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a70f900 len:0x10000 key:0x183a00
00:24:01.398 [2024-07-21 11:47:30.757035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.398 [2024-07-21 11:47:30.757053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a88f500 len:0x10000 key:0x183500
00:24:01.398 [2024-07-21 11:47:30.757066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.398 [2024-07-21 11:47:30.757083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6df780 len:0x10000 key:0x183a00
00:24:01.398 [2024-07-21 11:47:30.757096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.398 [2024-07-21 11:47:30.757113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a67f480 len:0x10000 key:0x183a00
00:24:01.398 [2024-07-21 11:47:30.757126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.398 [2024-07-21 11:47:30.757144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8ef800 len:0x10000 key:0x183500
00:24:01.398 [2024-07-21 11:47:30.757157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.398 [2024-07-21 11:47:30.757174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8df780 len:0x10000 key:0x183500
00:24:01.398 [2024-07-21 11:47:30.757187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.398 [2024-07-21 11:47:30.757205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7bfe80 len:0x10000 key:0x183a00
00:24:01.398 [2024-07-21 11:47:30.757218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.398 [2024-07-21 11:47:30.757236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a75fb80 len:0x10000 key:0x183a00
00:24:01.398 [2024-07-21 11:47:30.757248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.398 [2024-07-21 11:47:30.757266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a94fb00 len:0x10000 key:0x183500
00:24:01.398 [2024-07-21 11:47:30.757279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.398 [2024-07-21 11:47:30.757296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a76fc00 len:0x10000 key:0x183a00
00:24:01.398 [2024-07-21 11:47:30.757309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.398 [2024-07-21 11:47:30.757328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6cf700 len:0x10000 key:0x183a00
00:24:01.398 [2024-07-21 11:47:30.757341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.398 [2024-07-21 11:47:30.757359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f9de000 len:0x10000 key:0x184400
00:24:01.398 [2024-07-21 11:47:30.757372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.398 [2024-07-21 11:47:30.757390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f9ff000 len:0x10000 key:0x184400
00:24:01.398 [2024-07-21 11:47:30.757403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.398 [2024-07-21 11:47:30.757421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001312b000 len:0x10000 key:0x184400
00:24:01.398 [2024-07-21 11:47:30.757434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.398 [2024-07-21 11:47:30.757453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001310a000 len:0x10000 key:0x184400
00:24:01.398 [2024-07-21 11:47:30.757466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.398 [2024-07-21 11:47:30.757485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013065000 len:0x10000 key:0x184400
00:24:01.398 [2024-07-21 11:47:30.757498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.398 [2024-07-21 11:47:30.757516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013044000 len:0x10000 key:0x184400
00:24:01.398 [2024-07-21 11:47:30.757529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.398 [2024-07-21 11:47:30.757547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013023000 len:0x10000 key:0x184400
00:24:01.398 [2024-07-21 11:47:30.757560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.398 [2024-07-21 11:47:30.757578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013002000 len:0x10000 key:0x184400
00:24:01.398 [2024-07-21 11:47:30.757591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.398 [2024-07-21 11:47:30.757609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012fe1000 len:0x10000 key:0x184400
00:24:01.398 [2024-07-21 11:47:30.757623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.398 [2024-07-21 11:47:30.757647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012fc0000 len:0x10000 key:0x184400
00:24:01.398 [2024-07-21 11:47:30.757660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.398 [2024-07-21 11:47:30.757679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c24f000 len:0x10000 key:0x184400
00:24:01.398 [2024-07-21 11:47:30.757694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.398 [2024-07-21 11:47:30.757712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c22e000 len:0x10000 key:0x184400
00:24:01.398 [2024-07-21 11:47:30.757725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.398 [2024-07-21 11:47:30.757744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c20d000 len:0x10000 key:0x184400
00:24:01.398 [2024-07-21 11:47:30.757757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.398 [2024-07-21 11:47:30.757775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c1ec000 len:0x10000 key:0x184400
00:24:01.398 [2024-07-21 11:47:30.757788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.398 [2024-07-21 11:47:30.757806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c1cb000 len:0x10000 key:0x184400
00:24:01.398 [2024-07-21 11:47:30.757819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.398 [2024-07-21 11:47:30.757837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c1aa000 len:0x10000 key:0x184400
00:24:01.398 [2024-07-21 11:47:30.757850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.398 [2024-07-21 11:47:30.757868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000118ad000 len:0x10000 key:0x184400
00:24:01.398 [2024-07-21 11:47:30.757881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.399 [2024-07-21 11:47:30.757899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bded000 len:0x10000 key:0x184400
00:24:01.399 [2024-07-21 11:47:30.757912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.399 [2024-07-21 11:47:30.757931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bdcc000 len:0x10000 key:0x184400
00:24:01.399 [2024-07-21 11:47:30.757944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.399 [2024-07-21 11:47:30.757962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bdab000 len:0x10000 key:0x184400
00:24:01.399 [2024-07-21 11:47:30.757975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.399 [2024-07-21 11:47:30.757993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd8a000 len:0x10000 key:0x184400
00:24:01.399 [2024-07-21 11:47:30.758006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.399 [2024-07-21 11:47:30.758025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd69000 len:0x10000 key:0x184400
00:24:01.399 [2024-07-21 11:47:30.758039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.399 [2024-07-21 11:47:30.758058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd48000 len:0x10000 key:0x184400
00:24:01.399 [2024-07-21 11:47:30.758071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.399 [2024-07-21 11:47:30.758089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd27000 len:0x10000 key:0x184400
00:24:01.399 [2024-07-21 11:47:30.758102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.399 [2024-07-21 11:47:30.758120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000100f5000 len:0x10000 key:0x184400
00:24:01.399 [2024-07-21 11:47:30.758133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.399 [2024-07-21 11:47:30.758152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000100d4000 len:0x10000 key:0x184400
00:24:01.399 [2024-07-21 11:47:30.758165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.399 [2024-07-21 11:47:30.758183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000100b3000 len:0x10000 key:0x184400
00:24:01.399 [2024-07-21 11:47:30.758196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.399 [2024-07-21 11:47:30.758214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010092000 len:0x10000 key:0x184400
00:24:01.399 [2024-07-21 11:47:30.758227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.399 [2024-07-21 11:47:30.761429] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256380 was disconnected and freed. reset controller.
00:24:01.399 [2024-07-21 11:47:30.761476] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:24:01.399 [2024-07-21 11:47:30.761519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a80f100 len:0x10000 key:0x183500
00:24:01.399 [2024-07-21 11:47:30.761553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.399 [2024-07-21 11:47:30.761601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac8f500 len:0x10000 key:0x183400
00:24:01.399 [2024-07-21 11:47:30.761644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.399 [2024-07-21 11:47:30.761688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acbf680 len:0x10000 key:0x183400
00:24:01.399 [2024-07-21 11:47:30.761721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.399 [2024-07-21 11:47:30.761764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad7fc80 len:0x10000 key:0x183400
00:24:01.399 [2024-07-21 11:47:30.761796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.399 [2024-07-21 11:47:30.761847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af8fd00 len:0x10000 key:0x183700
00:24:01.399 [2024-07-21 11:47:30.761880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.399 [2024-07-21 11:47:30.761916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa2f800 len:0x10000 key:0x184300
00:24:01.399 [2024-07-21 11:47:30.761929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.399 [2024-07-21 11:47:30.761947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aadfd80 len:0x10000 key:0x184300
00:24:01.399 [2024-07-21 11:47:30.761961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.399 [2024-07-21 11:47:30.761978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af3fa80 len:0x10000 key:0x183700
00:24:01.399 [2024-07-21 11:47:30.761991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.399 [2024-07-21 11:47:30.762009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac3f280 len:0x10000 key:0x183400
00:24:01.399 [2024-07-21 11:47:30.762022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.399 [2024-07-21 11:47:30.762040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa4f900 len:0x10000 key:0x184300
00:24:01.399 [2024-07-21 11:47:30.762052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.399 [2024-07-21 11:47:30.762070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af1f980 len:0x10000 key:0x183700
00:24:01.399 [2024-07-21 11:47:30.762083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.399 [2024-07-21 11:47:30.762100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af6fc00 len:0x10000 key:0x183700
00:24:01.399 [2024-07-21 11:47:30.762113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.399 [2024-07-21 11:47:30.762131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adbfe80 len:0x10000 key:0x183400
00:24:01.399 [2024-07-21 11:47:30.762144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.399 [2024-07-21 11:47:30.762162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad8fd00 len:0x10000 key:0x183400
00:24:01.399 [2024-07-21 11:47:30.762175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.399 [2024-07-21 11:47:30.762192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad0f900 len:0x10000 key:0x183400
00:24:01.399 [2024-07-21 11:47:30.762205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.399 [2024-07-21 11:47:30.762222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acef800 len:0x10000 key:0x183400
00:24:01.399 [2024-07-21 11:47:30.762238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.399 [2024-07-21 11:47:30.762255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afdff80 len:0x10000 key:0x183700
00:24:01.399 [2024-07-21 11:47:30.762268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.399 [2024-07-21 11:47:30.762286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa9fb80 len:0x10000 key:0x184300
00:24:01.399 [2024-07-21 11:47:30.762299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.399 [2024-07-21 11:47:30.762316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad1f980 len:0x10000 key:0x183400
00:24:01.399 [2024-07-21 11:47:30.762329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.399 [2024-07-21 11:47:30.762347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa1f780 len:0x10000 key:0x184300
00:24:01.399 [2024-07-21 11:47:30.762360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.399 [2024-07-21 11:47:30.762378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aaafc00 len:0x10000 key:0x184300
00:24:01.399 [2024-07-21 11:47:30.762391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.399 [2024-07-21 11:47:30.762408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aff0000 len:0x10000 key:0x183700
00:24:01.399 [2024-07-21 11:47:30.762421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.399 [2024-07-21 11:47:30.762438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af9fd80 len:0x10000 key:0x183700
00:24:01.399 [2024-07-21 11:47:30.762451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.399 [2024-07-21 11:47:30.762469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a81f180 len:0x10000 key:0x183500
00:24:01.399 [2024-07-21 11:47:30.762481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.399 [2024-07-21 11:47:30.762499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac6f400 len:0x10000 key:0x183400
00:24:01.399 [2024-07-21 11:47:30.762512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.399 [2024-07-21 11:47:30.762529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001addff80 len:0x10000 key:0x183400
00:24:01.399 [2024-07-21 11:47:30.762542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.399 [2024-07-21 11:47:30.762559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af5fb80 len:0x10000 key:0x183700
00:24:01.399 [2024-07-21 11:47:30.762574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.399 [2024-07-21 11:47:30.762591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adafe00 len:0x10000 key:0x183400
00:24:01.400 [2024-07-21 11:47:30.762604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.400 [2024-07-21 11:47:30.762621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad4fb00 len:0x10000 key:0x183400
00:24:01.400 [2024-07-21 11:47:30.762656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.400 [2024-07-21 11:47:30.762674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afbfe80 len:0x10000 key:0x183700
00:24:01.400 [2024-07-21 11:47:30.762687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.400 [2024-07-21 11:47:30.762704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afafe00 len:0x10000 key:0x183700
00:24:01.400 [2024-07-21 11:47:30.762717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.400 [2024-07-21 11:47:30.762735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa5f980 len:0x10000 key:0x184300
00:24:01.400 [2024-07-21 11:47:30.762748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.400 [2024-07-21 11:47:30.762766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a83f280 len:0x10000 key:0x183500
00:24:01.400 [2024-07-21 11:47:30.762779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.400 [2024-07-21 11:47:30.762796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac2f200 len:0x10000 key:0x183400
00:24:01.400 [2024-07-21 11:47:30.762809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.400 [2024-07-21 11:47:30.762826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa0f700 len:0x10000 key:0x184300
00:24:01.400 [2024-07-21 11:47:30.762839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.400 [2024-07-21 11:47:30.762857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad9fd80 len:0x10000 key:0x183400
00:24:01.400 [2024-07-21 11:47:30.762870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.400 [2024-07-21 11:47:30.762887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fdfe000 len:0x10000 key:0x184400
00:24:01.400 [2024-07-21 11:47:30.762900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.400 [2024-07-21 11:47:30.762918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fe1f000 len:0x10000 key:0x184400
00:24:01.400 [2024-07-21 11:47:30.762933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.400 [2024-07-21 11:47:30.762952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001333b000 len:0x10000 key:0x184400
00:24:01.400 [2024-07-21 11:47:30.762965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.400 [2024-07-21 11:47:30.762983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001331a000 len:0x10000 key:0x184400
00:24:01.400 [2024-07-21 11:47:30.762996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.400 [2024-07-21 11:47:30.763014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013275000 len:0x10000 key:0x184400
00:24:01.400 [2024-07-21 11:47:30.763027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.400 [2024-07-21 11:47:30.763045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013254000 len:0x10000 key:0x184400
00:24:01.400 [2024-07-21 11:47:30.763058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.400 [2024-07-21 11:47:30.763076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013233000 len:0x10000 key:0x184400
00:24:01.400 [2024-07-21 11:47:30.763089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.400 [2024-07-21 11:47:30.763107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013212000 len:0x10000 key:0x184400
00:24:01.400 [2024-07-21 11:47:30.763121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.400 [2024-07-21 11:47:30.763139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000131f1000 len:0x10000 key:0x184400
00:24:01.400 [2024-07-21 11:47:30.763152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.400 [2024-07-21 11:47:30.763170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000131d0000 len:0x10000 key:0x184400
00:24:01.400 [2024-07-21 11:47:30.763183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.400 [2024-07-21 11:47:30.763201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c45f000 len:0x10000 key:0x184400
00:24:01.400 [2024-07-21 11:47:30.763214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.400 [2024-07-21 11:47:30.763232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c43e000 len:0x10000 key:0x184400
00:24:01.400 [2024-07-21 11:47:30.763245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.400 [2024-07-21 11:47:30.763263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c41d000 len:0x10000 key:0x184400
00:24:01.400 [2024-07-21 11:47:30.763276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.400 [2024-07-21 11:47:30.763296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3fc000 len:0x10000 key:0x184400
00:24:01.400 [2024-07-21 11:47:30.763309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.400 [2024-07-21 11:47:30.763327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3db000 len:0x10000 key:0x184400
00:24:01.400 [2024-07-21 11:47:30.763341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.400 [2024-07-21 11:47:30.763359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3ba000 len:0x10000 key:0x184400
00:24:01.400 [2024-07-21 11:47:30.763372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.400 [2024-07-21 11:47:30.763390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c7da000 len:0x10000 key:0x184400
00:24:01.400 [2024-07-21 11:47:30.763403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.400 [2024-07-21 11:47:30.763422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e34000 len:0x10000 key:0x184400
00:24:01.400 [2024-07-21 11:47:30.763435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.400 [2024-07-21 11:47:30.763453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e13000 len:0x10000 key:0x184400
00:24:01.400 [2024-07-21 11:47:30.763465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.400 [2024-07-21 11:47:30.763483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012df2000 len:0x10000 key:0x184400
00:24:01.400 [2024-07-21 11:47:30.763496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.400 [2024-07-21 11:47:30.763515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012dd1000 len:0x10000 key:0x184400
00:24:01.400 [2024-07-21 11:47:30.763528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.400 [2024-07-21 11:47:30.763546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012db0000 len:0x10000 key:0x184400
00:24:01.400 [2024-07-21 11:47:30.763558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.400 [2024-07-21 11:47:30.763576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c03f000 len:0x10000 key:0x184400
00:24:01.400 [2024-07-21 11:47:30.763589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.400 [2024-07-21 11:47:30.763607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c01e000 len:0x10000 key:0x184400
00:24:01.400 [2024-07-21 11:47:30.763620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.400 [2024-07-21 11:47:30.763644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bffd000 len:0x10000 key:0x184400
00:24:01.400 [2024-07-21 11:47:30.763659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.400 [2024-07-21 11:47:30.763677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf79000 len:0x10000 key:0x184400
00:24:01.400 [2024-07-21 11:47:30.763690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.400 [2024-07-21 11:47:30.763724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf58000 len:0x10000 key:0x184400
00:24:01.400 [2024-07-21 11:47:30.763737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.400 [2024-07-21 11:47:30.763755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf37000 len:0x10000 key:0x184400
00:24:01.400 [2024-07-21 11:47:30.763768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.400 [2024-07-21 11:47:30.766846] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256140 was disconnected and freed. reset controller.
00:24:01.400 [2024-07-21 11:47:30.766891] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:24:01.400 [2024-07-21 11:47:30.766933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:66816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3f0000 len:0x10000 key:0x183800
00:24:01.400 [2024-07-21 11:47:30.766966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.400 [2024-07-21 11:47:30.767029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:66944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b12fa00 len:0x10000 key:0x183e00
00:24:01.401 [2024-07-21 11:47:30.767062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.401 [2024-07-21 11:47:30.767106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:67072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b03f280 len:0x10000 key:0x183e00
00:24:01.401 [2024-07-21 11:47:30.767139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.401 [2024-07-21 11:47:30.767182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1bfe80 len:0x10000 key:0x183e00
00:24:01.401 [2024-07-21 11:47:30.767214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.401 [2024-07-21 11:47:30.767257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:67328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b24f300 len:0x10000 key:0x183800
00:24:01.401 [2024-07-21 11:47:30.767289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.401 [2024-07-21 11:47:30.767332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:67456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2cf700 len:0x10000 key:0x183800
00:24:01.401 [2024-07-21 11:47:30.767364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.401 [2024-07-21 11:47:30.767407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:67584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b27f480 len:0x10000 key:0x183800
00:24:01.401 [2024-07-21 11:47:30.767445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.401 [2024-07-21 11:47:30.767502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:67712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b20f100 len:0x10000 key:0x183800
00:24:01.401 [2024-07-21 11:47:30.767515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.401 [2024-07-21 11:47:30.767532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:67840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b09f580 len:0x10000 key:0x183e00
00:24:01.401 [2024-07-21 11:47:30.767545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.401 [2024-07-21 11:47:30.767563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:67968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b19fd80 len:0x10000 key:0x183e00
00:24:01.401 [2024-07-21 11:47:30.767576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.401 [2024-07-21 11:47:30.767593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:68096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b06f400 len:0x10000 key:0x183e00
00:24:01.401 [2024-07-21 11:47:30.767606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.401 [2024-07-21 11:47:30.767623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:68224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3afe00 len:0x10000 key:0x183800
00:24:01.401 [2024-07-21 11:47:30.767641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.401 [2024-07-21 11:47:30.767659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:68352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ef800 len:0x10000 key:0x183e00
00:24:01.401 [2024-07-21 11:47:30.767672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.401 [2024-07-21 11:47:30.767689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:68480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0df780 len:0x10000 key:0x183e00
00:24:01.401 [2024-07-21 11:47:30.767702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.401 [2024-07-21 11:47:30.767719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:68608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b35fb80 len:0x10000 key:0x183800
00:24:01.401 [2024-07-21 11:47:30.767732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.401 [2024-07-21 11:47:30.767750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:68736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b38fd00 len:0x10000 key:0x183800
00:24:01.401 [2024-07-21 11:47:30.767763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.401 [2024-07-21 11:47:30.767780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:68864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b34fb00 len:0x10000 key:0x183800
00:24:01.401 [2024-07-21 11:47:30.767793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.401 [2024-07-21 11:47:30.767811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:68992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1dff80 len:0x10000 key:0x183e00
00:24:01.401 [2024-07-21 11:47:30.767824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.401 [2024-07-21 11:47:30.767843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:69120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b37fc80 len:0x10000 key:0x183800
00:24:01.401 [2024-07-21 11:47:30.767856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.401 [2024-07-21 11:47:30.767874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:69248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b05f380 len:0x10000 key:0x183e00
00:24:01.401 [2024-07-21 11:47:30.767887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.401 [2024-07-21 11:47:30.767904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:69376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b25f380 len:0x10000 key:0x183800
00:24:01.401 [2024-07-21 11:47:30.767917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.401 [2024-07-21 11:47:30.767935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:69504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b33fa80 len:0x10000 key:0x183800
00:24:01.401 [2024-07-21 11:47:30.767948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.401 [2024-07-21 11:47:30.767966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:69632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b17fc80 len:0x10000 key:0x183e00
00:24:01.401 [2024-07-21 11:47:30.767979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.401 [2024-07-21 11:47:30.767996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:69760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b31f980 len:0x10000 key:0x183800
00:24:01.401 [2024-07-21 11:47:30.768009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.401 [2024-07-21 11:47:30.768026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:69888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b32fa00 len:0x10000 key:0x183800
00:24:01.401 [2024-07-21 11:47:30.768039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.401 [2024-07-21 11:47:30.768057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:70016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0bf680 len:0x10000 key:0x183e00
00:24:01.401 [2024-07-21 11:47:30.768070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.401 [2024-07-21 11:47:30.768087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:70144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3cff00 len:0x10000 key:0x183800
00:24:01.401 [2024-07-21 11:47:30.768101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.401 [2024-07-21 11:47:30.768118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:70272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2df780 len:0x10000 key:0x183800
00:24:01.401 [2024-07-21 11:47:30.768131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.401 [2024-07-21 11:47:30.768148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:70400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b23f280 len:0x10000 key:0x183800
00:24:01.401 [2024-07-21 11:47:30.768161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.401 [2024-07-21 11:47:30.768180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:70528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b01f180 len:0x10000 key:0x183e00
00:24:01.401 [2024-07-21 11:47:30.768193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.401 [2024-07-21 11:47:30.768210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:70656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b36fc00 len:0x10000 key:0x183800
00:24:01.401 [2024-07-21 11:47:30.768223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.401 [2024-07-21 11:47:30.768241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b26f400 len:0x10000 key:0x183800
00:24:01.401 [2024-07-21 11:47:30.768253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.401 [2024-07-21 11:47:30.768271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3dff80 len:0x10000 key:0x183800
00:24:01.401 [2024-07-21 11:47:30.768284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.401 [2024-07-21 11:47:30.768302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b08f500 len:0x10000 key:0x183e00
00:24:01.401 [2024-07-21 11:47:30.768314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.401 [2024-07-21 11:47:30.768332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0af600 len:0x10000 key:0x183e00
00:24:01.401 [2024-07-21 11:47:30.768345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.401 [2024-07-21 11:47:30.768362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3bfe80 len:0x10000 key:0x183800
00:24:01.401 [2024-07-21 11:47:30.768375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.401 [2024-07-21 11:47:30.768393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b00f100 len:0x10000 key:0x183e00
00:24:01.401 [2024-07-21 11:47:30.768406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.401 [2024-07-21 11:47:30.768423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2bf680 len:0x10000 key:0x183800
00:24:01.401 [2024-07-21 11:47:30.768436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.401 [2024-07-21 11:47:30.768454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1afe00 len:0x10000 key:0x183e00
00:24:01.401 [2024-07-21 11:47:30.768467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.401 [2024-07-21 11:47:30.768484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:71808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0cf700 len:0x10000 key:0x183e00
00:24:01.401 [2024-07-21 11:47:30.768497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.401 [2024-07-21 11:47:30.768514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:71936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b04f300 len:0x10000 key:0x183e00
00:24:01.401 [2024-07-21 11:47:30.768532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.402 [2024-07-21 11:47:30.768549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:72064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b02f200 len:0x10000 key:0x183e00
00:24:01.402 [2024-07-21 11:47:30.768562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.402 [2024-07-21 11:47:30.768579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b13fa80 len:0x10000 key:0x183e00
00:24:01.402 [2024-07-21 11:47:30.768592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.402 [2024-07-21 11:47:30.768610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d164000 len:0x10000 key:0x184400
00:24:01.402 [2024-07-21 11:47:30.768622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0
00:24:01.402 [2024-07-21 11:47:30.768644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1
lba:62208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b484000 len:0x10000 key:0x184400 00:24:01.402 [2024-07-21 11:47:30.768658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.402 [2024-07-21 11:47:30.768676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b4a5000 len:0x10000 key:0x184400 00:24:01.402 [2024-07-21 11:47:30.768689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.402 [2024-07-21 11:47:30.768707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011a5a000 len:0x10000 key:0x184400 00:24:01.402 [2024-07-21 11:47:30.768720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.402 [2024-07-21 11:47:30.768738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e13f000 len:0x10000 key:0x184400 00:24:01.402 [2024-07-21 11:47:30.768751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.402 [2024-07-21 11:47:30.768769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:62976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000110af000 len:0x10000 key:0x184400 00:24:01.402 [2024-07-21 11:47:30.768782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.402 [2024-07-21 11:47:30.768800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011196000 len:0x10000 key:0x184400 00:24:01.402 [2024-07-21 11:47:30.768813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.402 [2024-07-21 11:47:30.768831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:63616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011175000 len:0x10000 key:0x184400 00:24:01.402 [2024-07-21 11:47:30.768844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.402 [2024-07-21 11:47:30.768863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011154000 len:0x10000 key:0x184400 00:24:01.402 [2024-07-21 11:47:30.768877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.402 [2024-07-21 11:47:30.768896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000112e0000 len:0x10000 key:0x184400 00:24:01.402 [2024-07-21 11:47:30.768909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.402 [2024-07-21 11:47:30.768927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:65024 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x200013443000 len:0x10000 key:0x184400 00:24:01.402 [2024-07-21 11:47:30.768940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.402 [2024-07-21 11:47:30.768958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:65280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013422000 len:0x10000 key:0x184400 00:24:01.402 [2024-07-21 11:47:30.768972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.402 [2024-07-21 11:47:30.768990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:65408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013401000 len:0x10000 key:0x184400 00:24:01.402 [2024-07-21 11:47:30.769003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.402 [2024-07-21 11:47:30.769021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000133e0000 len:0x10000 key:0x184400 00:24:01.402 [2024-07-21 11:47:30.769034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.402 [2024-07-21 11:47:30.769052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:65664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c66f000 len:0x10000 key:0x184400 00:24:01.402 [2024-07-21 11:47:30.769065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.402 [2024-07-21 11:47:30.769084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c64e000 len:0x10000 key:0x184400 00:24:01.402 [2024-07-21 11:47:30.769097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.402 [2024-07-21 11:47:30.769115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:66048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b463000 len:0x10000 key:0x184400 00:24:01.402 [2024-07-21 11:47:30.769128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.402 [2024-07-21 11:47:30.769146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b442000 len:0x10000 key:0x184400 00:24:01.402 [2024-07-21 11:47:30.769159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.402 [2024-07-21 11:47:30.769178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:66304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b421000 len:0x10000 key:0x184400 00:24:01.402 [2024-07-21 11:47:30.769191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.402 [2024-07-21 11:47:30.769210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b400000 len:0x10000 key:0x184400 
00:24:01.402 [2024-07-21 11:47:30.769223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.402 [2024-07-21 11:47:30.769243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca8f000 len:0x10000 key:0x184400 00:24:01.402 [2024-07-21 11:47:30.769256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:15432000 sqhd:5310 p:0 m:0 dnr:0 00:24:01.402 [2024-07-21 11:47:30.787842] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806c00 was disconnected and freed. reset controller. 00:24:01.402 [2024-07-21 11:47:30.787861] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:01.402 [2024-07-21 11:47:30.788166] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:01.402 [2024-07-21 11:47:30.788179] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:01.402 [2024-07-21 11:47:30.788187] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928f500 00:24:01.402 [2024-07-21 11:47:30.788288] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:01.402 [2024-07-21 11:47:30.788302] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:01.402 [2024-07-21 11:47:30.788314] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:01.402 [2024-07-21 11:47:30.788326] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:01.402 [2024-07-21 11:47:30.788338] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:01.402 [2024-07-21 11:47:30.788349] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:01.402 [2024-07-21 11:47:30.788361] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:01.402 [2024-07-21 11:47:30.788373] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
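The wall of ABORTED - SQ DELETION (00/08) completions above is the expected signature of this shutdown test rather than a malfunction: the target tears down the submission queue mid-run, every in-flight READ/WRITE on qid:1 is flushed back with that status (dnr:0, so retries are permitted), and the reconnect attempts that follow are refused with RDMA_CM_EVENT_REJECTED until the controllers finish resetting. A hypothetical triage one-liner over a captured copy of this output (the file name bdevperf.log is an assumption, not something the test writes):

  # count how many commands were flushed per submission queue
  grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' bdevperf.log | sort | uniq -c

The Latency(us) summary that follows can be sanity-checked with one division: every job runs 64 KiB I/O (IO size: 65536), so MiB/s should equal IOPS / 16. An illustrative check against the Nvme1n1 row:

  # 298.23 IOPS * 65536 B per I/O = 298.23 / 16 MiB/s
  awk 'BEGIN { printf "%.2f\n", 298.23 / 16 }'   # prints 18.64, matching the table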
00:24:01.402 [2024-07-21 11:47:30.789162] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:24:01.661 task offset: 66816 on job bdev=Nvme10n1 fails
00:24:01.661
00:24:01.661                                                                        Latency(us)
00:24:01.661 Device Information : runtime(s)    IOPS   MiB/s  Fail/s   TO/s    Average        min        max
00:24:01.661 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:01.661 Job: Nvme1n1 ended in about 1.99 seconds with error
00:24:01.661 Verification LBA range: start 0x0 length 0x400
00:24:01.661 Nvme1n1            :    1.99  298.23   18.64   32.24   0.00  190185.47   39007.03 1060320.05
00:24:01.661 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:01.661 Job: Nvme2n1 ended in about 1.93 seconds with error
00:24:01.661 Verification LBA range: start 0x0 length 0x400
00:24:01.661 Nvme2n1            :    1.93  307.11   19.19   33.20   0.00  186649.41   39845.89 1080452.71
00:24:01.661 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:01.661 Job: Nvme3n1 ended in about 1.93 seconds with error
00:24:01.661 Verification LBA range: start 0x0 length 0x400
00:24:01.661 Nvme3n1            :    1.93  309.37   19.34   33.11   0.00  184860.62   40265.32 1080452.71
00:24:01.661 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:01.661 Job: Nvme4n1 ended in about 1.94 seconds with error
00:24:01.661 Verification LBA range: start 0x0 length 0x400
00:24:01.661 Nvme4n1            :    1.94  318.32   19.89   33.02   0.00  179555.48   37958.45 1073741.82
00:24:01.661 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:01.661 Job: Nvme5n1 ended in about 1.94 seconds with error
00:24:01.661 Verification LBA range: start 0x0 length 0x400
00:24:01.661 Nvme5n1            :    1.94  323.13   20.20   32.93   0.00  176531.86   35651.58 1073741.82
00:24:01.661 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:01.661 Job: Nvme6n1 ended in about 1.95 seconds with error
00:24:01.661 Verification LBA range: start 0x0 length 0x400
00:24:01.661 Nvme6n1            :    1.95  322.19   20.14   32.84   0.00  176484.43   36280.73 1073741.82
00:24:01.661 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:01.661 Job: Nvme7n1 ended in about 1.95 seconds with error
00:24:01.661 Verification LBA range: start 0x0 length 0x400
00:24:01.661 Nvme7n1            :    1.95  321.25   20.08   32.74   0.00  176428.52   37119.59 1073741.82
00:24:01.661 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:01.661 Job: Nvme8n1 ended in about 1.96 seconds with error
00:24:01.661 Verification LBA range: start 0x0 length 0x400
00:24:01.661 Nvme8n1            :    1.96  320.34   20.02   32.65   0.00  176351.09   37958.45 1073741.82
00:24:01.661 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:01.661 Job: Nvme9n1 ended in about 1.97 seconds with error
00:24:01.661 Verification LBA range: start 0x0 length 0x400
00:24:01.661 Nvme9n1            :    1.97  254.85   15.93   32.56   0.00  215941.89   51170.51 1073741.82
00:24:01.661 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:01.661 Job: Nvme10n1 ended in about 1.90 seconds with error
00:24:01.661 Verification LBA range: start 0x0 length 0x400
00:24:01.661 Nvme10n1           :    1.90  263.31   16.46   33.64   0.00  207969.92   51380.22 1067030.94
00:24:01.661 ===================================================================================================================
00:24:01.661 Total              :         3038.10  189.88  328.91   0.00  186140.64   35651.58 1080452.71
00:24:01.661 [2024-07-21 11:47:30.811175] app.c: 910:spdk_app_stop: *WARNING*:
spdk_app_stop'd on non-zero 00:24:01.661 [2024-07-21 11:47:30.811198] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:01.661 [2024-07-21 11:47:30.811212] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:24:01.661 [2024-07-21 11:47:30.811223] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:24:01.661 [2024-07-21 11:47:30.811233] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:24:01.661 [2024-07-21 11:47:30.811244] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:24:01.661 [2024-07-21 11:47:30.811254] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:24:01.661 [2024-07-21 11:47:30.811264] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:24:01.661 [2024-07-21 11:47:30.823247] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:01.661 [2024-07-21 11:47:30.823308] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:01.661 [2024-07-21 11:47:30.823338] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e53c0 00:24:01.661 [2024-07-21 11:47:30.823506] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:01.661 [2024-07-21 11:47:30.823544] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:01.661 [2024-07-21 11:47:30.823569] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928e180 00:24:01.661 [2024-07-21 11:47:30.823724] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:01.661 [2024-07-21 11:47:30.823760] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:01.662 [2024-07-21 11:47:30.823786] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001929c180 00:24:01.662 [2024-07-21 11:47:30.823906] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:01.662 [2024-07-21 11:47:30.823924] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:01.662 [2024-07-21 11:47:30.823934] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192a89c0 00:24:01.662 [2024-07-21 11:47:30.824006] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:01.662 [2024-07-21 11:47:30.824021] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:01.662 [2024-07-21 11:47:30.824031] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192bd540 00:24:01.662 [2024-07-21 11:47:30.824126] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received 
RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:01.662 [2024-07-21 11:47:30.824141] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:01.662 [2024-07-21 11:47:30.824151] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c6100 00:24:01.662 [2024-07-21 11:47:30.824224] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:01.662 [2024-07-21 11:47:30.824238] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:01.662 [2024-07-21 11:47:30.824249] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192dc7c0 00:24:01.662 [2024-07-21 11:47:30.824358] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:01.662 [2024-07-21 11:47:30.824372] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:01.662 [2024-07-21 11:47:30.824382] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ba580 00:24:01.919 11:47:31 -- target/shutdown.sh@141 -- # kill -9 2444025 00:24:01.919 11:47:31 -- target/shutdown.sh@143 -- # stoptarget 00:24:01.919 11:47:31 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:01.919 11:47:31 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:01.919 11:47:31 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:01.919 11:47:31 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:01.919 11:47:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:01.919 11:47:31 -- nvmf/common.sh@116 -- # sync 00:24:01.919 11:47:31 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:01.919 11:47:31 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:01.919 11:47:31 -- nvmf/common.sh@119 -- # set +e 00:24:01.919 11:47:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:01.919 11:47:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:01.919 rmmod nvme_rdma 00:24:01.919 rmmod nvme_fabrics 00:24:01.919 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 120: 2444025 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10 00:24:01.919 11:47:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:01.919 11:47:31 -- nvmf/common.sh@123 -- # set -e 00:24:01.919 11:47:31 -- nvmf/common.sh@124 -- # return 0 00:24:01.919 11:47:31 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:24:01.919 11:47:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:01.919 11:47:31 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:01.919 00:24:01.919 real 0m5.257s 00:24:01.919 user 0m17.984s 00:24:01.919 sys 0m1.360s 00:24:01.919 11:47:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:01.919 11:47:31 -- common/autotest_common.sh@10 -- # set +x 00:24:01.919 ************************************ 00:24:01.919 END TEST nvmf_shutdown_tc3 00:24:01.920 ************************************ 00:24:01.920 11:47:31 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:24:01.920 00:24:01.920 real 0m26.859s 00:24:01.920 user 1m14.375s 00:24:01.920 sys 0m10.471s 00:24:01.920 11:47:31 
-- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:01.920 11:47:31 -- common/autotest_common.sh@10 -- # set +x 00:24:01.920 ************************************ 00:24:01.920 END TEST nvmf_shutdown 00:24:01.920 ************************************ 00:24:01.920 11:47:31 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:24:01.920 11:47:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:01.920 11:47:31 -- common/autotest_common.sh@10 -- # set +x 00:24:01.920 11:47:31 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:24:01.920 11:47:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:01.920 11:47:31 -- common/autotest_common.sh@10 -- # set +x 00:24:01.920 11:47:31 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:24:01.920 11:47:31 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:24:01.920 11:47:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:01.920 11:47:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:01.920 11:47:31 -- common/autotest_common.sh@10 -- # set +x 00:24:02.177 ************************************ 00:24:02.177 START TEST nvmf_multicontroller 00:24:02.177 ************************************ 00:24:02.177 11:47:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:24:02.177 * Looking for test storage... 00:24:02.177 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:02.177 11:47:31 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:02.177 11:47:31 -- nvmf/common.sh@7 -- # uname -s 00:24:02.177 11:47:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:02.177 11:47:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:02.177 11:47:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:02.177 11:47:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:02.177 11:47:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:02.177 11:47:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:02.177 11:47:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:02.177 11:47:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:02.177 11:47:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:02.177 11:47:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:02.177 11:47:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:02.177 11:47:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:02.177 11:47:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:02.177 11:47:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:02.177 11:47:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:02.177 11:47:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:02.177 11:47:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:02.177 11:47:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:02.177 11:47:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:02.177 11:47:31 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.177 11:47:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.177 11:47:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.177 11:47:31 -- paths/export.sh@5 -- # export PATH 00:24:02.177 11:47:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.177 11:47:31 -- nvmf/common.sh@46 -- # : 0 00:24:02.177 11:47:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:02.177 11:47:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:02.177 11:47:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:02.177 11:47:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:02.177 11:47:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:02.177 11:47:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:02.177 11:47:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:02.177 11:47:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:02.177 11:47:31 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:02.177 11:47:31 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:02.177 11:47:31 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:02.177 11:47:31 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:02.177 11:47:31 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:02.177 11:47:31 -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:24:02.177 11:47:31 -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure 
the same IP for host and target.' 00:24:02.177 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:24:02.177 11:47:31 -- host/multicontroller.sh@20 -- # exit 0 00:24:02.177 00:24:02.177 real 0m0.128s 00:24:02.177 user 0m0.054s 00:24:02.177 sys 0m0.084s 00:24:02.177 11:47:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:02.177 11:47:31 -- common/autotest_common.sh@10 -- # set +x 00:24:02.177 ************************************ 00:24:02.177 END TEST nvmf_multicontroller 00:24:02.177 ************************************ 00:24:02.177 11:47:31 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:24:02.177 11:47:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:02.177 11:47:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:02.178 11:47:31 -- common/autotest_common.sh@10 -- # set +x 00:24:02.178 ************************************ 00:24:02.178 START TEST nvmf_aer 00:24:02.178 ************************************ 00:24:02.178 11:47:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:24:02.436 * Looking for test storage... 00:24:02.436 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:02.436 11:47:31 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:02.436 11:47:31 -- nvmf/common.sh@7 -- # uname -s 00:24:02.436 11:47:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:02.436 11:47:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:02.436 11:47:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:02.436 11:47:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:02.436 11:47:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:02.436 11:47:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:02.436 11:47:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:02.436 11:47:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:02.436 11:47:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:02.436 11:47:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:02.436 11:47:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:02.436 11:47:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:02.436 11:47:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:02.436 11:47:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:02.436 11:47:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:02.436 11:47:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:02.436 11:47:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:02.436 11:47:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:02.436 11:47:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:02.436 11:47:31 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.436 11:47:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.436 11:47:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.436 11:47:31 -- paths/export.sh@5 -- # export PATH 00:24:02.436 11:47:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.436 11:47:31 -- nvmf/common.sh@46 -- # : 0 00:24:02.436 11:47:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:02.436 11:47:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:02.436 11:47:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:02.436 11:47:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:02.436 11:47:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:02.436 11:47:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:02.436 11:47:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:02.436 11:47:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:02.436 11:47:31 -- host/aer.sh@11 -- # nvmftestinit 00:24:02.436 11:47:31 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:02.436 11:47:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:02.436 11:47:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:02.436 11:47:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:02.436 11:47:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:02.436 11:47:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.436 11:47:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:02.436 11:47:31 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.436 11:47:31 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:02.436 11:47:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:02.436 11:47:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:02.436 11:47:31 -- common/autotest_common.sh@10 -- # set +x 00:24:10.549 11:47:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:10.549 11:47:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:10.549 11:47:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:10.549 11:47:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:10.549 11:47:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:10.549 11:47:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:10.549 11:47:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:10.549 11:47:39 -- nvmf/common.sh@294 -- # net_devs=() 00:24:10.549 11:47:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:10.549 11:47:39 -- nvmf/common.sh@295 -- # e810=() 00:24:10.549 11:47:39 -- nvmf/common.sh@295 -- # local -ga e810 00:24:10.549 11:47:39 -- nvmf/common.sh@296 -- # x722=() 00:24:10.549 11:47:39 -- nvmf/common.sh@296 -- # local -ga x722 00:24:10.549 11:47:39 -- nvmf/common.sh@297 -- # mlx=() 00:24:10.549 11:47:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:10.549 11:47:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:10.549 11:47:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:10.549 11:47:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:10.549 11:47:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:10.549 11:47:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:10.549 11:47:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:10.549 11:47:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:10.549 11:47:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:10.549 11:47:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:10.549 11:47:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:10.549 11:47:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:10.549 11:47:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:10.549 11:47:39 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:10.549 11:47:39 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:24:10.549 11:47:39 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:10.549 11:47:39 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:10.549 11:47:39 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:10.549 11:47:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:10.549 11:47:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:10.549 11:47:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:10.549 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:10.549 11:47:39 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:10.549 11:47:39 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:10.549 11:47:39 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:10.549 11:47:39 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:10.549 11:47:39 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:10.549 11:47:39 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:10.549 11:47:39 -- nvmf/common.sh@339 -- # for pci in 
"${pci_devs[@]}" 00:24:10.549 11:47:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:10.549 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:10.549 11:47:39 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:10.549 11:47:39 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:10.549 11:47:39 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:10.549 11:47:39 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:10.549 11:47:39 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:10.549 11:47:39 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:10.549 11:47:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:10.549 11:47:39 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:10.549 11:47:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:10.549 11:47:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.549 11:47:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:10.549 11:47:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.549 11:47:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:10.549 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:10.549 11:47:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.549 11:47:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:10.549 11:47:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.549 11:47:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:10.549 11:47:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.549 11:47:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:10.549 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:10.549 11:47:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.549 11:47:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:10.549 11:47:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:10.549 11:47:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:10.549 11:47:39 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:10.549 11:47:39 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:10.549 11:47:39 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:10.549 11:47:39 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:10.549 11:47:39 -- nvmf/common.sh@57 -- # uname 00:24:10.549 11:47:39 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:10.549 11:47:39 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:24:10.549 11:47:39 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:10.549 11:47:39 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:10.549 11:47:39 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:10.549 11:47:39 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:10.549 11:47:39 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:10.549 11:47:39 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:10.549 11:47:39 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:10.549 11:47:39 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:10.549 11:47:39 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:10.549 11:47:39 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:10.549 11:47:39 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:10.549 11:47:39 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:10.549 11:47:39 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:10.549 11:47:39 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 
00:24:10.549 11:47:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:10.549 11:47:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:10.549 11:47:39 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:10.549 11:47:39 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:10.549 11:47:39 -- nvmf/common.sh@104 -- # continue 2 00:24:10.549 11:47:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:10.549 11:47:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:10.549 11:47:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:10.549 11:47:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:10.549 11:47:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:10.549 11:47:39 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:10.549 11:47:39 -- nvmf/common.sh@104 -- # continue 2 00:24:10.549 11:47:39 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:10.549 11:47:39 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:10.549 11:47:39 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:10.549 11:47:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:10.549 11:47:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:10.549 11:47:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:10.549 11:47:39 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:10.549 11:47:39 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:10.549 11:47:39 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:10.549 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:10.549 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:10.549 altname enp217s0f0np0 00:24:10.549 altname ens818f0np0 00:24:10.549 inet 192.168.100.8/24 scope global mlx_0_0 00:24:10.549 valid_lft forever preferred_lft forever 00:24:10.549 11:47:39 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:10.549 11:47:39 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:10.549 11:47:39 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:10.549 11:47:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:10.549 11:47:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:10.549 11:47:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:10.550 11:47:39 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:10.550 11:47:39 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:10.550 11:47:39 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:10.550 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:10.550 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:10.550 altname enp217s0f1np1 00:24:10.550 altname ens818f1np1 00:24:10.550 inet 192.168.100.9/24 scope global mlx_0_1 00:24:10.550 valid_lft forever preferred_lft forever 00:24:10.550 11:47:39 -- nvmf/common.sh@410 -- # return 0 00:24:10.550 11:47:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:10.550 11:47:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:10.550 11:47:39 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:10.550 11:47:39 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:10.550 11:47:39 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:10.550 11:47:39 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:10.550 11:47:39 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:10.550 11:47:39 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:10.550 11:47:39 -- nvmf/common.sh@53 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:10.550 11:47:39 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:10.550 11:47:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:10.550 11:47:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:10.550 11:47:39 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:10.550 11:47:39 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:10.550 11:47:39 -- nvmf/common.sh@104 -- # continue 2 00:24:10.550 11:47:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:10.550 11:47:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:10.550 11:47:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:10.550 11:47:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:10.550 11:47:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:10.550 11:47:39 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:10.550 11:47:39 -- nvmf/common.sh@104 -- # continue 2 00:24:10.550 11:47:39 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:10.550 11:47:39 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:10.550 11:47:39 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:10.550 11:47:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:10.550 11:47:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:10.550 11:47:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:10.550 11:47:39 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:10.550 11:47:39 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:10.550 11:47:39 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:10.550 11:47:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:10.550 11:47:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:10.550 11:47:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:10.550 11:47:39 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:10.550 192.168.100.9' 00:24:10.550 11:47:39 -- nvmf/common.sh@445 -- # head -n 1 00:24:10.550 11:47:39 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:10.550 192.168.100.9' 00:24:10.550 11:47:39 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:10.550 11:47:39 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:10.550 192.168.100.9' 00:24:10.550 11:47:39 -- nvmf/common.sh@446 -- # tail -n +2 00:24:10.550 11:47:39 -- nvmf/common.sh@446 -- # head -n 1 00:24:10.550 11:47:39 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:10.550 11:47:39 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:10.550 11:47:39 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:10.550 11:47:39 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:10.550 11:47:39 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:10.550 11:47:39 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:10.550 11:47:39 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:10.550 11:47:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:10.550 11:47:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:10.550 11:47:39 -- common/autotest_common.sh@10 -- # set +x 00:24:10.550 11:47:39 -- nvmf/common.sh@469 -- # nvmfpid=2448699 00:24:10.550 11:47:39 -- nvmf/common.sh@470 -- # waitforlisten 2448699 00:24:10.550 11:47:39 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:10.550 11:47:39 -- common/autotest_common.sh@819 -- # 
'[' -z 2448699 ']' 00:24:10.550 11:47:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.550 11:47:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:10.550 11:47:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.550 11:47:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:10.550 11:47:39 -- common/autotest_common.sh@10 -- # set +x 00:24:10.550 [2024-07-21 11:47:39.900513] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:24:10.550 [2024-07-21 11:47:39.900570] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.550 EAL: No free 2048 kB hugepages reported on node 1 00:24:10.809 [2024-07-21 11:47:39.990695] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:10.809 [2024-07-21 11:47:40.029844] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:10.809 [2024-07-21 11:47:40.029952] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:10.809 [2024-07-21 11:47:40.029962] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:10.809 [2024-07-21 11:47:40.029972] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:10.809 [2024-07-21 11:47:40.030009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.809 [2024-07-21 11:47:40.030106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:10.809 [2024-07-21 11:47:40.030127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:10.809 [2024-07-21 11:47:40.030129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.376 11:47:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:11.376 11:47:40 -- common/autotest_common.sh@852 -- # return 0 00:24:11.376 11:47:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:11.376 11:47:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:11.376 11:47:40 -- common/autotest_common.sh@10 -- # set +x 00:24:11.376 11:47:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:11.376 11:47:40 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:11.376 11:47:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:11.376 11:47:40 -- common/autotest_common.sh@10 -- # set +x 00:24:11.376 [2024-07-21 11:47:40.773019] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d654b0/0x1d699a0) succeed. 00:24:11.376 [2024-07-21 11:47:40.783274] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d66aa0/0x1dab030) succeed. 
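With the target up (nvmfpid=2448699) and the rdma transport plus both IB devices created, everything else in this test is driven through rpc_cmd, which forwards to the target's JSON-RPC socket (/var/tmp/spdk.sock in the trace above). A condensed sketch of the full bring-up sequence, the transport RPC just above plus the subsystem RPCs that follow, assuming SPDK's scripts/rpc.py wrapper is invoked directly instead of the rpc_cmd helper:

  # hypothetical manual equivalent of the rpc_cmd calls in this test, in order
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  ./scripts/rpc.py nvmf_get_subsystems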
00:24:11.635 11:47:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:11.635 11:47:40 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:11.635 11:47:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:11.635 11:47:40 -- common/autotest_common.sh@10 -- # set +x 00:24:11.635 Malloc0 00:24:11.635 11:47:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:11.635 11:47:40 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:11.635 11:47:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:11.635 11:47:40 -- common/autotest_common.sh@10 -- # set +x 00:24:11.635 11:47:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:11.635 11:47:40 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:11.635 11:47:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:11.635 11:47:40 -- common/autotest_common.sh@10 -- # set +x 00:24:11.635 11:47:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:11.635 11:47:40 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:11.635 11:47:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:11.635 11:47:40 -- common/autotest_common.sh@10 -- # set +x 00:24:11.635 [2024-07-21 11:47:40.950420] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:11.635 11:47:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:11.635 11:47:40 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:11.635 11:47:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:11.635 11:47:40 -- common/autotest_common.sh@10 -- # set +x 00:24:11.635 [2024-07-21 11:47:40.958104] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:11.635 [ 00:24:11.635 { 00:24:11.635 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:11.635 "subtype": "Discovery", 00:24:11.635 "listen_addresses": [], 00:24:11.635 "allow_any_host": true, 00:24:11.635 "hosts": [] 00:24:11.635 }, 00:24:11.635 { 00:24:11.635 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:11.635 "subtype": "NVMe", 00:24:11.635 "listen_addresses": [ 00:24:11.635 { 00:24:11.635 "transport": "RDMA", 00:24:11.635 "trtype": "RDMA", 00:24:11.635 "adrfam": "IPv4", 00:24:11.635 "traddr": "192.168.100.8", 00:24:11.635 "trsvcid": "4420" 00:24:11.635 } 00:24:11.635 ], 00:24:11.635 "allow_any_host": true, 00:24:11.635 "hosts": [], 00:24:11.635 "serial_number": "SPDK00000000000001", 00:24:11.635 "model_number": "SPDK bdev Controller", 00:24:11.635 "max_namespaces": 2, 00:24:11.635 "min_cntlid": 1, 00:24:11.635 "max_cntlid": 65519, 00:24:11.635 "namespaces": [ 00:24:11.635 { 00:24:11.635 "nsid": 1, 00:24:11.635 "bdev_name": "Malloc0", 00:24:11.635 "name": "Malloc0", 00:24:11.635 "nguid": "C1D94F567F074351BE15A1E67A602697", 00:24:11.635 "uuid": "c1d94f56-7f07-4351-be15-a1e67a602697" 00:24:11.635 } 00:24:11.635 ] 00:24:11.635 } 00:24:11.635 ] 00:24:11.635 11:47:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:11.635 11:47:40 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:11.635 11:47:40 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:11.635 11:47:40 -- host/aer.sh@33 -- # aerpid=2448890 00:24:11.635 11:47:40 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:11.635 11:47:40 -- 
host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:11.635 11:47:40 -- common/autotest_common.sh@1244 -- # local i=0 00:24:11.635 11:47:40 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:11.635 11:47:40 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:24:11.635 11:47:40 -- common/autotest_common.sh@1247 -- # i=1 00:24:11.635 11:47:40 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:24:11.635 EAL: No free 2048 kB hugepages reported on node 1 00:24:11.894 11:47:41 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:11.894 11:47:41 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:24:11.894 11:47:41 -- common/autotest_common.sh@1247 -- # i=2 00:24:11.894 11:47:41 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:24:11.894 11:47:41 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:11.894 11:47:41 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:11.894 11:47:41 -- common/autotest_common.sh@1255 -- # return 0 00:24:11.894 11:47:41 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:11.894 11:47:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:11.894 11:47:41 -- common/autotest_common.sh@10 -- # set +x 00:24:11.894 Malloc1 00:24:11.894 11:47:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:11.894 11:47:41 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:11.894 11:47:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:11.894 11:47:41 -- common/autotest_common.sh@10 -- # set +x 00:24:11.894 11:47:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:11.894 11:47:41 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:11.894 11:47:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:11.894 11:47:41 -- common/autotest_common.sh@10 -- # set +x 00:24:11.894 [ 00:24:11.894 { 00:24:11.894 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:11.894 "subtype": "Discovery", 00:24:11.894 "listen_addresses": [], 00:24:11.894 "allow_any_host": true, 00:24:11.894 "hosts": [] 00:24:11.894 }, 00:24:11.894 { 00:24:11.894 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:11.894 "subtype": "NVMe", 00:24:11.894 "listen_addresses": [ 00:24:11.894 { 00:24:11.894 "transport": "RDMA", 00:24:11.894 "trtype": "RDMA", 00:24:11.894 "adrfam": "IPv4", 00:24:11.894 "traddr": "192.168.100.8", 00:24:11.894 "trsvcid": "4420" 00:24:11.894 } 00:24:11.894 ], 00:24:11.894 "allow_any_host": true, 00:24:11.894 "hosts": [], 00:24:11.894 "serial_number": "SPDK00000000000001", 00:24:11.894 "model_number": "SPDK bdev Controller", 00:24:11.894 "max_namespaces": 2, 00:24:11.894 "min_cntlid": 1, 00:24:11.894 "max_cntlid": 65519, 00:24:11.894 "namespaces": [ 00:24:11.894 { 00:24:11.894 "nsid": 1, 00:24:11.894 "bdev_name": "Malloc0", 00:24:11.894 "name": "Malloc0", 00:24:11.894 "nguid": "C1D94F567F074351BE15A1E67A602697", 00:24:11.894 "uuid": "c1d94f56-7f07-4351-be15-a1e67a602697" 00:24:11.894 }, 00:24:11.894 { 00:24:11.894 "nsid": 2, 00:24:11.894 "bdev_name": "Malloc1", 00:24:11.894 "name": "Malloc1", 00:24:11.894 "nguid": "466C6179E5824FD9B440212D08956B33", 00:24:11.894 "uuid": "466c6179-e582-4fd9-b440-212d08956b33" 00:24:11.894 } 00:24:11.894 ] 00:24:11.894 } 00:24:11.894 ] 00:24:11.894 11:47:41 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:11.894 11:47:41 -- host/aer.sh@43 -- # wait 2448890 00:24:11.894 Asynchronous Event Request test 00:24:11.894 Attaching to 192.168.100.8 00:24:11.894 Attached to 192.168.100.8 00:24:11.894 Registering asynchronous event callbacks... 00:24:11.894 Starting namespace attribute notice tests for all controllers... 00:24:11.894 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:11.894 aer_cb - Changed Namespace 00:24:11.894 Cleaning up... 00:24:11.894 11:47:41 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:11.894 11:47:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:11.894 11:47:41 -- common/autotest_common.sh@10 -- # set +x 00:24:12.153 11:47:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:12.153 11:47:41 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:12.153 11:47:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:12.153 11:47:41 -- common/autotest_common.sh@10 -- # set +x 00:24:12.153 11:47:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:12.153 11:47:41 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:12.153 11:47:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:12.153 11:47:41 -- common/autotest_common.sh@10 -- # set +x 00:24:12.153 11:47:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:12.153 11:47:41 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:12.153 11:47:41 -- host/aer.sh@51 -- # nvmftestfini 00:24:12.153 11:47:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:12.153 11:47:41 -- nvmf/common.sh@116 -- # sync 00:24:12.153 11:47:41 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:12.153 11:47:41 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:12.153 11:47:41 -- nvmf/common.sh@119 -- # set +e 00:24:12.153 11:47:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:12.153 11:47:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:12.153 rmmod nvme_rdma 00:24:12.153 rmmod nvme_fabrics 00:24:12.153 11:47:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:12.153 11:47:41 -- nvmf/common.sh@123 -- # set -e 00:24:12.153 11:47:41 -- nvmf/common.sh@124 -- # return 0 00:24:12.153 11:47:41 -- nvmf/common.sh@477 -- # '[' -n 2448699 ']' 00:24:12.153 11:47:41 -- nvmf/common.sh@478 -- # killprocess 2448699 00:24:12.153 11:47:41 -- common/autotest_common.sh@926 -- # '[' -z 2448699 ']' 00:24:12.153 11:47:41 -- common/autotest_common.sh@930 -- # kill -0 2448699 00:24:12.153 11:47:41 -- common/autotest_common.sh@931 -- # uname 00:24:12.153 11:47:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:12.153 11:47:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2448699 00:24:12.153 11:47:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:12.153 11:47:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:12.153 11:47:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2448699' 00:24:12.153 killing process with pid 2448699 00:24:12.153 11:47:41 -- common/autotest_common.sh@945 -- # kill 2448699 00:24:12.153 [2024-07-21 11:47:41.467068] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:24:12.153 11:47:41 -- common/autotest_common.sh@950 -- # wait 2448699 00:24:12.410 11:47:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 
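That completes the AER check: the aer tool registers asynchronous-event callbacks, the script hot-adds a second namespace, and the tool logs the resulting Namespace Attribute Changed notice (log page 4, aen_event_type 0x02) before cleanup. A condensed sketch of the flow using the same RPCs as the trace, assuming the aer helper has been built under test/nvme/aer in the SPDK tree:

    # Export one namespace, then start the AER listener against it.
    ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    ./test/nvme/aer/aer -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
    # The tool touches the file once its callbacks are registered; then trigger the AEN
    # by hot-adding a second namespace, exactly as the trace does.
    while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done
    ./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2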
00:24:12.411 11:47:41 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:12.411 00:24:12.411 real 0m10.196s 00:24:12.411 user 0m8.947s 00:24:12.411 sys 0m6.786s 00:24:12.411 11:47:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:12.411 11:47:41 -- common/autotest_common.sh@10 -- # set +x 00:24:12.411 ************************************ 00:24:12.411 END TEST nvmf_aer 00:24:12.411 ************************************ 00:24:12.411 11:47:41 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:24:12.411 11:47:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:12.411 11:47:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:12.411 11:47:41 -- common/autotest_common.sh@10 -- # set +x 00:24:12.411 ************************************ 00:24:12.411 START TEST nvmf_async_init 00:24:12.411 ************************************ 00:24:12.411 11:47:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:24:12.669 * Looking for test storage... 00:24:12.669 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:12.669 11:47:41 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:12.669 11:47:41 -- nvmf/common.sh@7 -- # uname -s 00:24:12.669 11:47:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:12.669 11:47:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:12.669 11:47:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:12.669 11:47:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:12.669 11:47:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:12.669 11:47:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:12.669 11:47:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:12.669 11:47:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:12.669 11:47:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:12.669 11:47:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:12.669 11:47:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:12.669 11:47:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:12.669 11:47:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:12.669 11:47:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:12.669 11:47:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:12.669 11:47:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:12.669 11:47:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:12.669 11:47:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:12.669 11:47:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:12.669 11:47:41 -- paths/export.sh@2 -- # 
[paths/export.sh@2-@6: PATH assembly elided: the export prepends the same /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin entries several times over ahead of the system PATH, exports the result, and echoes it] 00:24:12.669 11:47:41 -- nvmf/common.sh@46 -- # : 0 00:24:12.669 11:47:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:12.669 11:47:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:12.669 11:47:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:12.669 11:47:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:12.669 11:47:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:12.669 11:47:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:12.669 11:47:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:12.669 11:47:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:12.669 11:47:41 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:12.669 11:47:41 -- host/async_init.sh@14 -- # null_block_size=512 00:24:12.669 11:47:41 -- host/async_init.sh@15 -- # null_bdev=null0 00:24:12.669 11:47:41 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:12.669 11:47:41 -- host/async_init.sh@20 -- # uuidgen 00:24:12.669 11:47:41 -- host/async_init.sh@20 -- # tr -d - 00:24:12.669 11:47:41 -- host/async_init.sh@20 -- # nguid=1c827608944044e49433f8e718faa813 00:24:12.669 11:47:41 -- host/async_init.sh@22 -- # nvmftestinit 00:24:12.669 11:47:41 -- nvmf/common.sh@429 -- # '[' -z rdma ']'
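The nguid the test just derived is nothing more than a UUID with the dashes stripped; the target later reports the dashed form back as the namespace uuid, which is what async_init compares. A one-line sketch of that derivation:

    # Produce the 32-hex-digit NGUID form that nvmf_subsystem_add_ns -g expects.
    nguid=$(uuidgen | tr -d -)   # e.g. 1c827608944044e49433f8e718faa813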
00:24:12.669 11:47:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:12.669 11:47:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:12.669 11:47:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:12.669 11:47:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:12.669 11:47:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.669 11:47:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:12.669 11:47:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.669 11:47:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:12.669 11:47:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:12.669 11:47:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:12.669 11:47:41 -- common/autotest_common.sh@10 -- # set +x 00:24:20.825 11:47:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:20.825 11:47:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:20.825 11:47:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:20.825 11:47:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:20.825 11:47:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:20.825 11:47:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:20.825 11:47:49 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:20.825 11:47:49 -- nvmf/common.sh@294 -- # net_devs=() 00:24:20.825 11:47:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:20.825 11:47:49 -- nvmf/common.sh@295 -- # e810=() 00:24:20.825 11:47:49 -- nvmf/common.sh@295 -- # local -ga e810 00:24:20.825 11:47:49 -- nvmf/common.sh@296 -- # x722=() 00:24:20.825 11:47:49 -- nvmf/common.sh@296 -- # local -ga x722 00:24:20.825 11:47:49 -- nvmf/common.sh@297 -- # mlx=() 00:24:20.825 11:47:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:20.825 11:47:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:20.825 11:47:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:20.825 11:47:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:20.825 11:47:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:20.825 11:47:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:20.825 11:47:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:20.825 11:47:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:20.825 11:47:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:20.825 11:47:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:20.825 11:47:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:20.825 11:47:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:20.825 11:47:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:20.825 11:47:49 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:20.825 11:47:49 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:24:20.825 11:47:49 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:20.825 11:47:49 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:20.825 11:47:49 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:20.825 11:47:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:20.825 11:47:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:20.825 11:47:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:20.825 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:20.825 11:47:49 -- 
nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:20.825 11:47:49 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:20.825 11:47:49 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:20.825 11:47:49 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:20.825 11:47:49 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:20.825 11:47:49 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:20.825 11:47:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:20.825 11:47:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:20.825 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:20.825 11:47:49 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:20.825 11:47:49 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:20.826 11:47:49 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:20.826 11:47:49 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:20.826 11:47:49 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:20.826 11:47:49 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:20.826 11:47:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:20.826 11:47:49 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:20.826 11:47:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:20.826 11:47:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.826 11:47:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:20.826 11:47:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.826 11:47:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:20.826 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:20.826 11:47:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.826 11:47:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:20.826 11:47:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.826 11:47:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:20.826 11:47:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.826 11:47:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:20.826 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:20.826 11:47:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.826 11:47:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:20.826 11:47:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:20.826 11:47:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:20.826 11:47:49 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:20.826 11:47:49 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:20.826 11:47:49 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:20.826 11:47:49 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:20.826 11:47:49 -- nvmf/common.sh@57 -- # uname 00:24:20.826 11:47:49 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:20.826 11:47:49 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:24:20.826 11:47:49 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:20.826 11:47:49 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:20.826 11:47:49 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:20.826 11:47:49 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:20.826 11:47:49 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:20.826 11:47:49 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:20.826 11:47:49 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:20.826 11:47:49 -- nvmf/common.sh@71 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:24:20.826 11:47:49 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:20.826 11:47:49 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:20.826 11:47:49 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:20.826 11:47:49 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:20.826 11:47:50 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:20.826 11:47:50 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:20.826 11:47:50 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:20.826 11:47:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:20.826 11:47:50 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:20.826 11:47:50 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:20.826 11:47:50 -- nvmf/common.sh@104 -- # continue 2 00:24:20.826 11:47:50 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:20.826 11:47:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:20.826 11:47:50 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:20.826 11:47:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:20.826 11:47:50 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:20.826 11:47:50 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:20.826 11:47:50 -- nvmf/common.sh@104 -- # continue 2 00:24:20.826 11:47:50 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:20.826 11:47:50 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:20.826 11:47:50 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:20.826 11:47:50 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:20.826 11:47:50 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:20.826 11:47:50 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:20.826 11:47:50 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:20.826 11:47:50 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:20.826 11:47:50 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:20.826 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:20.826 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:20.826 altname enp217s0f0np0 00:24:20.826 altname ens818f0np0 00:24:20.826 inet 192.168.100.8/24 scope global mlx_0_0 00:24:20.826 valid_lft forever preferred_lft forever 00:24:20.826 11:47:50 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:20.826 11:47:50 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:20.826 11:47:50 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:20.826 11:47:50 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:20.826 11:47:50 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:20.826 11:47:50 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:20.826 11:47:50 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:20.826 11:47:50 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:20.826 11:47:50 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:20.826 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:20.826 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:20.826 altname enp217s0f1np1 00:24:20.826 altname ens818f1np1 00:24:20.826 inet 192.168.100.9/24 scope global mlx_0_1 00:24:20.826 valid_lft forever preferred_lft forever 00:24:20.826 11:47:50 -- nvmf/common.sh@410 -- # return 0 00:24:20.826 11:47:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:20.826 11:47:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:20.826 11:47:50 -- 
nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:20.826 11:47:50 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:20.826 11:47:50 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:20.826 11:47:50 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:20.826 11:47:50 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:20.826 11:47:50 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:20.826 11:47:50 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:20.826 11:47:50 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:20.826 11:47:50 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:20.826 11:47:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:20.826 11:47:50 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:20.826 11:47:50 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:20.826 11:47:50 -- nvmf/common.sh@104 -- # continue 2 00:24:20.826 11:47:50 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:20.826 11:47:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:20.826 11:47:50 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:20.826 11:47:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:20.826 11:47:50 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:20.826 11:47:50 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:20.826 11:47:50 -- nvmf/common.sh@104 -- # continue 2 00:24:20.826 11:47:50 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:20.826 11:47:50 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:20.826 11:47:50 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:20.826 11:47:50 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:20.826 11:47:50 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:20.826 11:47:50 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:20.826 11:47:50 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:20.826 11:47:50 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:20.826 11:47:50 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:20.826 11:47:50 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:20.826 11:47:50 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:20.826 11:47:50 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:20.826 11:47:50 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:20.826 192.168.100.9' 00:24:20.826 11:47:50 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:20.826 192.168.100.9' 00:24:20.826 11:47:50 -- nvmf/common.sh@445 -- # head -n 1 00:24:20.826 11:47:50 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:20.826 11:47:50 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:20.826 192.168.100.9' 00:24:20.826 11:47:50 -- nvmf/common.sh@446 -- # tail -n +2 00:24:20.826 11:47:50 -- nvmf/common.sh@446 -- # head -n 1 00:24:20.826 11:47:50 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:20.826 11:47:50 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:20.826 11:47:50 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:20.826 11:47:50 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:20.826 11:47:50 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:20.826 11:47:50 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:20.826 11:47:50 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:20.826 11:47:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:20.826 
11:47:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:20.826 11:47:50 -- common/autotest_common.sh@10 -- # set +x 00:24:20.826 11:47:50 -- nvmf/common.sh@469 -- # nvmfpid=2453094 00:24:20.826 11:47:50 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:20.826 11:47:50 -- nvmf/common.sh@470 -- # waitforlisten 2453094 00:24:20.826 11:47:50 -- common/autotest_common.sh@819 -- # '[' -z 2453094 ']' 00:24:20.826 11:47:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:20.826 11:47:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:20.826 11:47:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:20.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:20.826 11:47:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:20.826 11:47:50 -- common/autotest_common.sh@10 -- # set +x 00:24:20.826 [2024-07-21 11:47:50.218192] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:24:20.826 [2024-07-21 11:47:50.218241] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:21.083 EAL: No free 2048 kB hugepages reported on node 1 00:24:21.083 [2024-07-21 11:47:50.303814] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.083 [2024-07-21 11:47:50.341357] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:21.083 [2024-07-21 11:47:50.341461] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:21.083 [2024-07-21 11:47:50.341471] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:21.083 [2024-07-21 11:47:50.341480] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:21.083 [2024-07-21 11:47:50.341504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.646 11:47:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:21.646 11:47:51 -- common/autotest_common.sh@852 -- # return 0 00:24:21.646 11:47:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:21.646 11:47:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:21.646 11:47:51 -- common/autotest_common.sh@10 -- # set +x 00:24:21.646 11:47:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:21.646 11:47:51 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:21.646 11:47:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.646 11:47:51 -- common/autotest_common.sh@10 -- # set +x 00:24:21.903 [2024-07-21 11:47:51.079093] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2437320/0x243b810) succeed. 00:24:21.903 [2024-07-21 11:47:51.088288] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2438820/0x247cea0) succeed. 
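As in the aer run, the two target addresses come from splitting the newline-separated RDMA_IP_LIST with head and tail. A sketch of that parsing, with the interface names hardcoded to the two mlx ports the trace discovered (the harness derives them via get_rdma_if_list instead):

    # One IPv4 address per RDMA-capable netdev, newline separated.
    RDMA_IP_LIST=$(for dev in mlx_0_0 mlx_0_1; do
        ip -o -4 addr show "$dev" | awk '{print $4}' | cut -d/ -f1
    done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9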
00:24:21.903 11:47:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.903 11:47:51 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:21.903 11:47:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.903 11:47:51 -- common/autotest_common.sh@10 -- # set +x 00:24:21.903 null0 00:24:21.903 11:47:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.903 11:47:51 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:21.903 11:47:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.903 11:47:51 -- common/autotest_common.sh@10 -- # set +x 00:24:21.903 11:47:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.903 11:47:51 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:21.903 11:47:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.903 11:47:51 -- common/autotest_common.sh@10 -- # set +x 00:24:21.903 11:47:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.903 11:47:51 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 1c827608944044e49433f8e718faa813 00:24:21.903 11:47:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.903 11:47:51 -- common/autotest_common.sh@10 -- # set +x 00:24:21.903 11:47:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.903 11:47:51 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:24:21.903 11:47:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.903 11:47:51 -- common/autotest_common.sh@10 -- # set +x 00:24:21.903 [2024-07-21 11:47:51.174952] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:21.903 11:47:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.903 11:47:51 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:21.903 11:47:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.903 11:47:51 -- common/autotest_common.sh@10 -- # set +x 00:24:21.903 nvme0n1 00:24:21.903 11:47:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.903 11:47:51 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:21.903 11:47:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.903 11:47:51 -- common/autotest_common.sh@10 -- # set +x 00:24:21.903 [ 00:24:21.903 { 00:24:21.903 "name": "nvme0n1", 00:24:21.903 "aliases": [ 00:24:21.903 "1c827608-9440-44e4-9433-f8e718faa813" 00:24:21.903 ], 00:24:21.903 "product_name": "NVMe disk", 00:24:21.903 "block_size": 512, 00:24:21.903 "num_blocks": 2097152, 00:24:21.903 "uuid": "1c827608-9440-44e4-9433-f8e718faa813", 00:24:21.903 "assigned_rate_limits": { 00:24:21.903 "rw_ios_per_sec": 0, 00:24:21.903 "rw_mbytes_per_sec": 0, 00:24:21.903 "r_mbytes_per_sec": 0, 00:24:21.903 "w_mbytes_per_sec": 0 00:24:21.903 }, 00:24:21.903 "claimed": false, 00:24:21.903 "zoned": false, 00:24:21.903 "supported_io_types": { 00:24:21.903 "read": true, 00:24:21.903 "write": true, 00:24:21.903 "unmap": false, 00:24:21.903 "write_zeroes": true, 00:24:21.903 "flush": true, 00:24:21.903 "reset": true, 00:24:21.903 "compare": true, 00:24:21.903 "compare_and_write": true, 00:24:21.904 "abort": true, 00:24:21.904 "nvme_admin": true, 00:24:21.904 "nvme_io": true 00:24:21.904 }, 00:24:21.904 "memory_domains": [ 00:24:21.904 { 00:24:21.904 
"dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:24:21.904 "dma_device_type": 0 00:24:21.904 } 00:24:21.904 ], 00:24:21.904 "driver_specific": { 00:24:21.904 "nvme": [ 00:24:21.904 { 00:24:21.904 "trid": { 00:24:21.904 "trtype": "RDMA", 00:24:21.904 "adrfam": "IPv4", 00:24:21.904 "traddr": "192.168.100.8", 00:24:21.904 "trsvcid": "4420", 00:24:21.904 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:21.904 }, 00:24:21.904 "ctrlr_data": { 00:24:21.904 "cntlid": 1, 00:24:21.904 "vendor_id": "0x8086", 00:24:21.904 "model_number": "SPDK bdev Controller", 00:24:21.904 "serial_number": "00000000000000000000", 00:24:21.904 "firmware_revision": "24.01.1", 00:24:21.904 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:21.904 "oacs": { 00:24:21.904 "security": 0, 00:24:21.904 "format": 0, 00:24:21.904 "firmware": 0, 00:24:21.904 "ns_manage": 0 00:24:21.904 }, 00:24:21.904 "multi_ctrlr": true, 00:24:21.904 "ana_reporting": false 00:24:21.904 }, 00:24:21.904 "vs": { 00:24:21.904 "nvme_version": "1.3" 00:24:21.904 }, 00:24:21.904 "ns_data": { 00:24:21.904 "id": 1, 00:24:21.904 "can_share": true 00:24:21.904 } 00:24:21.904 } 00:24:21.904 ], 00:24:21.904 "mp_policy": "active_passive" 00:24:21.904 } 00:24:21.904 } 00:24:21.904 ] 00:24:21.904 11:47:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.904 11:47:51 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:21.904 11:47:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.904 11:47:51 -- common/autotest_common.sh@10 -- # set +x 00:24:21.904 [2024-07-21 11:47:51.290781] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:21.904 [2024-07-21 11:47:51.315180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:22.161 [2024-07-21 11:47:51.340154] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:22.161 11:47:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.161 11:47:51 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:22.161 11:47:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:22.161 11:47:51 -- common/autotest_common.sh@10 -- # set +x 00:24:22.161 [ 00:24:22.161 { 00:24:22.161 "name": "nvme0n1", 00:24:22.161 "aliases": [ 00:24:22.161 "1c827608-9440-44e4-9433-f8e718faa813" 00:24:22.161 ], 00:24:22.161 "product_name": "NVMe disk", 00:24:22.161 "block_size": 512, 00:24:22.161 "num_blocks": 2097152, 00:24:22.161 "uuid": "1c827608-9440-44e4-9433-f8e718faa813", 00:24:22.161 "assigned_rate_limits": { 00:24:22.161 "rw_ios_per_sec": 0, 00:24:22.161 "rw_mbytes_per_sec": 0, 00:24:22.161 "r_mbytes_per_sec": 0, 00:24:22.161 "w_mbytes_per_sec": 0 00:24:22.161 }, 00:24:22.161 "claimed": false, 00:24:22.161 "zoned": false, 00:24:22.161 "supported_io_types": { 00:24:22.161 "read": true, 00:24:22.161 "write": true, 00:24:22.161 "unmap": false, 00:24:22.161 "write_zeroes": true, 00:24:22.161 "flush": true, 00:24:22.161 "reset": true, 00:24:22.161 "compare": true, 00:24:22.161 "compare_and_write": true, 00:24:22.161 "abort": true, 00:24:22.161 "nvme_admin": true, 00:24:22.161 "nvme_io": true 00:24:22.161 }, 00:24:22.161 "memory_domains": [ 00:24:22.161 { 00:24:22.161 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:24:22.161 "dma_device_type": 0 00:24:22.161 } 00:24:22.161 ], 00:24:22.161 "driver_specific": { 00:24:22.161 "nvme": [ 00:24:22.161 { 00:24:22.161 "trid": { 00:24:22.161 "trtype": "RDMA", 00:24:22.161 "adrfam": "IPv4", 00:24:22.161 "traddr": "192.168.100.8", 00:24:22.161 "trsvcid": "4420", 00:24:22.161 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:22.161 }, 00:24:22.161 "ctrlr_data": { 00:24:22.161 "cntlid": 2, 00:24:22.161 "vendor_id": "0x8086", 00:24:22.161 "model_number": "SPDK bdev Controller", 00:24:22.161 "serial_number": "00000000000000000000", 00:24:22.161 "firmware_revision": "24.01.1", 00:24:22.161 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:22.161 "oacs": { 00:24:22.161 "security": 0, 00:24:22.161 "format": 0, 00:24:22.161 "firmware": 0, 00:24:22.161 "ns_manage": 0 00:24:22.161 }, 00:24:22.161 "multi_ctrlr": true, 00:24:22.161 "ana_reporting": false 00:24:22.161 }, 00:24:22.161 "vs": { 00:24:22.161 "nvme_version": "1.3" 00:24:22.161 }, 00:24:22.161 "ns_data": { 00:24:22.161 "id": 1, 00:24:22.161 "can_share": true 00:24:22.161 } 00:24:22.161 } 00:24:22.161 ], 00:24:22.161 "mp_policy": "active_passive" 00:24:22.161 } 00:24:22.161 } 00:24:22.161 ] 00:24:22.161 11:47:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.161 11:47:51 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.161 11:47:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:22.161 11:47:51 -- common/autotest_common.sh@10 -- # set +x 00:24:22.161 11:47:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.161 11:47:51 -- host/async_init.sh@53 -- # mktemp 00:24:22.161 11:47:51 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.GIUl49wOR9 00:24:22.161 11:47:51 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:22.161 11:47:51 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.GIUl49wOR9 00:24:22.161 11:47:51 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:22.161 11:47:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:22.161 11:47:51 -- common/autotest_common.sh@10 -- # set +x 
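The final leg reuses the same subsystem on port 4421 with a TLS pre-shared key; note the target itself logs that TLS support is considered experimental in this release. A sketch of the key handling and secure-channel setup from the trace (the key value is the interchange-format sample the test uses, not a real secret, and the mktemp filename differs per run):

    # Write the PSK interchange key to a private temp file.
    key_path=$(mktemp)
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
    chmod 0600 "$key_path"
    # Lock the subsystem down to explicitly registered hosts, open a secure-channel
    # listener on the second port, and register one host with the PSK.
    ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
    # Host side: attach through the secured listener with the same key and host NQN.
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"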
00:24:22.161 11:47:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.161 11:47:51 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:24:22.161 11:47:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:22.161 11:47:51 -- common/autotest_common.sh@10 -- # set +x 00:24:22.161 [2024-07-21 11:47:51.423310] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:24:22.161 11:47:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.161 11:47:51 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GIUl49wOR9 00:24:22.161 11:47:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:22.161 11:47:51 -- common/autotest_common.sh@10 -- # set +x 00:24:22.161 11:47:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.161 11:47:51 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GIUl49wOR9 00:24:22.161 11:47:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:22.161 11:47:51 -- common/autotest_common.sh@10 -- # set +x 00:24:22.161 [2024-07-21 11:47:51.439332] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:22.161 nvme0n1 00:24:22.161 11:47:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.161 11:47:51 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:22.161 11:47:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:22.161 11:47:51 -- common/autotest_common.sh@10 -- # set +x 00:24:22.161 [ 00:24:22.161 { 00:24:22.161 "name": "nvme0n1", 00:24:22.161 "aliases": [ 00:24:22.161 "1c827608-9440-44e4-9433-f8e718faa813" 00:24:22.161 ], 00:24:22.161 "product_name": "NVMe disk", 00:24:22.161 "block_size": 512, 00:24:22.161 "num_blocks": 2097152, 00:24:22.161 "uuid": "1c827608-9440-44e4-9433-f8e718faa813", 00:24:22.161 "assigned_rate_limits": { 00:24:22.161 "rw_ios_per_sec": 0, 00:24:22.161 "rw_mbytes_per_sec": 0, 00:24:22.161 "r_mbytes_per_sec": 0, 00:24:22.161 "w_mbytes_per_sec": 0 00:24:22.161 }, 00:24:22.161 "claimed": false, 00:24:22.161 "zoned": false, 00:24:22.161 "supported_io_types": { 00:24:22.161 "read": true, 00:24:22.161 "write": true, 00:24:22.161 "unmap": false, 00:24:22.161 "write_zeroes": true, 00:24:22.161 "flush": true, 00:24:22.161 "reset": true, 00:24:22.161 "compare": true, 00:24:22.161 "compare_and_write": true, 00:24:22.161 "abort": true, 00:24:22.161 "nvme_admin": true, 00:24:22.162 "nvme_io": true 00:24:22.162 }, 00:24:22.162 "memory_domains": [ 00:24:22.162 { 00:24:22.162 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:24:22.162 "dma_device_type": 0 00:24:22.162 } 00:24:22.162 ], 00:24:22.162 "driver_specific": { 00:24:22.162 "nvme": [ 00:24:22.162 { 00:24:22.162 "trid": { 00:24:22.162 "trtype": "RDMA", 00:24:22.162 "adrfam": "IPv4", 00:24:22.162 "traddr": "192.168.100.8", 00:24:22.162 "trsvcid": "4421", 00:24:22.162 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:22.162 }, 00:24:22.162 "ctrlr_data": { 00:24:22.162 "cntlid": 3, 00:24:22.162 "vendor_id": "0x8086", 00:24:22.162 "model_number": "SPDK bdev Controller", 00:24:22.162 "serial_number": "00000000000000000000", 00:24:22.162 "firmware_revision": "24.01.1", 00:24:22.162 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:22.162 
"oacs": { 00:24:22.162 "security": 0, 00:24:22.162 "format": 0, 00:24:22.162 "firmware": 0, 00:24:22.162 "ns_manage": 0 00:24:22.162 }, 00:24:22.162 "multi_ctrlr": true, 00:24:22.162 "ana_reporting": false 00:24:22.162 }, 00:24:22.162 "vs": { 00:24:22.162 "nvme_version": "1.3" 00:24:22.162 }, 00:24:22.162 "ns_data": { 00:24:22.162 "id": 1, 00:24:22.162 "can_share": true 00:24:22.162 } 00:24:22.162 } 00:24:22.162 ], 00:24:22.162 "mp_policy": "active_passive" 00:24:22.162 } 00:24:22.162 } 00:24:22.162 ] 00:24:22.162 11:47:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.162 11:47:51 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.162 11:47:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:22.162 11:47:51 -- common/autotest_common.sh@10 -- # set +x 00:24:22.162 11:47:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.162 11:47:51 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.GIUl49wOR9 00:24:22.162 11:47:51 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:22.162 11:47:51 -- host/async_init.sh@78 -- # nvmftestfini 00:24:22.162 11:47:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:22.162 11:47:51 -- nvmf/common.sh@116 -- # sync 00:24:22.162 11:47:51 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:22.162 11:47:51 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:22.162 11:47:51 -- nvmf/common.sh@119 -- # set +e 00:24:22.162 11:47:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:22.162 11:47:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:22.419 rmmod nvme_rdma 00:24:22.419 rmmod nvme_fabrics 00:24:22.419 11:47:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:22.419 11:47:51 -- nvmf/common.sh@123 -- # set -e 00:24:22.419 11:47:51 -- nvmf/common.sh@124 -- # return 0 00:24:22.419 11:47:51 -- nvmf/common.sh@477 -- # '[' -n 2453094 ']' 00:24:22.419 11:47:51 -- nvmf/common.sh@478 -- # killprocess 2453094 00:24:22.419 11:47:51 -- common/autotest_common.sh@926 -- # '[' -z 2453094 ']' 00:24:22.419 11:47:51 -- common/autotest_common.sh@930 -- # kill -0 2453094 00:24:22.419 11:47:51 -- common/autotest_common.sh@931 -- # uname 00:24:22.419 11:47:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:22.419 11:47:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2453094 00:24:22.419 11:47:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:22.419 11:47:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:22.419 11:47:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2453094' 00:24:22.419 killing process with pid 2453094 00:24:22.419 11:47:51 -- common/autotest_common.sh@945 -- # kill 2453094 00:24:22.419 11:47:51 -- common/autotest_common.sh@950 -- # wait 2453094 00:24:22.676 11:47:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:22.676 11:47:51 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:22.676 00:24:22.676 real 0m10.140s 00:24:22.676 user 0m4.103s 00:24:22.676 sys 0m6.793s 00:24:22.676 11:47:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:22.676 11:47:51 -- common/autotest_common.sh@10 -- # set +x 00:24:22.676 ************************************ 00:24:22.676 END TEST nvmf_async_init 00:24:22.676 ************************************ 00:24:22.676 11:47:51 -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:24:22.676 11:47:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:22.676 
11:47:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:22.676 11:47:51 -- common/autotest_common.sh@10 -- # set +x 00:24:22.676 ************************************ 00:24:22.676 START TEST dma 00:24:22.676 ************************************ 00:24:22.676 11:47:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:24:22.676 * Looking for test storage... 00:24:22.676 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:22.676 11:47:52 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:22.676 11:47:52 -- nvmf/common.sh@7 -- # uname -s 00:24:22.676 11:47:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:22.676 11:47:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:22.676 11:47:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:22.676 11:47:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:22.676 11:47:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:22.676 11:47:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:22.676 11:47:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:22.676 11:47:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:22.676 11:47:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:22.676 11:47:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:22.676 11:47:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:22.676 11:47:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:22.676 11:47:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:22.676 11:47:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:22.676 11:47:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:22.676 11:47:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:22.676 11:47:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:22.676 11:47:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:22.676 11:47:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:22.676 11:47:52 -- paths/export.sh@2-@6 -- # [PATH assembly elided: the same /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin entries are prepended several times over ahead of the system PATH, exported, and echoed, exactly as in the nvmf_async_init run above] 00:24:22.677 11:47:52 -- nvmf/common.sh@46 -- # : 0 00:24:22.677 11:47:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:22.677 11:47:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:22.677 11:47:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:22.677 11:47:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:22.677 11:47:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:22.677 11:47:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:22.677 11:47:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:22.677 11:47:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:22.677 11:47:52 -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:24:22.677 11:47:52 -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:24:22.677 11:47:52 -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:24:22.677 11:47:52 -- host/dma.sh@18 -- # subsystem=0 00:24:22.677 11:47:52 -- host/dma.sh@93 -- # nvmftestinit 00:24:22.677 11:47:52 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:22.677 11:47:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:22.677 11:47:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:22.677 11:47:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:22.677 11:47:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:22.677 11:47:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:22.677 11:47:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:22.677 11:47:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:22.677 11:47:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:22.677 11:47:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:22.677 11:47:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:22.677 11:47:52 -- common/autotest_common.sh@10 -- # set +x 00:24:30.772 11:48:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:30.772 11:48:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:30.772 11:48:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:30.772 11:48:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:30.772 11:48:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:30.772 11:48:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:30.772 11:48:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:30.772 11:48:00 -- nvmf/common.sh@294 -- # net_devs=()
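The device scan that now repeats for the dma test resolves each supported Mellanox/Intel PCI function to its kernel net interface by globbing sysfs. A sketch of the per-device step, using the 0000:d9:00.0 port from the trace:

    # Map an RDMA-capable PCI function to its netdev name(s) via sysfs.
    pci=0000:d9:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keeping e.g. mlx_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"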
00:24:30.772 11:48:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:30.772 11:48:00 -- nvmf/common.sh@295 -- # e810=() 00:24:30.772 11:48:00 -- nvmf/common.sh@295 -- # local -ga e810 00:24:30.772 11:48:00 -- nvmf/common.sh@296 -- # x722=() 00:24:30.772 11:48:00 -- nvmf/common.sh@296 -- # local -ga x722 00:24:30.772 11:48:00 -- nvmf/common.sh@297 -- # mlx=() 00:24:30.772 11:48:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:30.772 11:48:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:30.772 11:48:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:30.772 11:48:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:30.772 11:48:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:30.772 11:48:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:30.772 11:48:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:30.772 11:48:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:30.772 11:48:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:30.772 11:48:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:30.772 11:48:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:30.772 11:48:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:30.772 11:48:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:30.772 11:48:00 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:30.772 11:48:00 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:24:30.772 11:48:00 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:30.772 11:48:00 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:30.772 11:48:00 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:30.772 11:48:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:30.772 11:48:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:30.772 11:48:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:30.772 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:30.772 11:48:00 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:30.772 11:48:00 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:30.772 11:48:00 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:30.772 11:48:00 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:30.772 11:48:00 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:30.772 11:48:00 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:30.772 11:48:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:30.772 11:48:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:30.772 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:30.772 11:48:00 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:30.772 11:48:00 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:30.772 11:48:00 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:30.772 11:48:00 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:30.772 11:48:00 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:30.772 11:48:00 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:30.772 11:48:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:30.772 11:48:00 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:30.772 11:48:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:30.772 11:48:00 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:30.772 11:48:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:30.772 11:48:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:30.772 11:48:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:30.772 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:30.772 11:48:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:30.772 11:48:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:30.772 11:48:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:30.772 11:48:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:30.772 11:48:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:30.772 11:48:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:30.772 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:30.772 11:48:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:30.772 11:48:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:30.772 11:48:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:30.772 11:48:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:30.772 11:48:00 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:30.772 11:48:00 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:30.772 11:48:00 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:30.772 11:48:00 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:30.772 11:48:00 -- nvmf/common.sh@57 -- # uname 00:24:30.772 11:48:00 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:30.772 11:48:00 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:24:30.772 11:48:00 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:30.772 11:48:00 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:30.772 11:48:00 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:30.772 11:48:00 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:30.772 11:48:00 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:30.772 11:48:00 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:30.772 11:48:00 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:30.772 11:48:00 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:30.772 11:48:00 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:30.772 11:48:00 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:30.772 11:48:00 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:30.772 11:48:00 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:30.772 11:48:00 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:30.772 11:48:00 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:30.772 11:48:00 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:30.772 11:48:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:30.772 11:48:00 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:30.772 11:48:00 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:30.772 11:48:00 -- nvmf/common.sh@104 -- # continue 2 00:24:30.772 11:48:00 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:30.772 11:48:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:30.772 11:48:00 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:30.772 11:48:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:30.772 11:48:00 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:30.772 11:48:00 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:30.772 11:48:00 -- 
nvmf/common.sh@104 -- # continue 2 00:24:30.772 11:48:00 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:30.772 11:48:00 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:30.772 11:48:00 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:30.772 11:48:00 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:30.772 11:48:00 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:30.772 11:48:00 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:30.772 11:48:00 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:30.772 11:48:00 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:30.772 11:48:00 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:30.772 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:30.773 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:30.773 altname enp217s0f0np0 00:24:30.773 altname ens818f0np0 00:24:30.773 inet 192.168.100.8/24 scope global mlx_0_0 00:24:30.773 valid_lft forever preferred_lft forever 00:24:30.773 11:48:00 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:30.773 11:48:00 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:30.773 11:48:00 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:30.773 11:48:00 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:30.773 11:48:00 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:30.773 11:48:00 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:30.773 11:48:00 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:30.773 11:48:00 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:30.773 11:48:00 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:31.029 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:31.029 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:31.029 altname enp217s0f1np1 00:24:31.029 altname ens818f1np1 00:24:31.029 inet 192.168.100.9/24 scope global mlx_0_1 00:24:31.029 valid_lft forever preferred_lft forever 00:24:31.029 11:48:00 -- nvmf/common.sh@410 -- # return 0 00:24:31.029 11:48:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:31.029 11:48:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:31.029 11:48:00 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:31.029 11:48:00 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:31.029 11:48:00 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:31.029 11:48:00 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:31.029 11:48:00 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:31.029 11:48:00 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:31.029 11:48:00 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:31.029 11:48:00 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:31.029 11:48:00 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:31.029 11:48:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:31.029 11:48:00 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:31.030 11:48:00 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:31.030 11:48:00 -- nvmf/common.sh@104 -- # continue 2 00:24:31.030 11:48:00 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:31.030 11:48:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:31.030 11:48:00 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:31.030 11:48:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:31.030 11:48:00 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
00:24:31.030 11:48:00 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:31.030 11:48:00 -- nvmf/common.sh@104 -- # continue 2 00:24:31.030 11:48:00 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:31.030 11:48:00 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:31.030 11:48:00 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:31.030 11:48:00 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:31.030 11:48:00 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:31.030 11:48:00 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:31.030 11:48:00 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:31.030 11:48:00 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:31.030 11:48:00 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:31.030 11:48:00 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:31.030 11:48:00 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:31.030 11:48:00 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:31.030 11:48:00 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:31.030 192.168.100.9' 00:24:31.030 11:48:00 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:31.030 192.168.100.9' 00:24:31.030 11:48:00 -- nvmf/common.sh@445 -- # head -n 1 00:24:31.030 11:48:00 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:31.030 11:48:00 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:31.030 192.168.100.9' 00:24:31.030 11:48:00 -- nvmf/common.sh@446 -- # tail -n +2 00:24:31.030 11:48:00 -- nvmf/common.sh@446 -- # head -n 1 00:24:31.030 11:48:00 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:31.030 11:48:00 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:31.030 11:48:00 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:31.030 11:48:00 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:31.030 11:48:00 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:31.030 11:48:00 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:31.030 11:48:00 -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:24:31.030 11:48:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:31.030 11:48:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:31.030 11:48:00 -- common/autotest_common.sh@10 -- # set +x 00:24:31.030 11:48:00 -- nvmf/common.sh@469 -- # nvmfpid=2457338 00:24:31.030 11:48:00 -- nvmf/common.sh@470 -- # waitforlisten 2457338 00:24:31.030 11:48:00 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:31.030 11:48:00 -- common/autotest_common.sh@819 -- # '[' -z 2457338 ']' 00:24:31.030 11:48:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.030 11:48:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:31.030 11:48:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.030 11:48:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:31.030 11:48:00 -- common/autotest_common.sh@10 -- # set +x 00:24:31.030 [2024-07-21 11:48:00.361697] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:24:31.030 [2024-07-21 11:48:00.361750] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:31.030 EAL: No free 2048 kB hugepages reported on node 1 00:24:31.030 [2024-07-21 11:48:00.448886] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:31.287 [2024-07-21 11:48:00.487974] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:31.287 [2024-07-21 11:48:00.488084] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:31.287 [2024-07-21 11:48:00.488096] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:31.287 [2024-07-21 11:48:00.488105] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:31.287 [2024-07-21 11:48:00.488156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:31.287 [2024-07-21 11:48:00.488159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.850 11:48:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:31.850 11:48:01 -- common/autotest_common.sh@852 -- # return 0 00:24:31.850 11:48:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:31.850 11:48:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:31.851 11:48:01 -- common/autotest_common.sh@10 -- # set +x 00:24:31.851 11:48:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:31.851 11:48:01 -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:31.851 11:48:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:31.851 11:48:01 -- common/autotest_common.sh@10 -- # set +x 00:24:31.851 [2024-07-21 11:48:01.231896] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e09e80/0x1e0e370) succeed. 00:24:31.851 [2024-07-21 11:48:01.240725] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e0b380/0x1e4fa00) succeed. 
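[editor's note] The target bring-up traced above reduces to a short sequence; a condensed sketch reconstructed from this log (nvmfappstart and waitforlisten are autotest helpers, and the rpc.py spelling of the rpc_cmd call is an assumption):

    # launch the SPDK NVMe-oF target on cores 0-1 (-m 0x3) with all trace groups enabled
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!                      # 2457338 in this run
    waitforlisten "$nvmfpid"        # blocks until /var/tmp/spdk.sock accepts RPCs
    # register the RDMA transport before any subsystem is created
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024

The two "Create IB device mlx5_*" notices just above are the transport claiming both ConnectX ports found during PCI enumeration.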
00:24:32.108 11:48:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:32.108 11:48:01 -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:24:32.108 11:48:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:32.108 11:48:01 -- common/autotest_common.sh@10 -- # set +x 00:24:32.108 Malloc0 00:24:32.109 11:48:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:32.109 11:48:01 -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:32.109 11:48:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:32.109 11:48:01 -- common/autotest_common.sh@10 -- # set +x 00:24:32.109 11:48:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:32.109 11:48:01 -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:24:32.109 11:48:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:32.109 11:48:01 -- common/autotest_common.sh@10 -- # set +x 00:24:32.109 11:48:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:32.109 11:48:01 -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:24:32.109 11:48:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:32.109 11:48:01 -- common/autotest_common.sh@10 -- # set +x 00:24:32.109 [2024-07-21 11:48:01.402715] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:32.109 11:48:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:32.109 11:48:01 -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate -r /var/tmp/dma.sock 00:24:32.109 11:48:01 -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:24:32.109 11:48:01 -- nvmf/common.sh@520 -- # config=() 00:24:32.109 11:48:01 -- nvmf/common.sh@520 -- # local subsystem config 00:24:32.109 11:48:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:32.109 11:48:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:32.109 { 00:24:32.109 "params": { 00:24:32.109 "name": "Nvme$subsystem", 00:24:32.109 "trtype": "$TEST_TRANSPORT", 00:24:32.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:32.109 "adrfam": "ipv4", 00:24:32.109 "trsvcid": "$NVMF_PORT", 00:24:32.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:32.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:32.109 "hdgst": ${hdgst:-false}, 00:24:32.109 "ddgst": ${ddgst:-false} 00:24:32.109 }, 00:24:32.109 "method": "bdev_nvme_attach_controller" 00:24:32.109 } 00:24:32.109 EOF 00:24:32.109 )") 00:24:32.109 11:48:01 -- nvmf/common.sh@542 -- # cat 00:24:32.109 11:48:01 -- nvmf/common.sh@544 -- # jq . 00:24:32.109 11:48:01 -- nvmf/common.sh@545 -- # IFS=, 00:24:32.109 11:48:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:32.109 "params": { 00:24:32.109 "name": "Nvme0", 00:24:32.109 "trtype": "rdma", 00:24:32.109 "traddr": "192.168.100.8", 00:24:32.109 "adrfam": "ipv4", 00:24:32.109 "trsvcid": "4420", 00:24:32.109 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:32.109 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:32.109 "hdgst": false, 00:24:32.109 "ddgst": false 00:24:32.109 }, 00:24:32.109 "method": "bdev_nvme_attach_controller" 00:24:32.109 }' 00:24:32.109 [2024-07-21 11:48:01.449406] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:24:32.109 [2024-07-21 11:48:01.449454] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2457694 ] 00:24:32.109 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.366 [2024-07-21 11:48:01.531182] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:32.366 [2024-07-21 11:48:01.568580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:32.366 [2024-07-21 11:48:01.568583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:37.622 bdev Nvme0n1 reports 1 memory domains 00:24:37.622 bdev Nvme0n1 supports RDMA memory domain 00:24:37.622 Initialization complete, running randrw IO for 5 sec on 2 cores 00:24:37.622 ========================================================================== 00:24:37.622 Latency [us] 00:24:37.622 IOPS MiB/s Average min max 00:24:37.622 Core 2: 22110.21 86.37 722.96 239.51 8978.72 00:24:37.622 Core 3: 22347.99 87.30 715.22 237.06 8707.78 00:24:37.622 ========================================================================== 00:24:37.622 Total : 44458.20 173.66 719.07 237.06 8978.72 00:24:37.622 00:24:37.622 Total operations: 222313, translate 222313 pull_push 0 memzero 0 00:24:37.622 11:48:06 -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push -r /var/tmp/dma.sock 00:24:37.622 11:48:06 -- host/dma.sh@107 -- # gen_malloc_json 00:24:37.622 11:48:06 -- host/dma.sh@21 -- # jq . 00:24:37.622 [2024-07-21 11:48:06.986292] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:24:37.622 [2024-07-21 11:48:06.986347] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2458959 ] 00:24:37.622 EAL: No free 2048 kB hugepages reported on node 1 00:24:37.880 [2024-07-21 11:48:07.068606] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:37.880 [2024-07-21 11:48:07.105224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:37.880 [2024-07-21 11:48:07.105227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:43.139 bdev Malloc0 reports 1 memory domains 00:24:43.139 bdev Malloc0 doesn't support RDMA memory domain 00:24:43.139 Initialization complete, running randrw IO for 5 sec on 2 cores 00:24:43.139 ========================================================================== 00:24:43.139 Latency [us] 00:24:43.139 IOPS MiB/s Average min max 00:24:43.139 Core 2: 14757.20 57.65 1083.46 453.04 1956.58 00:24:43.139 Core 3: 15094.67 58.96 1059.23 459.63 1847.45 00:24:43.140 ========================================================================== 00:24:43.140 Total : 29851.86 116.61 1071.21 453.04 1956.58 00:24:43.140 00:24:43.140 Total operations: 149316, translate 0 pull_push 597264 memzero 0 00:24:43.140 11:48:12 -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero -r /var/tmp/dma.sock 00:24:43.140 11:48:12 -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:24:43.140 11:48:12 -- host/dma.sh@48 -- # local subsystem=0 00:24:43.140 11:48:12 -- host/dma.sh@50 -- # jq . 00:24:43.140 Ignoring -M option 00:24:43.140 [2024-07-21 11:48:12.439890] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:24:43.140 [2024-07-21 11:48:12.439946] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2460027 ] 00:24:43.140 EAL: No free 2048 kB hugepages reported on node 1 00:24:43.140 [2024-07-21 11:48:12.522799] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:43.140 [2024-07-21 11:48:12.559702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:43.140 [2024-07-21 11:48:12.559705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:43.398 [2024-07-21 11:48:12.765995] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:24:48.699 [2024-07-21 11:48:17.794485] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:24:48.699 bdev 47751bc0-ce4f-4079-a2db-2f6db111ce0d reports 1 memory domains 00:24:48.699 bdev 47751bc0-ce4f-4079-a2db-2f6db111ce0d supports RDMA memory domain 00:24:48.699 Initialization complete, running randread IO for 5 sec on 2 cores 00:24:48.699 ========================================================================== 00:24:48.699 Latency [us] 00:24:48.699 IOPS MiB/s Average min max 00:24:48.699 Core 2: 72814.81 284.43 218.84 88.07 3026.42 00:24:48.699 Core 3: 70332.27 274.74 226.47 70.52 1587.46 00:24:48.699 ========================================================================== 00:24:48.699 Total : 143147.08 559.17 222.59 70.52 3026.42 00:24:48.699 00:24:48.699 Total operations: 715810, translate 0 pull_push 0 memzero 715810 00:24:48.699 11:48:17 -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:24:48.699 EAL: No free 2048 kB hugepages reported on node 1 00:24:48.699 [2024-07-21 11:48:18.097128] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:51.229 Initializing NVMe Controllers 00:24:51.229 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:24:51.229 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:24:51.229 Initialization complete. Launching workers. 00:24:51.229 ======================================================== 00:24:51.229 Latency(us) 00:24:51.229 Device Information : IOPS MiB/s Average min max 00:24:51.229 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2016.00 7.88 7972.16 6092.89 8871.19 00:24:51.229 ======================================================== 00:24:51.229 Total : 2016.00 7.88 7972.16 6092.89 8871.19 00:24:51.229 00:24:51.229 11:48:20 -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate -r /var/tmp/dma.sock 00:24:51.229 11:48:20 -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:24:51.229 11:48:20 -- host/dma.sh@48 -- # local subsystem=0 00:24:51.229 11:48:20 -- host/dma.sh@50 -- # jq . 
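[editor's note] For orientation, the four dma passes in this section drive the same test_dma binary against different bdevs and offload modes; the distinguishing flags, copied from the invocations above, are:

    -b Nvme0n1    -f -x translate   # NVMe-oF bdev with an RDMA memory domain: pure address translation
    -b Malloc0       -x pull_push   # plain malloc bdev, no memory domain: data is pulled/pushed by copy
    -b lvs0/lvol0 -f -x memzero     # lvol over the NVMe-oF bdev, memzero offload (-w randread)
    -b lvs0/lvol0 -f -x translate   # same lvol stack, translate path again

Each pass's closing "Total operations: ..., translate N pull_push N memzero N" line reports which path the IOs actually took, matching the "supports/doesn't support RDMA memory domain" notices.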
00:24:51.229 [2024-07-21 11:48:20.430345] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:24:51.229 [2024-07-21 11:48:20.430394] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2461373 ] 00:24:51.229 EAL: No free 2048 kB hugepages reported on node 1 00:24:51.229 [2024-07-21 11:48:20.511604] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:51.229 [2024-07-21 11:48:20.548682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:51.229 [2024-07-21 11:48:20.548686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:51.487 [2024-07-21 11:48:20.746186] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:24:56.749 [2024-07-21 11:48:25.777392] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:24:56.749 bdev 6df4a83c-fa9e-4c35-b9ed-8971e0849768 reports 1 memory domains 00:24:56.749 bdev 6df4a83c-fa9e-4c35-b9ed-8971e0849768 supports RDMA memory domain 00:24:56.749 Initialization complete, running randrw IO for 5 sec on 2 cores 00:24:56.749 ========================================================================== 00:24:56.749 Latency [us] 00:24:56.749 IOPS MiB/s Average min max 00:24:56.749 Core 2: 19284.87 75.33 828.92 37.89 10980.07 00:24:56.749 Core 3: 19691.94 76.92 811.78 19.13 11173.52 00:24:56.749 ========================================================================== 00:24:56.749 Total : 38976.81 152.25 820.26 19.13 11173.52 00:24:56.749 00:24:56.749 Total operations: 194946, translate 194842 pull_push 0 memzero 104 00:24:56.749 11:48:25 -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:24:56.749 11:48:25 -- host/dma.sh@120 -- # nvmftestfini 00:24:56.749 11:48:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:56.749 11:48:25 -- nvmf/common.sh@116 -- # sync 00:24:56.749 11:48:25 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:56.749 11:48:25 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:56.749 11:48:25 -- nvmf/common.sh@119 -- # set +e 00:24:56.749 11:48:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:56.749 11:48:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:56.749 rmmod nvme_rdma 00:24:56.749 rmmod nvme_fabrics 00:24:56.749 11:48:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:56.749 11:48:26 -- nvmf/common.sh@123 -- # set -e 00:24:56.749 11:48:26 -- nvmf/common.sh@124 -- # return 0 00:24:56.749 11:48:26 -- nvmf/common.sh@477 -- # '[' -n 2457338 ']' 00:24:56.749 11:48:26 -- nvmf/common.sh@478 -- # killprocess 2457338 00:24:56.749 11:48:26 -- common/autotest_common.sh@926 -- # '[' -z 2457338 ']' 00:24:56.749 11:48:26 -- common/autotest_common.sh@930 -- # kill -0 2457338 00:24:56.749 11:48:26 -- common/autotest_common.sh@931 -- # uname 00:24:56.749 11:48:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:56.749 11:48:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2457338 00:24:56.749 11:48:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:56.749 11:48:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:56.749 11:48:26 -- common/autotest_common.sh@944 -- # echo 'killing process with 
pid 2457338' 00:24:56.749 killing process with pid 2457338 00:24:56.749 11:48:26 -- common/autotest_common.sh@945 -- # kill 2457338 00:24:56.749 11:48:26 -- common/autotest_common.sh@950 -- # wait 2457338 00:24:57.007 11:48:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:57.007 11:48:26 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:57.007 00:24:57.007 real 0m34.411s 00:24:57.007 user 1m36.636s 00:24:57.007 sys 0m7.459s 00:24:57.007 11:48:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:57.007 11:48:26 -- common/autotest_common.sh@10 -- # set +x 00:24:57.007 ************************************ 00:24:57.007 END TEST dma 00:24:57.007 ************************************ 00:24:57.007 11:48:26 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:24:57.007 11:48:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:57.007 11:48:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:57.007 11:48:26 -- common/autotest_common.sh@10 -- # set +x 00:24:57.007 ************************************ 00:24:57.007 START TEST nvmf_identify 00:24:57.007 ************************************ 00:24:57.007 11:48:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:24:57.265 * Looking for test storage... 00:24:57.265 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:57.265 11:48:26 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:57.265 11:48:26 -- nvmf/common.sh@7 -- # uname -s 00:24:57.265 11:48:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.265 11:48:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.265 11:48:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:57.265 11:48:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.265 11:48:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:57.265 11:48:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:57.265 11:48:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.265 11:48:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.265 11:48:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.265 11:48:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:57.265 11:48:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:57.265 11:48:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:57.265 11:48:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.265 11:48:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.265 11:48:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:57.265 11:48:26 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:57.265 11:48:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.265 11:48:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.265 11:48:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.265 11:48:26 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.265 11:48:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.265 11:48:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.265 11:48:26 -- paths/export.sh@5 -- # export PATH 00:24:57.265 11:48:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.265 11:48:26 -- nvmf/common.sh@46 -- # : 0 00:24:57.265 11:48:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:57.265 11:48:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:57.265 11:48:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:57.265 11:48:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.265 11:48:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.265 11:48:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:57.265 11:48:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:57.265 11:48:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:57.265 11:48:26 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:57.265 11:48:26 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:57.265 11:48:26 -- host/identify.sh@14 -- # nvmftestinit 00:24:57.265 11:48:26 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:57.265 11:48:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:57.265 11:48:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:57.265 11:48:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:57.265 11:48:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:57.265 11:48:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:24:57.265 11:48:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:57.265 11:48:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.265 11:48:26 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:57.265 11:48:26 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:57.265 11:48:26 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:57.265 11:48:26 -- common/autotest_common.sh@10 -- # set +x 00:25:05.379 11:48:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:05.379 11:48:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:05.379 11:48:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:05.379 11:48:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:05.379 11:48:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:05.379 11:48:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:05.379 11:48:34 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:05.379 11:48:34 -- nvmf/common.sh@294 -- # net_devs=() 00:25:05.379 11:48:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:05.379 11:48:34 -- nvmf/common.sh@295 -- # e810=() 00:25:05.379 11:48:34 -- nvmf/common.sh@295 -- # local -ga e810 00:25:05.379 11:48:34 -- nvmf/common.sh@296 -- # x722=() 00:25:05.379 11:48:34 -- nvmf/common.sh@296 -- # local -ga x722 00:25:05.379 11:48:34 -- nvmf/common.sh@297 -- # mlx=() 00:25:05.379 11:48:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:05.379 11:48:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:05.379 11:48:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:05.379 11:48:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:05.379 11:48:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:05.379 11:48:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:05.379 11:48:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:05.379 11:48:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:05.379 11:48:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:05.379 11:48:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:05.379 11:48:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:05.379 11:48:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:05.379 11:48:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:05.379 11:48:34 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:25:05.379 11:48:34 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:25:05.379 11:48:34 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:25:05.379 11:48:34 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:25:05.379 11:48:34 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:25:05.379 11:48:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:05.379 11:48:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:05.379 11:48:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:25:05.379 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:25:05.379 11:48:34 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:25:05.379 11:48:34 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:25:05.379 11:48:34 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:05.379 11:48:34 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:05.379 11:48:34 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 
00:25:05.379 11:48:34 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:25:05.379 11:48:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:05.379 11:48:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:25:05.379 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:25:05.379 11:48:34 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:25:05.379 11:48:34 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:25:05.379 11:48:34 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:05.379 11:48:34 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:05.379 11:48:34 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:25:05.379 11:48:34 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:25:05.379 11:48:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:05.379 11:48:34 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:25:05.379 11:48:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:05.379 11:48:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.379 11:48:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:05.379 11:48:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.379 11:48:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:25:05.379 Found net devices under 0000:d9:00.0: mlx_0_0 00:25:05.379 11:48:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.379 11:48:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:05.379 11:48:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.379 11:48:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:05.379 11:48:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.379 11:48:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:25:05.379 Found net devices under 0000:d9:00.1: mlx_0_1 00:25:05.379 11:48:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.379 11:48:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:05.379 11:48:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:05.379 11:48:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:05.379 11:48:34 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:25:05.379 11:48:34 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:25:05.379 11:48:34 -- nvmf/common.sh@408 -- # rdma_device_init 00:25:05.379 11:48:34 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:25:05.379 11:48:34 -- nvmf/common.sh@57 -- # uname 00:25:05.379 11:48:34 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:25:05.379 11:48:34 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:25:05.379 11:48:34 -- nvmf/common.sh@62 -- # modprobe ib_core 00:25:05.379 11:48:34 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:25:05.379 11:48:34 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:25:05.379 11:48:34 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:25:05.379 11:48:34 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:25:05.379 11:48:34 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:25:05.379 11:48:34 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:25:05.379 11:48:34 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:05.379 11:48:34 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:25:05.379 11:48:34 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:05.379 11:48:34 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:25:05.379 11:48:34 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:25:05.379 11:48:34 -- nvmf/common.sh@53 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:05.379 11:48:34 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:25:05.379 11:48:34 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:05.379 11:48:34 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:05.379 11:48:34 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:05.379 11:48:34 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:25:05.380 11:48:34 -- nvmf/common.sh@104 -- # continue 2 00:25:05.380 11:48:34 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:05.380 11:48:34 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:05.380 11:48:34 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:05.380 11:48:34 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:05.380 11:48:34 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:05.380 11:48:34 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:25:05.380 11:48:34 -- nvmf/common.sh@104 -- # continue 2 00:25:05.380 11:48:34 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:25:05.380 11:48:34 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:25:05.380 11:48:34 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:25:05.380 11:48:34 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:25:05.380 11:48:34 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:05.380 11:48:34 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:05.380 11:48:34 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:25:05.380 11:48:34 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:25:05.380 11:48:34 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:25:05.380 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:05.380 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:25:05.380 altname enp217s0f0np0 00:25:05.380 altname ens818f0np0 00:25:05.380 inet 192.168.100.8/24 scope global mlx_0_0 00:25:05.380 valid_lft forever preferred_lft forever 00:25:05.380 11:48:34 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:25:05.380 11:48:34 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:25:05.380 11:48:34 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:25:05.380 11:48:34 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:25:05.380 11:48:34 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:05.380 11:48:34 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:05.380 11:48:34 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:25:05.380 11:48:34 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:25:05.380 11:48:34 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:25:05.380 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:05.380 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:25:05.380 altname enp217s0f1np1 00:25:05.380 altname ens818f1np1 00:25:05.380 inet 192.168.100.9/24 scope global mlx_0_1 00:25:05.380 valid_lft forever preferred_lft forever 00:25:05.380 11:48:34 -- nvmf/common.sh@410 -- # return 0 00:25:05.380 11:48:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:05.380 11:48:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:05.380 11:48:34 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:25:05.380 11:48:34 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:25:05.380 11:48:34 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:25:05.380 11:48:34 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:05.380 11:48:34 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:25:05.380 11:48:34 -- 
nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:25:05.380 11:48:34 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:05.380 11:48:34 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:25:05.380 11:48:34 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:05.380 11:48:34 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:05.380 11:48:34 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:05.380 11:48:34 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:25:05.380 11:48:34 -- nvmf/common.sh@104 -- # continue 2 00:25:05.380 11:48:34 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:05.380 11:48:34 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:05.380 11:48:34 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:05.380 11:48:34 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:05.380 11:48:34 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:05.380 11:48:34 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:25:05.380 11:48:34 -- nvmf/common.sh@104 -- # continue 2 00:25:05.380 11:48:34 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:25:05.380 11:48:34 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:25:05.380 11:48:34 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:25:05.380 11:48:34 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:25:05.380 11:48:34 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:05.380 11:48:34 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:05.380 11:48:34 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:25:05.380 11:48:34 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:25:05.380 11:48:34 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:25:05.380 11:48:34 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:25:05.380 11:48:34 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:05.380 11:48:34 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:05.380 11:48:34 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:25:05.380 192.168.100.9' 00:25:05.380 11:48:34 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:25:05.380 192.168.100.9' 00:25:05.380 11:48:34 -- nvmf/common.sh@445 -- # head -n 1 00:25:05.380 11:48:34 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:05.380 11:48:34 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:25:05.380 192.168.100.9' 00:25:05.637 11:48:34 -- nvmf/common.sh@446 -- # tail -n +2 00:25:05.637 11:48:34 -- nvmf/common.sh@446 -- # head -n 1 00:25:05.637 11:48:34 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:05.637 11:48:34 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:25:05.637 11:48:34 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:05.637 11:48:34 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:25:05.637 11:48:34 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:25:05.637 11:48:34 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:25:05.637 11:48:34 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:05.637 11:48:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:05.637 11:48:34 -- common/autotest_common.sh@10 -- # set +x 00:25:05.637 11:48:34 -- host/identify.sh@19 -- # nvmfpid=2466350 00:25:05.637 11:48:34 -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:05.637 11:48:34 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:25:05.637 11:48:34 -- host/identify.sh@23 -- # waitforlisten 2466350 00:25:05.637 11:48:34 -- common/autotest_common.sh@819 -- # '[' -z 2466350 ']' 00:25:05.637 11:48:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.637 11:48:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:05.637 11:48:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:05.637 11:48:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:05.637 11:48:34 -- common/autotest_common.sh@10 -- # set +x 00:25:05.637 [2024-07-21 11:48:34.886874] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:05.637 [2024-07-21 11:48:34.886925] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:05.637 EAL: No free 2048 kB hugepages reported on node 1 00:25:05.637 [2024-07-21 11:48:34.973393] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:05.637 [2024-07-21 11:48:35.011072] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:05.637 [2024-07-21 11:48:35.011181] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:05.637 [2024-07-21 11:48:35.011190] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:05.637 [2024-07-21 11:48:35.011198] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:05.637 [2024-07-21 11:48:35.011289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:05.637 [2024-07-21 11:48:35.011387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:05.637 [2024-07-21 11:48:35.011475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:05.637 [2024-07-21 11:48:35.011477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.567 11:48:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:06.567 11:48:35 -- common/autotest_common.sh@852 -- # return 0 00:25:06.567 11:48:35 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:06.567 11:48:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.567 11:48:35 -- common/autotest_common.sh@10 -- # set +x 00:25:06.567 [2024-07-21 11:48:35.715184] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6d44b0/0x6d89a0) succeed. 00:25:06.567 [2024-07-21 11:48:35.725666] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6d5aa0/0x71a030) succeed. 
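[editor's note] The identify test repeats the target bring-up with a wider reactor mask (-m 0xF, four cores) and an extra transport option; condensed as before (rpc.py spelling assumed, and -u taken to be rpc.py's usual io-unit-size option):

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    waitforlisten "$!"
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192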
00:25:06.567 11:48:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.567 11:48:35 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:06.567 11:48:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:06.567 11:48:35 -- common/autotest_common.sh@10 -- # set +x 00:25:06.567 11:48:35 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:06.567 11:48:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.567 11:48:35 -- common/autotest_common.sh@10 -- # set +x 00:25:06.567 Malloc0 00:25:06.567 11:48:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.567 11:48:35 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:06.567 11:48:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.567 11:48:35 -- common/autotest_common.sh@10 -- # set +x 00:25:06.567 11:48:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.567 11:48:35 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:06.567 11:48:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.567 11:48:35 -- common/autotest_common.sh@10 -- # set +x 00:25:06.567 11:48:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.567 11:48:35 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:06.567 11:48:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.567 11:48:35 -- common/autotest_common.sh@10 -- # set +x 00:25:06.567 [2024-07-21 11:48:35.936544] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:06.567 11:48:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.567 11:48:35 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:25:06.567 11:48:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.567 11:48:35 -- common/autotest_common.sh@10 -- # set +x 00:25:06.567 11:48:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.567 11:48:35 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:06.567 11:48:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.567 11:48:35 -- common/autotest_common.sh@10 -- # set +x 00:25:06.567 [2024-07-21 11:48:35.952264] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:06.567 [ 00:25:06.567 { 00:25:06.567 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:06.567 "subtype": "Discovery", 00:25:06.567 "listen_addresses": [ 00:25:06.567 { 00:25:06.567 "transport": "RDMA", 00:25:06.567 "trtype": "RDMA", 00:25:06.567 "adrfam": "IPv4", 00:25:06.567 "traddr": "192.168.100.8", 00:25:06.567 "trsvcid": "4420" 00:25:06.567 } 00:25:06.567 ], 00:25:06.567 "allow_any_host": true, 00:25:06.567 "hosts": [] 00:25:06.567 }, 00:25:06.567 { 00:25:06.567 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:06.567 "subtype": "NVMe", 00:25:06.567 "listen_addresses": [ 00:25:06.567 { 00:25:06.567 "transport": "RDMA", 00:25:06.567 "trtype": "RDMA", 00:25:06.567 "adrfam": "IPv4", 00:25:06.567 "traddr": "192.168.100.8", 00:25:06.567 "trsvcid": "4420" 00:25:06.567 } 00:25:06.567 ], 00:25:06.567 "allow_any_host": true, 00:25:06.567 "hosts": [], 00:25:06.567 "serial_number": "SPDK00000000000001", 
00:25:06.567 "model_number": "SPDK bdev Controller", 00:25:06.567 "max_namespaces": 32, 00:25:06.567 "min_cntlid": 1, 00:25:06.567 "max_cntlid": 65519, 00:25:06.567 "namespaces": [ 00:25:06.567 { 00:25:06.567 "nsid": 1, 00:25:06.567 "bdev_name": "Malloc0", 00:25:06.567 "name": "Malloc0", 00:25:06.567 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:06.567 "eui64": "ABCDEF0123456789", 00:25:06.567 "uuid": "c3e612a3-62ca-4aa1-ae51-95e23a451650" 00:25:06.567 } 00:25:06.567 ] 00:25:06.567 } 00:25:06.567 ] 00:25:06.567 11:48:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.567 11:48:35 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:06.852 [2024-07-21 11:48:35.995859] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:06.852 [2024-07-21 11:48:35.995914] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2466433 ] 00:25:06.852 EAL: No free 2048 kB hugepages reported on node 1 00:25:06.852 [2024-07-21 11:48:36.044883] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:25:06.852 [2024-07-21 11:48:36.044953] nvme_rdma.c:2257:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:25:06.852 [2024-07-21 11:48:36.044970] nvme_rdma.c:1287:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:25:06.852 [2024-07-21 11:48:36.044975] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:25:06.852 [2024-07-21 11:48:36.045005] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:25:06.852 [2024-07-21 11:48:36.057158] nvme_rdma.c: 506:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
00:25:06.852 [2024-07-21 11:48:36.067228] nvme_rdma.c:1176:nvme_rdma_connect_established: *DEBUG*: rc =0 00:25:06.852 [2024-07-21 11:48:36.067238] nvme_rdma.c:1181:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:25:06.852 [2024-07-21 11:48:36.067245] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183d00 00:25:06.852 [2024-07-21 11:48:36.067252] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183d00 00:25:06.852 [2024-07-21 11:48:36.067259] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183d00 00:25:06.852 [2024-07-21 11:48:36.067265] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183d00 00:25:06.852 [2024-07-21 11:48:36.067272] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183d00 00:25:06.852 [2024-07-21 11:48:36.067278] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183d00 00:25:06.852 [2024-07-21 11:48:36.067287] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183d00 00:25:06.852 [2024-07-21 11:48:36.067294] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183d00 00:25:06.852 [2024-07-21 11:48:36.067300] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183d00 00:25:06.852 [2024-07-21 11:48:36.067306] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183d00 00:25:06.852 [2024-07-21 11:48:36.067313] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183d00 00:25:06.852 [2024-07-21 11:48:36.067319] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183d00 00:25:06.852 [2024-07-21 11:48:36.067325] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183d00 00:25:06.852 [2024-07-21 11:48:36.067332] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183d00 00:25:06.852 [2024-07-21 11:48:36.067338] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183d00 00:25:06.852 [2024-07-21 11:48:36.067344] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183d00 00:25:06.852 [2024-07-21 11:48:36.067351] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183d00 00:25:06.852 [2024-07-21 11:48:36.067357] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183d00 00:25:06.852 [2024-07-21 11:48:36.067363] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183d00 00:25:06.852 [2024-07-21 11:48:36.067370] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183d00 00:25:06.852 [2024-07-21 11:48:36.067376] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183d00 00:25:06.852 [2024-07-21 11:48:36.067382] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183d00 00:25:06.852 [2024-07-21 11:48:36.067389] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183d00 00:25:06.852 [2024-07-21 
11:48:36.067395] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183d00 00:25:06.852 [2024-07-21 11:48:36.067401] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183d00 00:25:06.852 [2024-07-21 11:48:36.067408] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183d00 00:25:06.852 [2024-07-21 11:48:36.067414] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183d00 00:25:06.852 [2024-07-21 11:48:36.067420] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183d00 00:25:06.852 [2024-07-21 11:48:36.067427] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183d00 00:25:06.852 [2024-07-21 11:48:36.067433] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183d00 00:25:06.852 [2024-07-21 11:48:36.067439] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183d00 00:25:06.852 [2024-07-21 11:48:36.067445] nvme_rdma.c:1195:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:25:06.852 [2024-07-21 11:48:36.067451] nvme_rdma.c:1198:nvme_rdma_connect_established: *DEBUG*: rc =0 00:25:06.852 [2024-07-21 11:48:36.067455] nvme_rdma.c:1203:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:25:06.852 [2024-07-21 11:48:36.067473] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:25:06.852 [2024-07-21 11:48:36.067486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf240 len:0x400 key:0x183d00 00:25:06.852 [2024-07-21 11:48:36.072633] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.852 [2024-07-21 11:48:36.072642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:06.852 [2024-07-21 11:48:36.072652] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183d00 00:25:06.852 [2024-07-21 11:48:36.072659] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:06.852 [2024-07-21 11:48:36.072667] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:25:06.852 [2024-07-21 11:48:36.072673] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:25:06.852 [2024-07-21 11:48:36.072685] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:25:06.852 [2024-07-21 11:48:36.072694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.852 [2024-07-21 11:48:36.072711] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.852 [2024-07-21 11:48:36.072717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:25:06.852 [2024-07-21 11:48:36.072723] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:25:06.852 [2024-07-21 11:48:36.072730] 
nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183d00 00:25:06.852 [2024-07-21 11:48:36.072736] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:25:06.852 [2024-07-21 11:48:36.072744] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:25:06.852 [2024-07-21 11:48:36.072752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.852 [2024-07-21 11:48:36.072769] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.852 [2024-07-21 11:48:36.072775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:25:06.852 [2024-07-21 11:48:36.072782] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:25:06.852 [2024-07-21 11:48:36.072788] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183d00 00:25:06.852 [2024-07-21 11:48:36.072795] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:25:06.852 [2024-07-21 11:48:36.072803] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:25:06.852 [2024-07-21 11:48:36.072810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.852 [2024-07-21 11:48:36.072830] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.852 [2024-07-21 11:48:36.072836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:06.853 [2024-07-21 11:48:36.072842] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:06.853 [2024-07-21 11:48:36.072848] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183d00 00:25:06.853 [2024-07-21 11:48:36.072857] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:25:06.853 [2024-07-21 11:48:36.072865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.853 [2024-07-21 11:48:36.072882] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.853 [2024-07-21 11:48:36.072888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:06.853 [2024-07-21 11:48:36.072896] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:25:06.853 [2024-07-21 11:48:36.072902] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:25:06.853 [2024-07-21 11:48:36.072908] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183d00 00:25:06.853 [2024-07-21 11:48:36.072915] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:06.853 [2024-07-21 11:48:36.073022] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:25:06.853 [2024-07-21 11:48:36.073028] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:06.853 [2024-07-21 11:48:36.073037] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:25:06.853 [2024-07-21 11:48:36.073045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.853 [2024-07-21 11:48:36.073065] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.853 [2024-07-21 11:48:36.073070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:06.853 [2024-07-21 11:48:36.073077] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:06.853 [2024-07-21 11:48:36.073083] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183d00 00:25:06.853 [2024-07-21 11:48:36.073091] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:25:06.853 [2024-07-21 11:48:36.073099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.853 [2024-07-21 11:48:36.073119] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.853 [2024-07-21 11:48:36.073124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:25:06.853 [2024-07-21 11:48:36.073130] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:06.853 [2024-07-21 11:48:36.073136] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:25:06.853 [2024-07-21 11:48:36.073143] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183d00 00:25:06.853 [2024-07-21 11:48:36.073149] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:25:06.853 [2024-07-21 11:48:36.073158] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:25:06.853 [2024-07-21 11:48:36.073167] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:25:06.853 [2024-07-21 11:48:36.073175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183d00 00:25:06.853 [2024-07-21 11:48:36.073210] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.853 [2024-07-21 11:48:36.073216] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:06.853 [2024-07-21 11:48:36.073225] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:25:06.853 [2024-07-21 11:48:36.073231] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:25:06.853 [2024-07-21 11:48:36.073238] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:25:06.853 [2024-07-21 11:48:36.073245] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:25:06.853 [2024-07-21 11:48:36.073251] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:25:06.853 [2024-07-21 11:48:36.073257] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:25:06.853 [2024-07-21 11:48:36.073263] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183d00 00:25:06.853 [2024-07-21 11:48:36.073273] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:25:06.853 [2024-07-21 11:48:36.073280] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:25:06.853 [2024-07-21 11:48:36.073288] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.853 [2024-07-21 11:48:36.073315] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.853 [2024-07-21 11:48:36.073321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:06.853 [2024-07-21 11:48:36.073330] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0540 length 0x40 lkey 0x183d00 00:25:06.853 [2024-07-21 11:48:36.073337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.853 [2024-07-21 11:48:36.073344] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0680 length 0x40 lkey 0x183d00 00:25:06.853 [2024-07-21 11:48:36.073351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.853 [2024-07-21 11:48:36.073358] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.853 [2024-07-21 11:48:36.073365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.853 [2024-07-21 11:48:36.073372] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x183d00 00:25:06.853 [2024-07-21 11:48:36.073379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.853 [2024-07-21 11:48:36.073386] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout 
(timeout 30000 ms) 00:25:06.853 [2024-07-21 11:48:36.073392] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183d00 00:25:06.853 [2024-07-21 11:48:36.073402] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:06.853 [2024-07-21 11:48:36.073410] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:25:06.853 [2024-07-21 11:48:36.073417] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.853 [2024-07-21 11:48:36.073433] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.853 [2024-07-21 11:48:36.073439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:25:06.853 [2024-07-21 11:48:36.073445] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:25:06.853 [2024-07-21 11:48:36.073452] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:25:06.853 [2024-07-21 11:48:36.073461] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183d00 00:25:06.853 [2024-07-21 11:48:36.073470] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:25:06.853 [2024-07-21 11:48:36.073478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183d00 00:25:06.853 [2024-07-21 11:48:36.073507] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.853 [2024-07-21 11:48:36.073513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:06.853 [2024-07-21 11:48:36.073521] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183d00 00:25:06.853 [2024-07-21 11:48:36.073530] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:25:06.853 [2024-07-21 11:48:36.073552] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:25:06.853 [2024-07-21 11:48:36.073560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x183d00 00:25:06.853 [2024-07-21 11:48:36.073568] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183d00 00:25:06.853 [2024-07-21 11:48:36.073575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.853 [2024-07-21 11:48:36.073598] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.853 [2024-07-21 11:48:36.073604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:06.853 [2024-07-21 11:48:36.073615] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: 
local addr 0x2000003d0b80 length 0x40 lkey 0x183d00 00:25:06.853 [2024-07-21 11:48:36.073623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x183d00 00:25:06.853 [2024-07-21 11:48:36.073634] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183d00 00:25:06.853 [2024-07-21 11:48:36.073641] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.853 [2024-07-21 11:48:36.073647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:06.853 [2024-07-21 11:48:36.073653] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183d00 00:25:06.853 [2024-07-21 11:48:36.073659] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.853 [2024-07-21 11:48:36.073665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:06.853 [2024-07-21 11:48:36.073674] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183d00 00:25:06.853 [2024-07-21 11:48:36.073682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x183d00 00:25:06.853 [2024-07-21 11:48:36.073688] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183d00 00:25:06.853 [2024-07-21 11:48:36.073704] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.853 [2024-07-21 11:48:36.073710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:06.853 [2024-07-21 11:48:36.073721] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183d00 00:25:06.853 ===================================================== 00:25:06.853 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:06.853 ===================================================== 00:25:06.853 Controller Capabilities/Features 00:25:06.853 ================================ 00:25:06.853 Vendor ID: 0000 00:25:06.853 Subsystem Vendor ID: 0000 00:25:06.853 Serial Number: .................... 00:25:06.853 Model Number: ........................................ 
00:25:06.853 Firmware Version: 24.01.1 00:25:06.853 Recommended Arb Burst: 0 00:25:06.853 IEEE OUI Identifier: 00 00 00 00:25:06.853 Multi-path I/O 00:25:06.853 May have multiple subsystem ports: No 00:25:06.853 May have multiple controllers: No 00:25:06.853 Associated with SR-IOV VF: No 00:25:06.853 Max Data Transfer Size: 131072 00:25:06.853 Max Number of Namespaces: 0 00:25:06.853 Max Number of I/O Queues: 1024 00:25:06.853 NVMe Specification Version (VS): 1.3 00:25:06.853 NVMe Specification Version (Identify): 1.3 00:25:06.853 Maximum Queue Entries: 128 00:25:06.853 Contiguous Queues Required: Yes 00:25:06.853 Arbitration Mechanisms Supported 00:25:06.853 Weighted Round Robin: Not Supported 00:25:06.853 Vendor Specific: Not Supported 00:25:06.853 Reset Timeout: 15000 ms 00:25:06.853 Doorbell Stride: 4 bytes 00:25:06.853 NVM Subsystem Reset: Not Supported 00:25:06.853 Command Sets Supported 00:25:06.853 NVM Command Set: Supported 00:25:06.853 Boot Partition: Not Supported 00:25:06.853 Memory Page Size Minimum: 4096 bytes 00:25:06.853 Memory Page Size Maximum: 4096 bytes 00:25:06.853 Persistent Memory Region: Not Supported 00:25:06.853 Optional Asynchronous Events Supported 00:25:06.853 Namespace Attribute Notices: Not Supported 00:25:06.853 Firmware Activation Notices: Not Supported 00:25:06.853 ANA Change Notices: Not Supported 00:25:06.853 PLE Aggregate Log Change Notices: Not Supported 00:25:06.853 LBA Status Info Alert Notices: Not Supported 00:25:06.853 EGE Aggregate Log Change Notices: Not Supported 00:25:06.853 Normal NVM Subsystem Shutdown event: Not Supported 00:25:06.853 Zone Descriptor Change Notices: Not Supported 00:25:06.853 Discovery Log Change Notices: Supported 00:25:06.853 Controller Attributes 00:25:06.853 128-bit Host Identifier: Not Supported 00:25:06.853 Non-Operational Permissive Mode: Not Supported 00:25:06.853 NVM Sets: Not Supported 00:25:06.853 Read Recovery Levels: Not Supported 00:25:06.853 Endurance Groups: Not Supported 00:25:06.853 Predictable Latency Mode: Not Supported 00:25:06.853 Traffic Based Keep Alive: Not Supported 00:25:06.853 Namespace Granularity: Not Supported 00:25:06.853 SQ Associations: Not Supported 00:25:06.853 UUID List: Not Supported 00:25:06.853 Multi-Domain Subsystem: Not Supported 00:25:06.853 Fixed Capacity Management: Not Supported 00:25:06.853 Variable Capacity Management: Not Supported 00:25:06.853 Delete Endurance Group: Not Supported 00:25:06.853 Delete NVM Set: Not Supported 00:25:06.853 Extended LBA Formats Supported: Not Supported 00:25:06.853 Flexible Data Placement Supported: Not Supported 00:25:06.853 00:25:06.853 Controller Memory Buffer Support 00:25:06.853 ================================ 00:25:06.853 Supported: No 00:25:06.853 00:25:06.853 Persistent Memory Region Support 00:25:06.853 ================================ 00:25:06.853 Supported: No 00:25:06.853 00:25:06.853 Admin Command Set Attributes 00:25:06.853 ============================ 00:25:06.853 Security Send/Receive: Not Supported 00:25:06.853 Format NVM: Not Supported 00:25:06.853 Firmware Activate/Download: Not Supported 00:25:06.853 Namespace Management: Not Supported 00:25:06.853 Device Self-Test: Not Supported 00:25:06.853 Directives: Not Supported 00:25:06.853 NVMe-MI: Not Supported 00:25:06.853 Virtualization Management: Not Supported 00:25:06.853 Doorbell Buffer Config: Not Supported 00:25:06.853 Get LBA Status Capability: Not Supported 00:25:06.853 Command & Feature Lockdown Capability: Not Supported 00:25:06.853 Abort Command Limit: 1 00:25:06.853 
Async Event Request Limit: 4 00:25:06.853 Number of Firmware Slots: N/A 00:25:06.853 Firmware Slot 1 Read-Only: N/A 00:25:06.853 Firmware Activation Without Reset: N/A 00:25:06.853 Multiple Update Detection Support: N/A 00:25:06.853 Firmware Update Granularity: No Information Provided 00:25:06.853 Per-Namespace SMART Log: No 00:25:06.853 Asymmetric Namespace Access Log Page: Not Supported 00:25:06.853 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:06.853 Command Effects Log Page: Not Supported 00:25:06.853 Get Log Page Extended Data: Supported 00:25:06.853 Telemetry Log Pages: Not Supported 00:25:06.853 Persistent Event Log Pages: Not Supported 00:25:06.853 Supported Log Pages Log Page: May Support 00:25:06.853 Commands Supported & Effects Log Page: Not Supported 00:25:06.853 Feature Identifiers & Effects Log Page: May Support 00:25:06.853 NVMe-MI Commands & Effects Log Page: May Support 00:25:06.853 Data Area 4 for Telemetry Log: Not Supported 00:25:06.853 Error Log Page Entries Supported: 128 00:25:06.853 Keep Alive: Not Supported 00:25:06.853 00:25:06.853 NVM Command Set Attributes 00:25:06.853 ========================== 00:25:06.853 Submission Queue Entry Size 00:25:06.853 Max: 1 00:25:06.853 Min: 1 00:25:06.853 Completion Queue Entry Size 00:25:06.853 Max: 1 00:25:06.853 Min: 1 00:25:06.853 Number of Namespaces: 0 00:25:06.853 Compare Command: Not Supported 00:25:06.853 Write Uncorrectable Command: Not Supported 00:25:06.853 Dataset Management Command: Not Supported 00:25:06.853 Write Zeroes Command: Not Supported 00:25:06.853 Set Features Save Field: Not Supported 00:25:06.853 Reservations: Not Supported 00:25:06.853 Timestamp: Not Supported 00:25:06.853 Copy: Not Supported 00:25:06.853 Volatile Write Cache: Not Present 00:25:06.853 Atomic Write Unit (Normal): 1 00:25:06.853 Atomic Write Unit (PFail): 1 00:25:06.853 Atomic Compare & Write Unit: 1 00:25:06.853 Fused Compare & Write: Supported 00:25:06.853 Scatter-Gather List 00:25:06.853 SGL Command Set: Supported 00:25:06.853 SGL Keyed: Supported 00:25:06.853 SGL Bit Bucket Descriptor: Not Supported 00:25:06.853 SGL Metadata Pointer: Not Supported 00:25:06.854 Oversized SGL: Not Supported 00:25:06.854 SGL Metadata Address: Not Supported 00:25:06.854 SGL Offset: Supported 00:25:06.854 Transport SGL Data Block: Not Supported 00:25:06.854 Replay Protected Memory Block: Not Supported 00:25:06.854 00:25:06.854 Firmware Slot Information 00:25:06.854 ========================= 00:25:06.854 Active slot: 0 00:25:06.854 00:25:06.854 00:25:06.854 Error Log 00:25:06.854 ========= 00:25:06.854 00:25:06.854 Active Namespaces 00:25:06.854 ================= 00:25:06.854 Discovery Log Page 00:25:06.854 ================== 00:25:06.854 Generation Counter: 2 00:25:06.854 Number of Records: 2 00:25:06.854 Record Format: 0 00:25:06.854 00:25:06.854 Discovery Log Entry 0 00:25:06.854 ---------------------- 00:25:06.854 Transport Type: 1 (RDMA) 00:25:06.854 Address Family: 1 (IPv4) 00:25:06.854 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:06.854 Entry Flags: 00:25:06.854 Duplicate Returned Information: 1 00:25:06.854 Explicit Persistent Connection Support for Discovery: 1 00:25:06.854 Transport Requirements: 00:25:06.854 Secure Channel: Not Required 00:25:06.854 Port ID: 0 (0x0000) 00:25:06.854 Controller ID: 65535 (0xffff) 00:25:06.854 Admin Max SQ Size: 128 00:25:06.854 Transport Service Identifier: 4420 00:25:06.854 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:06.854 Transport Address: 192.168.100.8 
00:25:06.854 Transport Specific Address Subtype - RDMA 00:25:06.854 RDMA QP Service Type: 1 (Reliable Connected) 00:25:06.854 RDMA Provider Type: 1 (No provider specified) 00:25:06.854 RDMA CM Service: 1 (RDMA_CM) 00:25:06.854 Discovery Log Entry 1 00:25:06.854 ---------------------- 00:25:06.854 Transport Type: 1 (RDMA) 00:25:06.854 Address Family: 1 (IPv4) 00:25:06.854 Subsystem Type: 2 (NVM Subsystem) 00:25:06.854 Entry Flags: 00:25:06.854 Duplicate Returned Information: 0 00:25:06.854 Explicit Persistent Connection Support for Discovery: 0 00:25:06.854 Transport Requirements: 00:25:06.854 Secure Channel: Not Required 00:25:06.854 Port ID: 0 (0x0000) 00:25:06.854 Controller ID: 65535 (0xffff) 00:25:06.854 Admin Max SQ Size: [2024-07-21 11:48:36.073792] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:25:06.854 [2024-07-21 11:48:36.073803] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 46385 doesn't match qid 00:25:06.854 [2024-07-21 11:48:36.073817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32533 cdw0:5 sqhd:fe28 p:0 m:0 dnr:0 00:25:06.854 [2024-07-21 11:48:36.073824] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 46385 doesn't match qid 00:25:06.854 [2024-07-21 11:48:36.073833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32533 cdw0:5 sqhd:fe28 p:0 m:0 dnr:0 00:25:06.854 [2024-07-21 11:48:36.073839] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 46385 doesn't match qid 00:25:06.854 [2024-07-21 11:48:36.073847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32533 cdw0:5 sqhd:fe28 p:0 m:0 dnr:0 00:25:06.854 [2024-07-21 11:48:36.073854] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 46385 doesn't match qid 00:25:06.854 [2024-07-21 11:48:36.073861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32533 cdw0:5 sqhd:fe28 p:0 m:0 dnr:0 00:25:06.854 [2024-07-21 11:48:36.073870] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.073878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.854 [2024-07-21 11:48:36.073901] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.854 [2024-07-21 11:48:36.073907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:25:06.854 [2024-07-21 11:48:36.073915] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.073923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.854 [2024-07-21 11:48:36.073929] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.073948] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.854 [2024-07-21 11:48:36.073954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:06.854 [2024-07-21 11:48:36.073960] 
nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:25:06.854 [2024-07-21 11:48:36.073966] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:25:06.854 [2024-07-21 11:48:36.073973] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.073981] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.073989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.854 [2024-07-21 11:48:36.074009] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.854 [2024-07-21 11:48:36.074015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:25:06.854 [2024-07-21 11:48:36.074022] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074031] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.854 [2024-07-21 11:48:36.074053] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.854 [2024-07-21 11:48:36.074059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:25:06.854 [2024-07-21 11:48:36.074067] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074076] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.854 [2024-07-21 11:48:36.074102] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.854 [2024-07-21 11:48:36.074108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:25:06.854 [2024-07-21 11:48:36.074115] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074123] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.854 [2024-07-21 11:48:36.074152] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.854 [2024-07-21 11:48:36.074157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:25:06.854 [2024-07-21 11:48:36.074164] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074173] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: 
*DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.854 [2024-07-21 11:48:36.074197] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.854 [2024-07-21 11:48:36.074203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:25:06.854 [2024-07-21 11:48:36.074209] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074218] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.854 [2024-07-21 11:48:36.074244] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.854 [2024-07-21 11:48:36.074250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:25:06.854 [2024-07-21 11:48:36.074257] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074265] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.854 [2024-07-21 11:48:36.074294] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.854 [2024-07-21 11:48:36.074299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:25:06.854 [2024-07-21 11:48:36.074306] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074314] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.854 [2024-07-21 11:48:36.074344] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.854 [2024-07-21 11:48:36.074351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:25:06.854 [2024-07-21 11:48:36.074358] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074367] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.854 [2024-07-21 11:48:36.074395] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.854 [2024-07-21 11:48:36.074401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 
dnr:0 00:25:06.854 [2024-07-21 11:48:36.074407] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074416] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.854 [2024-07-21 11:48:36.074444] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.854 [2024-07-21 11:48:36.074449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:25:06.854 [2024-07-21 11:48:36.074456] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074465] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.854 [2024-07-21 11:48:36.074490] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.854 [2024-07-21 11:48:36.074496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:25:06.854 [2024-07-21 11:48:36.074502] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074511] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.854 [2024-07-21 11:48:36.074537] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.854 [2024-07-21 11:48:36.074543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:25:06.854 [2024-07-21 11:48:36.074549] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074558] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.854 [2024-07-21 11:48:36.074586] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.854 [2024-07-21 11:48:36.074592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:25:06.854 [2024-07-21 11:48:36.074599] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074607] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.854 [2024-07-21 
11:48:36.074635] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.854 [2024-07-21 11:48:36.074642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:25:06.854 [2024-07-21 11:48:36.074649] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074658] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.854 [2024-07-21 11:48:36.074684] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.854 [2024-07-21 11:48:36.074690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:25:06.854 [2024-07-21 11:48:36.074696] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074705] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.854 [2024-07-21 11:48:36.074731] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.854 [2024-07-21 11:48:36.074737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:06.854 [2024-07-21 11:48:36.074743] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074752] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.854 [2024-07-21 11:48:36.074784] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.854 [2024-07-21 11:48:36.074789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:25:06.854 [2024-07-21 11:48:36.074796] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074805] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.854 [2024-07-21 11:48:36.074834] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.854 [2024-07-21 11:48:36.074840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:25:06.854 [2024-07-21 11:48:36.074847] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074855] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 
0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.854 [2024-07-21 11:48:36.074885] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.854 [2024-07-21 11:48:36.074891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:25:06.854 [2024-07-21 11:48:36.074897] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074906] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.854 [2024-07-21 11:48:36.074931] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.854 [2024-07-21 11:48:36.074937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:25:06.854 [2024-07-21 11:48:36.074943] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074952] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.854 [2024-07-21 11:48:36.074974] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.854 [2024-07-21 11:48:36.074980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:25:06.854 [2024-07-21 11:48:36.074986] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.074995] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.075003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.854 [2024-07-21 11:48:36.075021] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.854 [2024-07-21 11:48:36.075027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:25:06.854 [2024-07-21 11:48:36.075033] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.075042] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.854 [2024-07-21 11:48:36.075050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.854 [2024-07-21 11:48:36.075068] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.854 [2024-07-21 11:48:36.075073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:25:06.854 
[2024-07-21 11:48:36.075080] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.075089] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.075096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.855 [2024-07-21 11:48:36.075114] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.855 [2024-07-21 11:48:36.075120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:25:06.855 [2024-07-21 11:48:36.075126] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.075135] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.075143] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.855 [2024-07-21 11:48:36.075159] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.855 [2024-07-21 11:48:36.075165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:25:06.855 [2024-07-21 11:48:36.075171] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.075180] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.075188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.855 [2024-07-21 11:48:36.075208] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.855 [2024-07-21 11:48:36.075213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:25:06.855 [2024-07-21 11:48:36.075220] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.075229] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.075236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.855 [2024-07-21 11:48:36.075252] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.855 [2024-07-21 11:48:36.075258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:25:06.855 [2024-07-21 11:48:36.075265] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.075273] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.075281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.855 [2024-07-21 11:48:36.075305] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.855 [2024-07-21 11:48:36.075310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:25:06.855 [2024-07-21 11:48:36.075317] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.075326] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.075333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.855 [2024-07-21 11:48:36.075355] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.855 [2024-07-21 11:48:36.075361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:25:06.855 [2024-07-21 11:48:36.075367] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.075376] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.075384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.855 [2024-07-21 11:48:36.075402] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.855 [2024-07-21 11:48:36.075407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:25:06.855 [2024-07-21 11:48:36.075414] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.075423] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.075430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.855 [2024-07-21 11:48:36.075445] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.855 [2024-07-21 11:48:36.075450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:25:06.855 [2024-07-21 11:48:36.075457] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.075466] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.075475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.855 [2024-07-21 11:48:36.075493] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.855 [2024-07-21 11:48:36.075499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:25:06.855 [2024-07-21 11:48:36.075505] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.075514] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.075522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.855 [2024-07-21 11:48:36.075540] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.855 [2024-07-21 11:48:36.075546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:25:06.855 [2024-07-21 11:48:36.075553] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.075561] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.075569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.855 [2024-07-21 11:48:36.075584] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.855 [2024-07-21 11:48:36.075589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:25:06.855 [2024-07-21 11:48:36.075596] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.075604] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.075612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.855 [2024-07-21 11:48:36.075631] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.855 [2024-07-21 11:48:36.075638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:25:06.855 [2024-07-21 11:48:36.075645] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.075654] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.075662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.855 [2024-07-21 11:48:36.075678] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.855 [2024-07-21 11:48:36.075683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:25:06.855 [2024-07-21 11:48:36.075690] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.075699] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.075706] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.855 [2024-07-21 11:48:36.075724] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.855 [2024-07-21 11:48:36.075730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:25:06.855 [2024-07-21 
11:48:36.075736] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.075745] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.075755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.855 [2024-07-21 11:48:36.075769] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.855 [2024-07-21 11:48:36.075775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:25:06.855 [2024-07-21 11:48:36.075782] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.075790] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.075798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.855 [2024-07-21 11:48:36.075814] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.855 [2024-07-21 11:48:36.075820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:25:06.855 [2024-07-21 11:48:36.075827] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.075835] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.075843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.855 [2024-07-21 11:48:36.075859] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.855 [2024-07-21 11:48:36.075865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:25:06.855 [2024-07-21 11:48:36.075871] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.075880] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.075888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.855 [2024-07-21 11:48:36.075906] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.855 [2024-07-21 11:48:36.075912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:25:06.855 [2024-07-21 11:48:36.075918] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.075927] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.075935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.855 [2024-07-21 11:48:36.075959] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.855 [2024-07-21 11:48:36.075964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:25:06.855 [2024-07-21 11:48:36.075971] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.075980] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.075987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.855 [2024-07-21 11:48:36.076002] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.855 [2024-07-21 11:48:36.076007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:25:06.855 [2024-07-21 11:48:36.076014] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.076024] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.076032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.855 [2024-07-21 11:48:36.076048] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.855 [2024-07-21 11:48:36.076054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:25:06.855 [2024-07-21 11:48:36.076060] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.076069] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.076077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.855 [2024-07-21 11:48:36.076099] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.855 [2024-07-21 11:48:36.076104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:25:06.855 [2024-07-21 11:48:36.076111] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.076120] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.076127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.855 [2024-07-21 11:48:36.076151] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.855 [2024-07-21 11:48:36.076157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:25:06.855 [2024-07-21 11:48:36.076163] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.076172] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.076180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.855 [2024-07-21 11:48:36.076200] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.855 [2024-07-21 11:48:36.076205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:25:06.855 [2024-07-21 11:48:36.076212] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.076221] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.076228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.855 [2024-07-21 11:48:36.076252] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.855 [2024-07-21 11:48:36.076258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:06.855 [2024-07-21 11:48:36.076264] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.076273] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.076281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.855 [2024-07-21 11:48:36.076305] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.855 [2024-07-21 11:48:36.076311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:25:06.855 [2024-07-21 11:48:36.076317] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.076327] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.076335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.855 [2024-07-21 11:48:36.076355] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.855 [2024-07-21 11:48:36.076361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:25:06.855 [2024-07-21 11:48:36.076368] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.076376] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.076384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.855 [2024-07-21 11:48:36.076400] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.855 [2024-07-21 11:48:36.076406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:25:06.855 [2024-07-21 
11:48:36.076412] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183d00 00:25:06.855 [2024-07-21 11:48:36.076421] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.076429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.856 [2024-07-21 11:48:36.076447] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.856 [2024-07-21 11:48:36.076452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:25:06.856 [2024-07-21 11:48:36.076459] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.076468] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.076475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.856 [2024-07-21 11:48:36.076492] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.856 [2024-07-21 11:48:36.076497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:25:06.856 [2024-07-21 11:48:36.076504] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.076513] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.076520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.856 [2024-07-21 11:48:36.076536] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.856 [2024-07-21 11:48:36.076542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:25:06.856 [2024-07-21 11:48:36.076549] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.076557] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.076565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.856 [2024-07-21 11:48:36.076585] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.856 [2024-07-21 11:48:36.076591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:25:06.856 [2024-07-21 11:48:36.076598] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.076607] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.076615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.856 [2024-07-21 11:48:36.080632] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.856
[2024-07-21 11:48:36.080640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:25:06.856
[2024-07-21 11:48:36.080646] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183d00 00:25:06.856
[2024-07-21 11:48:36.080655] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.856
[2024-07-21 11:48:36.080663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.856
[2024-07-21 11:48:36.080677] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.856
[2024-07-21 11:48:36.080683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000a p:0 m:0 dnr:0 00:25:06.856
[2024-07-21 11:48:36.080690] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183d00 00:25:06.856
[2024-07-21 11:48:36.080697] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:25:06.856
128 00:25:06.856
Transport Service Identifier: 4420 00:25:06.856
NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:25:06.856
Transport Address: 192.168.100.8 00:25:06.856
Transport Specific Address Subtype - RDMA 00:25:06.856
RDMA QP Service Type: 1 (Reliable Connected) 00:25:06.856
RDMA Provider Type: 1 (No provider specified) 00:25:06.856
RDMA CM Service: 1 (RDMA_CM) 00:25:06.856
11:48:36 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:06.856
[2024-07-21 11:48:36.151050] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:06.856
[2024-07-21 11:48:36.151088] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2466501 ] 00:25:06.856
EAL: No free 2048 kB hugepages reported on node 1 00:25:06.856
[2024-07-21 11:48:36.196779] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:25:06.856
[2024-07-21 11:48:36.196846] nvme_rdma.c:2257:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:25:06.856
[2024-07-21 11:48:36.196860] nvme_rdma.c:1287:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:25:06.856
[2024-07-21 11:48:36.196865] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:25:06.856
[2024-07-21 11:48:36.196890] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:25:06.856
[2024-07-21 11:48:36.206025] nvme_rdma.c: 506:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32.
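
For reference, the connect step driven by the spdk_nvme_identify invocation above can be reproduced with SPDK's public host API. This is a minimal sketch, not the tool's actual source; the program name is hypothetical and the transport string is copied from the -r argument in the log:

    /* Minimal sketch: connect to the NVMe-oF target exercised above. */
    #include "spdk/env.h"
    #include "spdk/nvme.h"
    #include <stdio.h>

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";   /* hypothetical app name */
        if (spdk_env_init(&env_opts) != 0) {
            return 1;
        }

        /* Same transport string the test passes via -r above. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:rdma adrfam:IPv4 traddr:192.168.100.8 "
                "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* Synchronously drives the admin-queue state machine traced below:
         * connect adminq, read vs/cap, enable, identify, configure AER, ... */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            fprintf(stderr, "connect to %s failed\n", trid.subnqn);
            return 1;
        }
        printf("connected to %s\n", trid.subnqn);
        spdk_nvme_detach(ctrlr);
        return 0;
    }

spdk_nvme_connect() does not return until the controller reaches the "ready" state, which is why the whole initialization sequence below appears before any identify output.
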
00:25:06.856 [2024-07-21 11:48:36.216088] nvme_rdma.c:1176:nvme_rdma_connect_established: *DEBUG*: rc =0 00:25:06.856 [2024-07-21 11:48:36.216099] nvme_rdma.c:1181:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:25:06.856 [2024-07-21 11:48:36.216106] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.216116] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.216122] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.216128] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.216135] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.216141] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.216147] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.216154] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.216160] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.216166] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.216172] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.216179] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.216185] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.216191] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.216198] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.216204] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.216210] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.216216] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.216223] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.216229] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.216235] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.216241] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.216248] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 
11:48:36.216254] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.216260] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.216267] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.216273] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.216279] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.216285] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.216292] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.216298] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.216304] nvme_rdma.c:1195:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:25:06.856 [2024-07-21 11:48:36.216310] nvme_rdma.c:1198:nvme_rdma_connect_established: *DEBUG*: rc =0 00:25:06.856 [2024-07-21 11:48:36.216315] nvme_rdma.c:1203:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:25:06.856 [2024-07-21 11:48:36.216329] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.216340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf240 len:0x400 key:0x183d00 00:25:06.856 [2024-07-21 11:48:36.221632] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.856 [2024-07-21 11:48:36.221641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:06.856 [2024-07-21 11:48:36.221649] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.221656] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:06.856 [2024-07-21 11:48:36.221662] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:25:06.856 [2024-07-21 11:48:36.221669] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:25:06.856 [2024-07-21 11:48:36.221680] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.221688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.856 [2024-07-21 11:48:36.221707] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.856 [2024-07-21 11:48:36.221713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:25:06.856 [2024-07-21 11:48:36.221719] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:25:06.856 [2024-07-21 11:48:36.221725] nvme_rdma.c:2425:nvme_rdma_request_ready: 
*DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.221732] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:25:06.856 [2024-07-21 11:48:36.221740] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.221748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.856 [2024-07-21 11:48:36.221768] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.856 [2024-07-21 11:48:36.221773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:25:06.856 [2024-07-21 11:48:36.221780] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:25:06.856 [2024-07-21 11:48:36.221786] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.221793] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:25:06.856 [2024-07-21 11:48:36.221801] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.221808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.856 [2024-07-21 11:48:36.221822] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.856 [2024-07-21 11:48:36.221828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:06.856 [2024-07-21 11:48:36.221835] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:06.856 [2024-07-21 11:48:36.221841] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.221851] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.221859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.856 [2024-07-21 11:48:36.221879] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.856 [2024-07-21 11:48:36.221884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:06.856 [2024-07-21 11:48:36.221890] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:25:06.856 [2024-07-21 11:48:36.221896] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:25:06.856 [2024-07-21 11:48:36.221902] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.221909] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by 
writing CC.EN = 1 (timeout 15000 ms) 00:25:06.856 [2024-07-21 11:48:36.222016] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:25:06.856 [2024-07-21 11:48:36.222021] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:06.856 [2024-07-21 11:48:36.222029] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.222037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.856 [2024-07-21 11:48:36.222057] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.856 [2024-07-21 11:48:36.222063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:06.856 [2024-07-21 11:48:36.222069] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:06.856 [2024-07-21 11:48:36.222075] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.222083] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.222091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.856 [2024-07-21 11:48:36.222105] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.856 [2024-07-21 11:48:36.222111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:25:06.856 [2024-07-21 11:48:36.222117] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:06.856 [2024-07-21 11:48:36.222124] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:25:06.856 [2024-07-21 11:48:36.222130] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.222137] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:25:06.856 [2024-07-21 11:48:36.222145] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:25:06.856 [2024-07-21 11:48:36.222154] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.222162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183d00 00:25:06.856 [2024-07-21 11:48:36.222199] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.856 [2024-07-21 11:48:36.222205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:06.856 [2024-07-21 11:48:36.222213] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:25:06.856 [2024-07-21 11:48:36.222219] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:25:06.856 [2024-07-21 11:48:36.222225] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:25:06.856 [2024-07-21 11:48:36.222230] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:25:06.856 [2024-07-21 11:48:36.222236] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:25:06.856 [2024-07-21 11:48:36.222242] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:25:06.856 [2024-07-21 11:48:36.222248] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.222258] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:25:06.856 [2024-07-21 11:48:36.222265] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.222273] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.856 [2024-07-21 11:48:36.222287] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.856 [2024-07-21 11:48:36.222293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:06.856 [2024-07-21 11:48:36.222301] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0540 length 0x40 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.222308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.856 [2024-07-21 11:48:36.222315] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0680 length 0x40 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.222322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.856 [2024-07-21 11:48:36.222329] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.222336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.856 [2024-07-21 11:48:36.222343] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.222350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.856 [2024-07-21 11:48:36.222356] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:06.856 [2024-07-21 11:48:36.222362] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.222372] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:06.856 [2024-07-21 11:48:36.222379] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:25:06.856 [2024-07-21 11:48:36.222387] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.856 [2024-07-21 11:48:36.222408] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.856 [2024-07-21 11:48:36.222414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:25:06.857 [2024-07-21 11:48:36.222420] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:25:06.857 [2024-07-21 11:48:36.222427] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:06.857 [2024-07-21 11:48:36.222433] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183d00 00:25:06.857 [2024-07-21 11:48:36.222440] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:25:06.857 [2024-07-21 11:48:36.222449] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:06.857 [2024-07-21 11:48:36.222456] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:25:06.857 [2024-07-21 11:48:36.222464] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.857 [2024-07-21 11:48:36.222480] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.857 [2024-07-21 11:48:36.222486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:25:06.857 [2024-07-21 11:48:36.222534] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:25:06.857 [2024-07-21 11:48:36.222540] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183d00 00:25:06.857 [2024-07-21 11:48:36.222548] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:06.857 [2024-07-21 11:48:36.222557] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:25:06.857 [2024-07-21 11:48:36.222564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x183d00 00:25:06.857 [2024-07-21 11:48:36.222586] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.857 [2024-07-21 11:48:36.222592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:06.857 [2024-07-21 11:48:36.222605] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:25:06.857 
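
The SET FEATURES NUMBER OF QUEUES completion above (cdw0:7e007e, i.e. 127 queues each way) and the "Namespace 1 was added" step are what back SPDK's public qpair and namespace accessors. A sketch under that assumption; list_namespaces() is a hypothetical helper, not test code:

    #include "spdk/nvme.h"
    #include <inttypes.h>
    #include <stdio.h>

    static void list_namespaces(struct spdk_nvme_ctrlr *ctrlr)
    {
        struct spdk_nvme_io_qpair_opts opts;
        struct spdk_nvme_qpair *qpair;
        uint32_t nsid;

        /* Allocation succeeds only within the negotiated queue count. */
        spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
        qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
        if (qpair == NULL) {
            return;
        }

        /* Walk the active-NS list discovered by IDENTIFY above. */
        for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
             nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
            struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
            printf("ns %" PRIu32 ": %" PRIu32 "-byte sectors, %" PRIu64 " bytes\n",
                   nsid, spdk_nvme_ns_get_sector_size(ns),
                   spdk_nvme_ns_get_size(ns));
        }
        spdk_nvme_ctrlr_free_io_qpair(qpair);
    }
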
[2024-07-21 11:48:36.222615] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:25:06.857 [2024-07-21 11:48:36.222621] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183d00 00:25:06.857 [2024-07-21 11:48:36.222633] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:25:06.857 [2024-07-21 11:48:36.222641] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:25:06.857 [2024-07-21 11:48:36.222649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183d00 00:25:06.857 [2024-07-21 11:48:36.222677] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.857 [2024-07-21 11:48:36.222683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:06.857 [2024-07-21 11:48:36.222695] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:06.857 [2024-07-21 11:48:36.222703] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183d00 00:25:06.857 [2024-07-21 11:48:36.222711] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:06.857 [2024-07-21 11:48:36.222719] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:25:06.857 [2024-07-21 11:48:36.222726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183d00 00:25:06.857 [2024-07-21 11:48:36.222747] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.857 [2024-07-21 11:48:36.222753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:06.857 [2024-07-21 11:48:36.222761] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:06.857 [2024-07-21 11:48:36.222767] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183d00 00:25:06.857 [2024-07-21 11:48:36.222774] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:25:06.857 [2024-07-21 11:48:36.222783] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:25:06.857 [2024-07-21 11:48:36.222790] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:06.857 [2024-07-21 11:48:36.222797] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:25:06.857 [2024-07-21 11:48:36.222803] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - 
Host ID 00:25:06.857 [2024-07-21 11:48:36.222809] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:25:06.857 [2024-07-21 11:48:36.222815] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:25:06.857 [2024-07-21 11:48:36.222830] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:25:06.857 [2024-07-21 11:48:36.222837] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.857 [2024-07-21 11:48:36.222845] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183d00 00:25:06.857 [2024-07-21 11:48:36.222852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.857 [2024-07-21 11:48:36.222862] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.857 [2024-07-21 11:48:36.222868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:06.857 [2024-07-21 11:48:36.222875] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183d00 00:25:06.857 [2024-07-21 11:48:36.222881] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.857 [2024-07-21 11:48:36.222887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:06.857 [2024-07-21 11:48:36.222893] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183d00 00:25:06.857 [2024-07-21 11:48:36.222902] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183d00 00:25:06.857 [2024-07-21 11:48:36.222910] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.857 [2024-07-21 11:48:36.222931] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.857 [2024-07-21 11:48:36.222937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:06.857 [2024-07-21 11:48:36.222944] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183d00 00:25:06.857 [2024-07-21 11:48:36.222953] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183d00 00:25:06.857 [2024-07-21 11:48:36.222960] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.857 [2024-07-21 11:48:36.222986] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.857 [2024-07-21 11:48:36.222992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:06.857 [2024-07-21 11:48:36.222998] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183d00 00:25:06.857 [2024-07-21 11:48:36.223008] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 
lkey 0x183d00 00:25:06.857 [2024-07-21 11:48:36.223015] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.857 [2024-07-21 11:48:36.223032] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.857 [2024-07-21 11:48:36.223038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:25:06.857 [2024-07-21 11:48:36.223044] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183d00 00:25:06.857 [2024-07-21 11:48:36.223055] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183d00 00:25:06.857 [2024-07-21 11:48:36.223063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x183d00 00:25:06.857 [2024-07-21 11:48:36.223071] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:25:06.857 [2024-07-21 11:48:36.223079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x183d00 00:25:06.857 [2024-07-21 11:48:36.223087] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b80 length 0x40 lkey 0x183d00 00:25:06.857 [2024-07-21 11:48:36.223095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x183d00 00:25:06.857 [2024-07-21 11:48:36.223103] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0cc0 length 0x40 lkey 0x183d00 00:25:06.857 [2024-07-21 11:48:36.223111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x183d00 00:25:06.857 [2024-07-21 11:48:36.223120] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.857 [2024-07-21 11:48:36.223125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:06.857 [2024-07-21 11:48:36.223138] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183d00 00:25:06.857 [2024-07-21 11:48:36.223144] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.857 [2024-07-21 11:48:36.223149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:06.857 [2024-07-21 11:48:36.223159] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183d00 00:25:06.857 [2024-07-21 11:48:36.223165] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.857 [2024-07-21 11:48:36.223172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:06.857 [2024-07-21 11:48:36.223179] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183d00 00:25:06.857 [2024-07-21 11:48:36.223185] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.857
[2024-07-21 11:48:36.223191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:06.857
[2024-07-21 11:48:36.223201] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183d00 00:25:06.857
===================================================== 00:25:06.857
NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:06.857
===================================================== 00:25:06.857
Controller Capabilities/Features 00:25:06.857
================================ 00:25:06.857
Vendor ID: 8086 00:25:06.857
Subsystem Vendor ID: 8086 00:25:06.857
Serial Number: SPDK00000000000001 00:25:06.857
Model Number: SPDK bdev Controller 00:25:06.857
Firmware Version: 24.01.1 00:25:06.857
Recommended Arb Burst: 6 00:25:06.857
IEEE OUI Identifier: e4 d2 5c 00:25:06.857
Multi-path I/O 00:25:06.857
May have multiple subsystem ports: Yes 00:25:06.857
May have multiple controllers: Yes 00:25:06.857
Associated with SR-IOV VF: No 00:25:06.857
Max Data Transfer Size: 131072 00:25:06.857
Max Number of Namespaces: 32 00:25:06.857
Max Number of I/O Queues: 127 00:25:06.857
NVMe Specification Version (VS): 1.3 00:25:06.857
NVMe Specification Version (Identify): 1.3 00:25:06.857
Maximum Queue Entries: 128 00:25:06.857
Contiguous Queues Required: Yes 00:25:06.857
Arbitration Mechanisms Supported 00:25:06.857
Weighted Round Robin: Not Supported 00:25:06.857
Vendor Specific: Not Supported 00:25:06.857
Reset Timeout: 15000 ms 00:25:06.857
Doorbell Stride: 4 bytes 00:25:06.857
NVM Subsystem Reset: Not Supported 00:25:06.857
Command Sets Supported 00:25:06.857
NVM Command Set: Supported 00:25:06.857
Boot Partition: Not Supported 00:25:06.857
Memory Page Size Minimum: 4096 bytes 00:25:06.857
Memory Page Size Maximum: 4096 bytes 00:25:06.857
Persistent Memory Region: Not Supported 00:25:06.857
Optional Asynchronous Events Supported 00:25:06.857
Namespace Attribute Notices: Supported 00:25:06.857
Firmware Activation Notices: Not Supported 00:25:06.857
ANA Change Notices: Not Supported 00:25:06.857
PLE Aggregate Log Change Notices: Not Supported 00:25:06.857
LBA Status Info Alert Notices: Not Supported 00:25:06.857
EGE Aggregate Log Change Notices: Not Supported 00:25:06.857
Normal NVM Subsystem Shutdown event: Not Supported 00:25:06.857
Zone Descriptor Change Notices: Not Supported 00:25:06.857
Discovery Log Change Notices: Not Supported 00:25:06.857
Controller Attributes 00:25:06.857
128-bit Host Identifier: Supported 00:25:06.857
Non-Operational Permissive Mode: Not Supported 00:25:06.857
NVM Sets: Not Supported 00:25:06.857
Read Recovery Levels: Not Supported 00:25:06.857
Endurance Groups: Not Supported 00:25:06.857
Predictable Latency Mode: Not Supported 00:25:06.857
Traffic Based Keep Alive: Not Supported 00:25:06.857
Namespace Granularity: Not Supported 00:25:06.857
SQ Associations: Not Supported 00:25:06.857
UUID List: Not Supported 00:25:06.857
Multi-Domain Subsystem: Not Supported 00:25:06.857
Fixed Capacity Management: Not Supported 00:25:06.857
Variable Capacity Management: Not Supported 00:25:06.857
Delete Endurance Group: Not Supported 00:25:06.857
Delete NVM Set: Not Supported 00:25:06.857
Extended LBA Formats Supported: Not Supported 00:25:06.857
Flexible Data Placement Supported: Not Supported 00:25:06.857
00:25:06.857
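
The fields in the report above come from the cached Identify Controller data and the CAP/VS registers, which SPDK exposes after connect. A sketch assuming a connected controller; print_ctrlr_summary() is a hypothetical helper name:

    #include "spdk/nvme.h"
    #include <stdio.h>

    static void print_ctrlr_summary(struct spdk_nvme_ctrlr *ctrlr)
    {
        const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
        union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);

        /* sn/mn are space-padded, not NUL-terminated, hence %.*s. */
        printf("Serial Number: %.*s\n", (int)sizeof(cdata->sn), cdata->sn);
        printf("Model Number: %.*s\n", (int)sizeof(cdata->mn), cdata->mn);
        printf("NVMe Specification Version (VS): %u.%u\n",
               (unsigned)vs.bits.mjr, (unsigned)vs.bits.mnr);
        /* CAP.MQES is zero-based, hence the 128 reported above. */
        printf("Maximum Queue Entries: %u\n", (unsigned)cap.bits.mqes + 1);
        if (cdata->mdts != 0) {
            /* MDTS counts powers of two of the minimum page size
             * (2^(12+MPSMIN) bytes); 4096 << 5 gives the 131072 above. */
            printf("Max Data Transfer Size: %llu\n",
                   (unsigned long long)(1ULL << (12 + cap.bits.mpsmin)) << cdata->mdts);
        }
    }
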
================================ 00:25:06.857 Supported: No 00:25:06.857 00:25:06.857 Persistent Memory Region Support 00:25:06.857 ================================ 00:25:06.857 Supported: No 00:25:06.857 00:25:06.857 Admin Command Set Attributes 00:25:06.857 ============================ 00:25:06.857 Security Send/Receive: Not Supported 00:25:06.857 Format NVM: Not Supported 00:25:06.857 Firmware Activate/Download: Not Supported 00:25:06.857 Namespace Management: Not Supported 00:25:06.857 Device Self-Test: Not Supported 00:25:06.857 Directives: Not Supported 00:25:06.857 NVMe-MI: Not Supported 00:25:06.857 Virtualization Management: Not Supported 00:25:06.857 Doorbell Buffer Config: Not Supported 00:25:06.857 Get LBA Status Capability: Not Supported 00:25:06.857 Command & Feature Lockdown Capability: Not Supported 00:25:06.857 Abort Command Limit: 4 00:25:06.857 Async Event Request Limit: 4 00:25:06.857 Number of Firmware Slots: N/A 00:25:06.857 Firmware Slot 1 Read-Only: N/A 00:25:06.857 Firmware Activation Without Reset: N/A 00:25:06.857 Multiple Update Detection Support: N/A 00:25:06.857 Firmware Update Granularity: No Information Provided 00:25:06.857 Per-Namespace SMART Log: No 00:25:06.857 Asymmetric Namespace Access Log Page: Not Supported 00:25:06.857 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:06.857 Command Effects Log Page: Supported 00:25:06.857 Get Log Page Extended Data: Supported 00:25:06.857 Telemetry Log Pages: Not Supported 00:25:06.857 Persistent Event Log Pages: Not Supported 00:25:06.857 Supported Log Pages Log Page: May Support 00:25:06.857 Commands Supported & Effects Log Page: Not Supported 00:25:06.857 Feature Identifiers & Effects Log Page:May Support 00:25:06.857 NVMe-MI Commands & Effects Log Page: May Support 00:25:06.857 Data Area 4 for Telemetry Log: Not Supported 00:25:06.857 Error Log Page Entries Supported: 128 00:25:06.857 Keep Alive: Supported 00:25:06.857 Keep Alive Granularity: 10000 ms 00:25:06.857 00:25:06.857 NVM Command Set Attributes 00:25:06.857 ========================== 00:25:06.857 Submission Queue Entry Size 00:25:06.857 Max: 64 00:25:06.857 Min: 64 00:25:06.858 Completion Queue Entry Size 00:25:06.858 Max: 16 00:25:06.858 Min: 16 00:25:06.858 Number of Namespaces: 32 00:25:06.858 Compare Command: Supported 00:25:06.858 Write Uncorrectable Command: Not Supported 00:25:06.858 Dataset Management Command: Supported 00:25:06.858 Write Zeroes Command: Supported 00:25:06.858 Set Features Save Field: Not Supported 00:25:06.858 Reservations: Supported 00:25:06.858 Timestamp: Not Supported 00:25:06.858 Copy: Supported 00:25:06.858 Volatile Write Cache: Present 00:25:06.858 Atomic Write Unit (Normal): 1 00:25:06.858 Atomic Write Unit (PFail): 1 00:25:06.858 Atomic Compare & Write Unit: 1 00:25:06.858 Fused Compare & Write: Supported 00:25:06.858 Scatter-Gather List 00:25:06.858 SGL Command Set: Supported 00:25:06.858 SGL Keyed: Supported 00:25:06.858 SGL Bit Bucket Descriptor: Not Supported 00:25:06.858 SGL Metadata Pointer: Not Supported 00:25:06.858 Oversized SGL: Not Supported 00:25:06.858 SGL Metadata Address: Not Supported 00:25:06.858 SGL Offset: Supported 00:25:06.858 Transport SGL Data Block: Not Supported 00:25:06.858 Replay Protected Memory Block: Not Supported 00:25:06.858 00:25:06.858 Firmware Slot Information 00:25:06.858 ========================= 00:25:06.858 Active slot: 1 00:25:06.858 Slot 1 Firmware Revision: 24.01.1 00:25:06.858 00:25:06.858 00:25:06.858 Commands Supported and Effects 00:25:06.858 ============================== 
00:25:06.858 Admin Commands 00:25:06.858 -------------- 00:25:06.858 Get Log Page (02h): Supported 00:25:06.858 Identify (06h): Supported 00:25:06.858 Abort (08h): Supported 00:25:06.858 Set Features (09h): Supported 00:25:06.858 Get Features (0Ah): Supported 00:25:06.858 Asynchronous Event Request (0Ch): Supported 00:25:06.858 Keep Alive (18h): Supported 00:25:06.858 I/O Commands 00:25:06.858 ------------ 00:25:06.858 Flush (00h): Supported LBA-Change 00:25:06.858 Write (01h): Supported LBA-Change 00:25:06.858 Read (02h): Supported 00:25:06.858 Compare (05h): Supported 00:25:06.858 Write Zeroes (08h): Supported LBA-Change 00:25:06.858 Dataset Management (09h): Supported LBA-Change 00:25:06.858 Copy (19h): Supported LBA-Change 00:25:06.858 Unknown (79h): Supported LBA-Change 00:25:06.858 Unknown (7Ah): Supported 00:25:06.858 00:25:06.858 Error Log 00:25:06.858 ========= 00:25:06.858 00:25:06.858 Arbitration 00:25:06.858 =========== 00:25:06.858 Arbitration Burst: 1 00:25:06.858 00:25:06.858 Power Management 00:25:06.858 ================ 00:25:06.858 Number of Power States: 1 00:25:06.858 Current Power State: Power State #0 00:25:06.858 Power State #0: 00:25:06.858 Max Power: 0.00 W 00:25:06.858 Non-Operational State: Operational 00:25:06.858 Entry Latency: Not Reported 00:25:06.858 Exit Latency: Not Reported 00:25:06.858 Relative Read Throughput: 0 00:25:06.858 Relative Read Latency: 0 00:25:06.858 Relative Write Throughput: 0 00:25:06.858 Relative Write Latency: 0 00:25:06.858 Idle Power: Not Reported 00:25:06.858 Active Power: Not Reported 00:25:06.858 Non-Operational Permissive Mode: Not Supported 00:25:06.858 00:25:06.858 Health Information 00:25:06.858 ================== 00:25:06.858 Critical Warnings: 00:25:06.858 Available Spare Space: OK 00:25:06.858 Temperature: OK 00:25:06.858 Device Reliability: OK 00:25:06.858 Read Only: No 00:25:06.858 Volatile Memory Backup: OK 00:25:06.858 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:06.858 Temperature Threshol[2024-07-21 11:48:36.223282] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0cc0 length 0x40 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.223290] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.858 [2024-07-21 11:48:36.223308] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.858 [2024-07-21 11:48:36.223314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:06.858 [2024-07-21 11:48:36.223321] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.223344] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:25:06.858 [2024-07-21 11:48:36.223353] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 13333 doesn't match qid 00:25:06.858 [2024-07-21 11:48:36.223367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32660 cdw0:5 sqhd:fe28 p:0 m:0 dnr:0 00:25:06.858 [2024-07-21 11:48:36.223374] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 13333 doesn't match qid 00:25:06.858 [2024-07-21 11:48:36.223382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32660 cdw0:5 sqhd:fe28 p:0 m:0 dnr:0 00:25:06.858 [2024-07-21 11:48:36.223389] 
nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 13333 doesn't match qid 00:25:06.858 [2024-07-21 11:48:36.223397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32660 cdw0:5 sqhd:fe28 p:0 m:0 dnr:0 00:25:06.858 [2024-07-21 11:48:36.223404] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 13333 doesn't match qid 00:25:06.858 [2024-07-21 11:48:36.223412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32660 cdw0:5 sqhd:fe28 p:0 m:0 dnr:0 00:25:06.858 [2024-07-21 11:48:36.223420] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.223428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.858 [2024-07-21 11:48:36.223443] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.858 [2024-07-21 11:48:36.223448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:25:06.858 [2024-07-21 11:48:36.223456] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.223464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.858 [2024-07-21 11:48:36.223470] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.223487] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.858 [2024-07-21 11:48:36.223493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:06.858 [2024-07-21 11:48:36.223499] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:25:06.858 [2024-07-21 11:48:36.223505] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:25:06.858 [2024-07-21 11:48:36.223513] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.223522] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.223529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.858 [2024-07-21 11:48:36.223547] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.858 [2024-07-21 11:48:36.223553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:25:06.858 [2024-07-21 11:48:36.223560] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.223569] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.223577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.858 [2024-07-21 11:48:36.223593] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.858 [2024-07-21 11:48:36.223599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:25:06.858 [2024-07-21 11:48:36.223605] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.223614] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.223622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.858 [2024-07-21 11:48:36.223642] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.858 [2024-07-21 11:48:36.223648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:25:06.858 [2024-07-21 11:48:36.223654] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.223663] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.223671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.858 [2024-07-21 11:48:36.223693] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.858 [2024-07-21 11:48:36.223699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:25:06.858 [2024-07-21 11:48:36.223705] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.223714] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.223722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.858 [2024-07-21 11:48:36.223742] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.858 [2024-07-21 11:48:36.223748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:25:06.858 [2024-07-21 11:48:36.223755] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.223764] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.223772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.858 [2024-07-21 11:48:36.223790] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.858 [2024-07-21 11:48:36.223799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:25:06.858 [2024-07-21 11:48:36.223805] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.223814] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.223822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.858 [2024-07-21 11:48:36.223840] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.858 [2024-07-21 11:48:36.223846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:06.858 [2024-07-21 11:48:36.223853] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.223861] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.223869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.858 [2024-07-21 11:48:36.223891] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.858 [2024-07-21 11:48:36.223897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:25:06.858 [2024-07-21 11:48:36.223904] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.223912] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.223921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.858 [2024-07-21 11:48:36.223941] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.858 [2024-07-21 11:48:36.223946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:25:06.858 [2024-07-21 11:48:36.223953] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.223961] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.223969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.858 [2024-07-21 11:48:36.223985] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.858 [2024-07-21 11:48:36.223991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:25:06.858 [2024-07-21 11:48:36.223997] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.224006] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.224014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.858 [2024-07-21 11:48:36.224036] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.858 [2024-07-21 11:48:36.224041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:25:06.858 [2024-07-21 
11:48:36.224048] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.224056] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.224064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.858 [2024-07-21 11:48:36.224085] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.858 [2024-07-21 11:48:36.224091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:25:06.858 [2024-07-21 11:48:36.224098] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.224106] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.224114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.858 [2024-07-21 11:48:36.224136] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.858 [2024-07-21 11:48:36.224142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:25:06.858 [2024-07-21 11:48:36.224148] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.224157] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.224165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.858 [2024-07-21 11:48:36.224185] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.858 [2024-07-21 11:48:36.224190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:25:06.858 [2024-07-21 11:48:36.224197] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.224205] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.224213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.858 [2024-07-21 11:48:36.224233] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.858 [2024-07-21 11:48:36.224239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:25:06.858 [2024-07-21 11:48:36.224245] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.224254] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.224262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.858 [2024-07-21 11:48:36.224282] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.858 [2024-07-21 11:48:36.224287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:25:06.858 [2024-07-21 11:48:36.224294] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.224302] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.224310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.858 [2024-07-21 11:48:36.224326] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.858 [2024-07-21 11:48:36.224332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:25:06.858 [2024-07-21 11:48:36.224338] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.224347] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.224355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.858 [2024-07-21 11:48:36.224376] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.858 [2024-07-21 11:48:36.224382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:25:06.858 [2024-07-21 11:48:36.224388] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.224397] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.224405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.858 [2024-07-21 11:48:36.224421] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.858 [2024-07-21 11:48:36.224426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:25:06.858 [2024-07-21 11:48:36.224433] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.224442] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.858 [2024-07-21 11:48:36.224449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.858 [2024-07-21 11:48:36.224467] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.859 [2024-07-21 11:48:36.224473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:25:06.859 [2024-07-21 11:48:36.224479] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.224488] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.224496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.859 [2024-07-21 11:48:36.224520] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.859 [2024-07-21 11:48:36.224525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:25:06.859 [2024-07-21 11:48:36.224532] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.224540] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.224548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.859 [2024-07-21 11:48:36.224572] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.859 [2024-07-21 11:48:36.224578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:25:06.859 [2024-07-21 11:48:36.224584] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.224593] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.224600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.859 [2024-07-21 11:48:36.224615] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.859 [2024-07-21 11:48:36.224620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:25:06.859 [2024-07-21 11:48:36.224631] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.224640] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.224648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.859 [2024-07-21 11:48:36.224665] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.859 [2024-07-21 11:48:36.224671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:25:06.859 [2024-07-21 11:48:36.224677] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.224686] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.224694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.859 [2024-07-21 11:48:36.224708] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.859 [2024-07-21 11:48:36.224714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:25:06.859 [2024-07-21 
11:48:36.224720] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.224729] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.224736] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.859 [2024-07-21 11:48:36.224758] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.859 [2024-07-21 11:48:36.224764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:25:06.859 [2024-07-21 11:48:36.224770] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.224779] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.224787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.859 [2024-07-21 11:48:36.224803] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.859 [2024-07-21 11:48:36.224808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:25:06.859 [2024-07-21 11:48:36.224815] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.224823] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.224831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.859 [2024-07-21 11:48:36.224857] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.859 [2024-07-21 11:48:36.224863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:25:06.859 [2024-07-21 11:48:36.224869] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.224878] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.224886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.859 [2024-07-21 11:48:36.224902] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.859 [2024-07-21 11:48:36.224907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:25:06.859 [2024-07-21 11:48:36.224914] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.224922] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.224931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.859 [2024-07-21 11:48:36.224946] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.859 [2024-07-21 11:48:36.224951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:25:06.859 [2024-07-21 11:48:36.224958] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.224966] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.224974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.859 [2024-07-21 11:48:36.224990] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.859 [2024-07-21 11:48:36.224996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:25:06.859 [2024-07-21 11:48:36.225002] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.225011] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.225019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.859 [2024-07-21 11:48:36.225041] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.859 [2024-07-21 11:48:36.225046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:25:06.859 [2024-07-21 11:48:36.225053] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.225062] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.225069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.859 [2024-07-21 11:48:36.225084] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.859 [2024-07-21 11:48:36.225089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:25:06.859 [2024-07-21 11:48:36.225096] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.225105] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.225112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.859 [2024-07-21 11:48:36.225130] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.859 [2024-07-21 11:48:36.225136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:25:06.859 [2024-07-21 11:48:36.225142] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.225151] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.225159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.859 [2024-07-21 11:48:36.225177] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.859 [2024-07-21 11:48:36.225183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:25:06.859 [2024-07-21 11:48:36.225189] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.225198] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.225207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.859 [2024-07-21 11:48:36.225227] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.859 [2024-07-21 11:48:36.225232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:25:06.859 [2024-07-21 11:48:36.225239] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.225248] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.225255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.859 [2024-07-21 11:48:36.225273] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.859 [2024-07-21 11:48:36.225279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:25:06.859 [2024-07-21 11:48:36.225285] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.225294] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.225302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.859 [2024-07-21 11:48:36.225318] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.859 [2024-07-21 11:48:36.225324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:25:06.859 [2024-07-21 11:48:36.225330] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.225339] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.225347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.859 [2024-07-21 11:48:36.225363] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.859 [2024-07-21 11:48:36.225368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:06.859 [2024-07-21 
11:48:36.225375] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.225383] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.225391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.859 [2024-07-21 11:48:36.225409] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.859 [2024-07-21 11:48:36.225415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:25:06.859 [2024-07-21 11:48:36.225421] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.225430] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.225438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.859 [2024-07-21 11:48:36.225458] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.859 [2024-07-21 11:48:36.225463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:25:06.859 [2024-07-21 11:48:36.225470] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.225480] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.225487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.859 [2024-07-21 11:48:36.225507] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.859 [2024-07-21 11:48:36.225513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:25:06.859 [2024-07-21 11:48:36.225520] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.225528] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.225536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.859 [2024-07-21 11:48:36.225558] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.859 [2024-07-21 11:48:36.225564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:25:06.859 [2024-07-21 11:48:36.225570] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.225579] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.225587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.859 [2024-07-21 11:48:36.225601] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.859 [2024-07-21 11:48:36.225606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:25:06.859 [2024-07-21 11:48:36.225613] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.225621] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.229638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:06.859 [2024-07-21 11:48:36.229659] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:06.859 [2024-07-21 11:48:36.229664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0007 p:0 m:0 dnr:0 00:25:06.859 [2024-07-21 11:48:36.229671] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183d00 00:25:06.859 [2024-07-21 11:48:36.229678] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:25:06.859 d: 0 Kelvin (-273 Celsius) 00:25:06.859 Available Spare: 0% 00:25:06.859 Available Spare Threshold: 0% 00:25:06.859 Life Percentage Used: 0% 00:25:06.859 Data Units Read: 0 00:25:06.859 Data Units Written: 0 00:25:06.859 Host Read Commands: 0 00:25:06.859 Host Write Commands: 0 00:25:06.859 Controller Busy Time: 0 minutes 00:25:06.859 Power Cycles: 0 00:25:06.859 Power On Hours: 0 hours 00:25:06.859 Unsafe Shutdowns: 0 00:25:06.859 Unrecoverable Media Errors: 0 00:25:06.859 Lifetime Error Log Entries: 0 00:25:06.859 Warning Temperature Time: 0 minutes 00:25:06.859 Critical Temperature Time: 0 minutes 00:25:06.859 00:25:06.859 Number of Queues 00:25:06.859 ================ 00:25:06.859 Number of I/O Submission Queues: 127 00:25:06.859 Number of I/O Completion Queues: 127 00:25:06.859 00:25:06.859 Active Namespaces 00:25:06.859 ================= 00:25:06.859 Namespace ID:1 00:25:06.859 Error Recovery Timeout: Unlimited 00:25:06.859 Command Set Identifier: NVM (00h) 00:25:06.859 Deallocate: Supported 00:25:06.859 Deallocated/Unwritten Error: Not Supported 00:25:06.859 Deallocated Read Value: Unknown 00:25:06.859 Deallocate in Write Zeroes: Not Supported 00:25:06.859 Deallocated Guard Field: 0xFFFF 00:25:06.859 Flush: Supported 00:25:06.859 Reservation: Supported 00:25:06.859 Namespace Sharing Capabilities: Multiple Controllers 00:25:06.859 Size (in LBAs): 131072 (0GiB) 00:25:06.859 Capacity (in LBAs): 131072 (0GiB) 00:25:06.859 Utilization (in LBAs): 131072 (0GiB) 00:25:06.859 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:06.859 EUI64: ABCDEF0123456789 00:25:06.859 UUID: c3e612a3-62ca-4aa1-ae51-95e23a451650 00:25:06.859 Thin Provisioning: Not Supported 00:25:06.859 Per-NS Atomic Units: Yes 00:25:06.859 Atomic Boundary Size (Normal): 0 00:25:06.859 Atomic Boundary Size (PFail): 0 00:25:06.859 Atomic Boundary Offset: 0 00:25:06.859 Maximum Single Source Range Length: 65535 00:25:06.859 Maximum Copy Length: 65535 00:25:06.859 Maximum Source Range Count: 1 00:25:06.859 NGUID/EUI64 Never Reused: No 00:25:06.859 Namespace Write Protected: No 00:25:06.859 Number of LBA Formats: 1 00:25:06.859 Current LBA Format: LBA Format #00 00:25:06.859 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:06.859 
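The identify readout above (vendor 8086, model "SPDK bdev Controller", firmware 24.01.1, max transfer 131072 bytes) is printed by the SPDK host-side identify test against the RDMA target at 192.168.100.8:4420. A comparable readout can be pulled from any Linux host with nvme-cli; the following is a sketch, not part of this run — the address, port, and subsystem NQN are taken from the log above, while the controller device node is an assumption:

    # Sketch: query the same subsystem with nvme-cli over RDMA.
    # Assumes nvme-cli is installed and the host can reach 192.168.100.8 over RoCE/IB.
    modprobe nvme-rdma
    nvme discover -t rdma -a 192.168.100.8 -s 4420
    nvme connect -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme id-ctrl /dev/nvme0        # controller index may differ on your host;
                                   # prints MDTS, OACS, SGLS etc. as seen above
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1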
00:25:07.117 11:48:36 -- host/identify.sh@51 -- # sync 00:25:07.117 11:48:36 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:07.117 11:48:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.117 11:48:36 -- common/autotest_common.sh@10 -- # set +x 00:25:07.117 11:48:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.117 11:48:36 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:07.117 11:48:36 -- host/identify.sh@56 -- # nvmftestfini 00:25:07.117 11:48:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:07.117 11:48:36 -- nvmf/common.sh@116 -- # sync 00:25:07.117 11:48:36 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:25:07.117 11:48:36 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:25:07.117 11:48:36 -- nvmf/common.sh@119 -- # set +e 00:25:07.117 11:48:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:07.117 11:48:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:25:07.117 rmmod nvme_rdma 00:25:07.117 rmmod nvme_fabrics 00:25:07.117 11:48:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:07.117 11:48:36 -- nvmf/common.sh@123 -- # set -e 00:25:07.117 11:48:36 -- nvmf/common.sh@124 -- # return 0 00:25:07.117 11:48:36 -- nvmf/common.sh@477 -- # '[' -n 2466350 ']' 00:25:07.117 11:48:36 -- nvmf/common.sh@478 -- # killprocess 2466350 00:25:07.117 11:48:36 -- common/autotest_common.sh@926 -- # '[' -z 2466350 ']' 00:25:07.117 11:48:36 -- common/autotest_common.sh@930 -- # kill -0 2466350 00:25:07.117 11:48:36 -- common/autotest_common.sh@931 -- # uname 00:25:07.117 11:48:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:07.117 11:48:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2466350 00:25:07.117 11:48:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:07.117 11:48:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:07.117 11:48:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2466350' 00:25:07.117 killing process with pid 2466350 00:25:07.117 11:48:36 -- common/autotest_common.sh@945 -- # kill 2466350 00:25:07.117 [2024-07-21 11:48:36.405634] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:07.117 11:48:36 -- common/autotest_common.sh@950 -- # wait 2466350 00:25:07.374 11:48:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:07.374 11:48:36 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:25:07.374 00:25:07.374 real 0m10.250s 00:25:07.374 user 0m8.787s 00:25:07.374 sys 0m6.861s 00:25:07.374 11:48:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:07.374 11:48:36 -- common/autotest_common.sh@10 -- # set +x 00:25:07.374 ************************************ 00:25:07.374 END TEST nvmf_identify 00:25:07.374 ************************************ 00:25:07.374 11:48:36 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:25:07.374 11:48:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:07.374 11:48:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:07.374 11:48:36 -- common/autotest_common.sh@10 -- # set +x 00:25:07.374 ************************************ 00:25:07.374 START TEST nvmf_perf 00:25:07.374 ************************************ 00:25:07.374 11:48:36 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:25:07.632 * Looking for test storage... 00:25:07.632 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:07.632 11:48:36 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:07.632 11:48:36 -- nvmf/common.sh@7 -- # uname -s 00:25:07.632 11:48:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:07.632 11:48:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:07.632 11:48:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:07.632 11:48:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:07.632 11:48:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:07.632 11:48:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:07.632 11:48:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:07.632 11:48:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:07.632 11:48:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:07.632 11:48:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:07.632 11:48:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:07.632 11:48:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:25:07.632 11:48:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:07.632 11:48:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:07.632 11:48:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:07.632 11:48:36 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:07.632 11:48:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:07.632 11:48:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:07.632 11:48:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:07.632 11:48:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.632 11:48:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.632 11:48:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.632 11:48:36 -- paths/export.sh@5 -- # export PATH 00:25:07.632 11:48:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.632 11:48:36 -- nvmf/common.sh@46 -- # : 0 00:25:07.632 11:48:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:07.632 11:48:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:07.632 11:48:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:07.632 11:48:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:07.632 11:48:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:07.632 11:48:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:07.632 11:48:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:07.632 11:48:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:07.632 11:48:36 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:07.632 11:48:36 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:07.632 11:48:36 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:25:07.632 11:48:36 -- host/perf.sh@17 -- # nvmftestinit 00:25:07.632 11:48:36 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:25:07.632 11:48:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:07.632 11:48:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:07.632 11:48:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:07.632 11:48:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:07.632 11:48:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:07.632 11:48:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:07.632 11:48:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:07.632 11:48:36 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:07.632 11:48:36 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:07.632 11:48:36 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:07.632 11:48:36 -- common/autotest_common.sh@10 -- # set +x 00:25:15.736 11:48:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:15.736 11:48:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:15.736 11:48:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:15.736 11:48:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:15.736 11:48:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:15.736 11:48:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:15.736 11:48:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:15.736 11:48:44 -- nvmf/common.sh@294 -- # net_devs=() 
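The trace above covers the identify test's teardown — deleting the subsystem via rpc.py, unloading nvme-rdma/nvme-fabrics, killing the target process — followed by perf.sh re-running nvmftestinit. A minimal sketch of that teardown order, under the assumption that $nvmfpid stands in for the target PID the harness tracks and that the rpc.py path matches this workspace:

    # Sketch of the teardown sequence traced above; $nvmfpid is a stand-in.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1    # remove the subsystem first
    modprobe -v -r nvme-rdma nvme-fabrics                      # then drop the host modules
    kill "$nvmfpid"                                            # finally stop the target app
    while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.5; done   # wait for it to exit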
00:25:15.737 11:48:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:15.737 11:48:44 -- nvmf/common.sh@295 -- # e810=() 00:25:15.737 11:48:44 -- nvmf/common.sh@295 -- # local -ga e810 00:25:15.737 11:48:44 -- nvmf/common.sh@296 -- # x722=() 00:25:15.737 11:48:44 -- nvmf/common.sh@296 -- # local -ga x722 00:25:15.737 11:48:44 -- nvmf/common.sh@297 -- # mlx=() 00:25:15.737 11:48:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:15.737 11:48:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:15.737 11:48:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:15.737 11:48:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:15.737 11:48:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:15.737 11:48:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:15.737 11:48:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:15.737 11:48:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:15.737 11:48:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:15.737 11:48:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:15.737 11:48:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:15.737 11:48:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:15.737 11:48:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:15.737 11:48:44 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:25:15.737 11:48:44 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:25:15.737 11:48:44 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:25:15.737 11:48:44 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:25:15.737 11:48:44 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:25:15.737 11:48:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:15.737 11:48:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:15.737 11:48:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:25:15.737 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:25:15.737 11:48:44 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:25:15.737 11:48:44 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:25:15.737 11:48:44 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:15.737 11:48:44 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:15.737 11:48:44 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:25:15.737 11:48:44 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:25:15.737 11:48:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:15.737 11:48:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:25:15.737 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:25:15.737 11:48:44 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:25:15.737 11:48:44 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:25:15.737 11:48:44 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:15.737 11:48:44 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:15.737 11:48:44 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:25:15.737 11:48:44 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:25:15.737 11:48:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:15.737 11:48:44 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:25:15.737 11:48:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:15.737 11:48:44 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.737 11:48:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:15.737 11:48:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.737 11:48:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:25:15.737 Found net devices under 0000:d9:00.0: mlx_0_0 00:25:15.737 11:48:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:15.737 11:48:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:15.737 11:48:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.737 11:48:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:15.737 11:48:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.737 11:48:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:25:15.737 Found net devices under 0000:d9:00.1: mlx_0_1 00:25:15.737 11:48:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:15.737 11:48:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:15.737 11:48:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:15.737 11:48:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:15.737 11:48:44 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:25:15.737 11:48:44 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:25:15.737 11:48:44 -- nvmf/common.sh@408 -- # rdma_device_init 00:25:15.737 11:48:44 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:25:15.737 11:48:44 -- nvmf/common.sh@57 -- # uname 00:25:15.737 11:48:44 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:25:15.737 11:48:44 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:25:15.737 11:48:44 -- nvmf/common.sh@62 -- # modprobe ib_core 00:25:15.737 11:48:44 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:25:15.737 11:48:44 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:25:15.737 11:48:44 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:25:15.737 11:48:44 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:25:15.737 11:48:44 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:25:15.737 11:48:44 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:25:15.737 11:48:44 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:15.737 11:48:44 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:25:15.737 11:48:44 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:15.737 11:48:44 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:25:15.737 11:48:44 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:25:15.737 11:48:44 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:15.737 11:48:44 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:25:15.737 11:48:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:15.737 11:48:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:15.737 11:48:44 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:15.737 11:48:44 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:25:15.737 11:48:44 -- nvmf/common.sh@104 -- # continue 2 00:25:15.737 11:48:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:15.737 11:48:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:15.737 11:48:44 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:15.737 11:48:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:15.737 11:48:44 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:15.737 11:48:44 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:25:15.737 11:48:44 -- 
nvmf/common.sh@104 -- # continue 2 00:25:15.737 11:48:44 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:25:15.737 11:48:44 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:25:15.737 11:48:44 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:25:15.737 11:48:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:25:15.737 11:48:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:15.737 11:48:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:15.737 11:48:44 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:25:15.737 11:48:44 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:25:15.737 11:48:44 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:25:15.737 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:15.737 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:25:15.737 altname enp217s0f0np0 00:25:15.737 altname ens818f0np0 00:25:15.737 inet 192.168.100.8/24 scope global mlx_0_0 00:25:15.737 valid_lft forever preferred_lft forever 00:25:15.737 11:48:44 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:25:15.737 11:48:44 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:25:15.737 11:48:44 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:25:15.737 11:48:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:25:15.737 11:48:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:15.737 11:48:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:15.737 11:48:44 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:25:15.737 11:48:44 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:25:15.737 11:48:44 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:25:15.737 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:15.737 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:25:15.737 altname enp217s0f1np1 00:25:15.737 altname ens818f1np1 00:25:15.737 inet 192.168.100.9/24 scope global mlx_0_1 00:25:15.737 valid_lft forever preferred_lft forever 00:25:15.737 11:48:44 -- nvmf/common.sh@410 -- # return 0 00:25:15.737 11:48:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:15.737 11:48:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:15.737 11:48:44 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:25:15.737 11:48:44 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:25:15.737 11:48:44 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:25:15.737 11:48:44 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:15.737 11:48:44 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:25:15.737 11:48:44 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:25:15.737 11:48:44 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:15.737 11:48:44 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:25:15.737 11:48:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:15.737 11:48:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:15.737 11:48:44 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:15.737 11:48:44 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:25:15.737 11:48:44 -- nvmf/common.sh@104 -- # continue 2 00:25:15.737 11:48:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:15.737 11:48:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:15.737 11:48:44 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:15.737 11:48:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:15.737 11:48:44 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
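The allocate_nic_ips loop traced above resolves each RDMA interface to its first IPv4 address with an ip/awk/cut pipeline (get_ip_address). The same pipeline as a standalone function, using the interface name this run discovered:

    # Print the first IPv4 address of an interface, with the /prefix stripped.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # prints 192.168.100.8 on this node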
00:25:15.737 11:48:44 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:25:15.737 11:48:44 -- nvmf/common.sh@104 -- # continue 2 00:25:15.737 11:48:44 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:25:15.737 11:48:44 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:25:15.737 11:48:44 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:25:15.737 11:48:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:25:15.737 11:48:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:15.737 11:48:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:15.737 11:48:44 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:25:15.738 11:48:44 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:25:15.738 11:48:44 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:25:15.738 11:48:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:25:15.738 11:48:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:15.738 11:48:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:15.738 11:48:44 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:25:15.738 192.168.100.9' 00:25:15.738 11:48:44 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:25:15.738 192.168.100.9' 00:25:15.738 11:48:44 -- nvmf/common.sh@445 -- # head -n 1 00:25:15.738 11:48:44 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:15.738 11:48:44 -- nvmf/common.sh@446 -- # tail -n +2 00:25:15.738 11:48:44 -- nvmf/common.sh@446 -- # head -n 1 00:25:15.738 11:48:44 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:25:15.738 192.168.100.9' 00:25:15.738 11:48:44 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:15.738 11:48:44 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:25:15.738 11:48:44 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:15.738 11:48:44 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:25:15.738 11:48:44 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:25:15.738 11:48:44 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:25:15.738 11:48:44 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:15.738 11:48:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:15.738 11:48:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:15.738 11:48:44 -- common/autotest_common.sh@10 -- # set +x 00:25:15.738 11:48:44 -- nvmf/common.sh@469 -- # nvmfpid=2470595 00:25:15.738 11:48:44 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:15.738 11:48:44 -- nvmf/common.sh@470 -- # waitforlisten 2470595 00:25:15.738 11:48:44 -- common/autotest_common.sh@819 -- # '[' -z 2470595 ']' 00:25:15.738 11:48:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:15.738 11:48:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:15.738 11:48:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:15.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:15.738 11:48:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:15.738 11:48:44 -- common/autotest_common.sh@10 -- # set +x 00:25:15.738 [2024-07-21 11:48:45.020812] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
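RDMA_IP_LIST above is a newline-separated string, so the first and second target IPs fall out of plain head/tail slicing. A minimal re-creation with the two addresses this run found:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9
    echo "$NVMF_FIRST_TARGET_IP / $NVMF_SECOND_TARGET_IP"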
00:25:15.738 [2024-07-21 11:48:45.020864] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:15.738 EAL: No free 2048 kB hugepages reported on node 1 00:25:15.738 [2024-07-21 11:48:45.106116] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:15.738 [2024-07-21 11:48:45.144994] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:15.738 [2024-07-21 11:48:45.145101] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:15.738 [2024-07-21 11:48:45.145112] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:15.738 [2024-07-21 11:48:45.145121] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:15.738 [2024-07-21 11:48:45.145171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:15.738 [2024-07-21 11:48:45.145267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:15.738 [2024-07-21 11:48:45.145294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:15.738 [2024-07-21 11:48:45.145295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:16.695 11:48:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:16.695 11:48:45 -- common/autotest_common.sh@852 -- # return 0 00:25:16.695 11:48:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:16.695 11:48:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:16.695 11:48:45 -- common/autotest_common.sh@10 -- # set +x 00:25:16.695 11:48:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:16.695 11:48:45 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:16.695 11:48:45 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:19.974 11:48:48 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:19.974 11:48:48 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:19.974 11:48:49 -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:25:19.974 11:48:49 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:19.974 11:48:49 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:19.974 11:48:49 -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:25:19.974 11:48:49 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:19.974 11:48:49 -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:25:19.974 11:48:49 -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:25:20.232 [2024-07-21 11:48:49.456941] rdma.c:2778:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:25:20.232 [2024-07-21 11:48:49.478353] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1991920/0x199fdc0) succeed. 00:25:20.232 [2024-07-21 11:48:49.488694] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1992f10/0x1a3fec0) succeed. 
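The transport creation above and the subsystem, namespace, and listener calls traced just after it together stand up the NVMe-oF/RDMA target. Condensed into one sequence (same rpc.py commands and arguments as the trace; the rpc path is this workspace's, so adjust it for another checkout):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0   # -c 0 is raised to the 256-byte minimum, per the warning above
    $rpc bdev_malloc_create 64 512                                      # 64 MiB bdev, 512-byte blocks -> Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420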
00:25:20.232 11:48:49 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:20.489 11:48:49 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:20.489 11:48:49 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:20.747 11:48:49 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:20.747 11:48:49 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:20.747 11:48:50 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:21.005 [2024-07-21 11:48:50.277809] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:21.005 11:48:50 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:25:21.262 11:48:50 -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:25:21.262 11:48:50 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:25:21.262 11:48:50 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:21.262 11:48:50 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:25:22.634 Initializing NVMe Controllers 00:25:22.634 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:25:22.634 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:25:22.634 Initialization complete. Launching workers. 00:25:22.634 ======================================================== 00:25:22.634 Latency(us) 00:25:22.634 Device Information : IOPS MiB/s Average min max 00:25:22.634 PCIE (0000:d8:00.0) NSID 1 from core 0: 104210.00 407.07 306.70 28.57 7197.19 00:25:22.634 ======================================================== 00:25:22.634 Total : 104210.00 407.07 306.70 28.57 7197.19 00:25:22.634 00:25:22.634 11:48:51 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:22.634 EAL: No free 2048 kB hugepages reported on node 1 00:25:25.910 Initializing NVMe Controllers 00:25:25.910 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:25.910 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:25.910 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:25.910 Initialization complete. Launching workers. 
00:25:25.910 ======================================================== 00:25:25.910 Latency(us) 00:25:25.910 Device Information : IOPS MiB/s Average min max 00:25:25.910 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6786.99 26.51 147.14 47.75 7026.60 00:25:25.910 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5229.50 20.43 190.26 71.22 7087.97 00:25:25.910 ======================================================== 00:25:25.910 Total : 12016.49 46.94 165.90 47.75 7087.97 00:25:25.910 00:25:25.910 11:48:55 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:25.910 EAL: No free 2048 kB hugepages reported on node 1 00:25:29.191 Initializing NVMe Controllers 00:25:29.191 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:29.191 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:29.191 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:29.191 Initialization complete. Launching workers. 00:25:29.191 ======================================================== 00:25:29.191 Latency(us) 00:25:29.191 Device Information : IOPS MiB/s Average min max 00:25:29.191 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19328.00 75.50 1655.71 467.11 7248.19 00:25:29.191 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4032.00 15.75 7971.35 5845.03 10159.58 00:25:29.191 ======================================================== 00:25:29.191 Total : 23360.00 91.25 2745.81 467.11 10159.58 00:25:29.191 00:25:29.191 11:48:58 -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:25:29.191 11:48:58 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:29.191 EAL: No free 2048 kB hugepages reported on node 1 00:25:34.455 Initializing NVMe Controllers 00:25:34.455 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:34.455 Controller IO queue size 128, less than required. 00:25:34.455 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:34.455 Controller IO queue size 128, less than required. 00:25:34.455 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:34.455 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:34.455 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:34.455 Initialization complete. Launching workers. 
00:25:34.455 ======================================================== 00:25:34.455 Latency(us) 00:25:34.455 Device Information : IOPS MiB/s Average min max 00:25:34.455 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4041.76 1010.44 31783.54 14360.69 72955.75 00:25:34.455 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4099.74 1024.93 30982.63 15405.90 48523.90 00:25:34.455 ======================================================== 00:25:34.455 Total : 8141.50 2035.37 31380.23 14360.69 72955.75 00:25:34.455 00:25:34.455 11:49:02 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:25:34.455 EAL: No free 2048 kB hugepages reported on node 1 00:25:34.455 No valid NVMe controllers or AIO or URING devices found 00:25:34.455 Initializing NVMe Controllers 00:25:34.455 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:34.455 Controller IO queue size 128, less than required. 00:25:34.455 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:34.455 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:34.455 Controller IO queue size 128, less than required. 00:25:34.455 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:34.455 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:25:34.455 WARNING: Some requested NVMe devices were skipped 00:25:34.455 11:49:03 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:25:34.455 EAL: No free 2048 kB hugepages reported on node 1 00:25:38.630 Initializing NVMe Controllers 00:25:38.630 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:38.630 Controller IO queue size 128, less than required. 00:25:38.630 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:38.630 Controller IO queue size 128, less than required. 00:25:38.630 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:38.630 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:38.630 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:38.630 Initialization complete. Launching workers. 
00:25:38.630 00:25:38.630 ==================== 00:25:38.630 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:38.630 RDMA transport: 00:25:38.630 dev name: mlx5_0 00:25:38.630 polls: 423244 00:25:38.630 idle_polls: 419308 00:25:38.630 completions: 45943 00:25:38.630 queued_requests: 1 00:25:38.630 total_send_wrs: 23035 00:25:38.630 send_doorbell_updates: 3738 00:25:38.630 total_recv_wrs: 23035 00:25:38.630 recv_doorbell_updates: 3738 00:25:38.630 --------------------------------- 00:25:38.630 00:25:38.630 ==================== 00:25:38.630 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:38.630 RDMA transport: 00:25:38.630 dev name: mlx5_0 00:25:38.630 polls: 425275 00:25:38.630 idle_polls: 424997 00:25:38.630 completions: 20231 00:25:38.630 queued_requests: 1 00:25:38.630 total_send_wrs: 10179 00:25:38.630 send_doorbell_updates: 256 00:25:38.630 total_recv_wrs: 10179 00:25:38.630 recv_doorbell_updates: 256 00:25:38.630 --------------------------------- 00:25:38.630 ======================================================== 00:25:38.630 Latency(us) 00:25:38.630 Device Information : IOPS MiB/s Average min max 00:25:38.630 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5790.50 1447.62 22182.32 9904.41 49566.14 00:25:38.630 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2576.50 644.12 49896.78 26198.59 76273.23 00:25:38.630 ======================================================== 00:25:38.630 Total : 8367.00 2091.75 30716.60 9904.41 76273.23 00:25:38.630 00:25:38.630 11:49:07 -- host/perf.sh@66 -- # sync 00:25:38.630 11:49:07 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:38.630 11:49:07 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:25:38.630 11:49:07 -- host/perf.sh@71 -- # '[' -n 0000:d8:00.0 ']' 00:25:38.630 11:49:07 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:25:45.249 11:49:13 -- host/perf.sh@72 -- # ls_guid=36278bef-bb6d-4999-88bb-e633be39a491 00:25:45.249 11:49:13 -- host/perf.sh@73 -- # get_lvs_free_mb 36278bef-bb6d-4999-88bb-e633be39a491 00:25:45.249 11:49:13 -- common/autotest_common.sh@1343 -- # local lvs_uuid=36278bef-bb6d-4999-88bb-e633be39a491 00:25:45.249 11:49:13 -- common/autotest_common.sh@1344 -- # local lvs_info 00:25:45.249 11:49:13 -- common/autotest_common.sh@1345 -- # local fc 00:25:45.249 11:49:13 -- common/autotest_common.sh@1346 -- # local cs 00:25:45.249 11:49:13 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:45.249 11:49:13 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:25:45.249 { 00:25:45.249 "uuid": "36278bef-bb6d-4999-88bb-e633be39a491", 00:25:45.249 "name": "lvs_0", 00:25:45.249 "base_bdev": "Nvme0n1", 00:25:45.249 "total_data_clusters": 476466, 00:25:45.249 "free_clusters": 476466, 00:25:45.249 "block_size": 512, 00:25:45.249 "cluster_size": 4194304 00:25:45.249 } 00:25:45.249 ]' 00:25:45.249 11:49:13 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="36278bef-bb6d-4999-88bb-e633be39a491") .free_clusters' 00:25:45.249 11:49:14 -- common/autotest_common.sh@1348 -- # fc=476466 00:25:45.249 11:49:14 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="36278bef-bb6d-4999-88bb-e633be39a491") .cluster_size' 00:25:45.249 11:49:14 
-- common/autotest_common.sh@1349 -- # cs=4194304 00:25:45.249 11:49:14 -- common/autotest_common.sh@1352 -- # free_mb=1905864 00:25:45.249 11:49:14 -- common/autotest_common.sh@1353 -- # echo 1905864 00:25:45.249 1905864 00:25:45.249 11:49:14 -- host/perf.sh@77 -- # '[' 1905864 -gt 20480 ']' 00:25:45.249 11:49:14 -- host/perf.sh@78 -- # free_mb=20480 00:25:45.249 11:49:14 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 36278bef-bb6d-4999-88bb-e633be39a491 lbd_0 20480 00:25:45.249 11:49:14 -- host/perf.sh@80 -- # lb_guid=087e8a89-8a49-4753-93ce-d14487d1787a 00:25:45.249 11:49:14 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 087e8a89-8a49-4753-93ce-d14487d1787a lvs_n_0 00:25:47.144 11:49:16 -- host/perf.sh@83 -- # ls_nested_guid=17e017df-722a-4f45-a602-30f77826108f 00:25:47.144 11:49:16 -- host/perf.sh@84 -- # get_lvs_free_mb 17e017df-722a-4f45-a602-30f77826108f 00:25:47.144 11:49:16 -- common/autotest_common.sh@1343 -- # local lvs_uuid=17e017df-722a-4f45-a602-30f77826108f 00:25:47.144 11:49:16 -- common/autotest_common.sh@1344 -- # local lvs_info 00:25:47.144 11:49:16 -- common/autotest_common.sh@1345 -- # local fc 00:25:47.144 11:49:16 -- common/autotest_common.sh@1346 -- # local cs 00:25:47.145 11:49:16 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:47.402 11:49:16 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:25:47.402 { 00:25:47.402 "uuid": "36278bef-bb6d-4999-88bb-e633be39a491", 00:25:47.402 "name": "lvs_0", 00:25:47.402 "base_bdev": "Nvme0n1", 00:25:47.402 "total_data_clusters": 476466, 00:25:47.402 "free_clusters": 471346, 00:25:47.402 "block_size": 512, 00:25:47.402 "cluster_size": 4194304 00:25:47.402 }, 00:25:47.402 { 00:25:47.402 "uuid": "17e017df-722a-4f45-a602-30f77826108f", 00:25:47.402 "name": "lvs_n_0", 00:25:47.402 "base_bdev": "087e8a89-8a49-4753-93ce-d14487d1787a", 00:25:47.402 "total_data_clusters": 5114, 00:25:47.402 "free_clusters": 5114, 00:25:47.402 "block_size": 512, 00:25:47.402 "cluster_size": 4194304 00:25:47.402 } 00:25:47.402 ]' 00:25:47.402 11:49:16 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="17e017df-722a-4f45-a602-30f77826108f") .free_clusters' 00:25:47.402 11:49:16 -- common/autotest_common.sh@1348 -- # fc=5114 00:25:47.402 11:49:16 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="17e017df-722a-4f45-a602-30f77826108f") .cluster_size' 00:25:47.402 11:49:16 -- common/autotest_common.sh@1349 -- # cs=4194304 00:25:47.402 11:49:16 -- common/autotest_common.sh@1352 -- # free_mb=20456 00:25:47.402 11:49:16 -- common/autotest_common.sh@1353 -- # echo 20456 00:25:47.402 20456 00:25:47.402 11:49:16 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:25:47.402 11:49:16 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 17e017df-722a-4f45-a602-30f77826108f lbd_nest_0 20456 00:25:47.659 11:49:16 -- host/perf.sh@88 -- # lb_nested_guid=c476214b-7374-4d6e-a26d-82277b5c8600 00:25:47.659 11:49:16 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:47.954 11:49:17 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:25:47.954 11:49:17 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
c476214b-7374-4d6e-a26d-82277b5c8600 00:25:47.954 11:49:17 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:48.211 11:49:17 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:25:48.211 11:49:17 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:25:48.211 11:49:17 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:25:48.211 11:49:17 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:48.211 11:49:17 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:48.211 EAL: No free 2048 kB hugepages reported on node 1 00:26:00.399 Initializing NVMe Controllers 00:26:00.399 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:00.399 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:00.399 Initialization complete. Launching workers. 00:26:00.399 ======================================================== 00:26:00.399 Latency(us) 00:26:00.399 Device Information : IOPS MiB/s Average min max 00:26:00.399 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5922.28 2.89 168.38 67.40 7050.88 00:26:00.399 ======================================================== 00:26:00.399 Total : 5922.28 2.89 168.38 67.40 7050.88 00:26:00.399 00:26:00.399 11:49:28 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:00.399 11:49:28 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:00.399 EAL: No free 2048 kB hugepages reported on node 1 00:26:12.603 Initializing NVMe Controllers 00:26:12.603 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:12.603 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:12.603 Initialization complete. Launching workers. 00:26:12.603 ======================================================== 00:26:12.603 Latency(us) 00:26:12.603 Device Information : IOPS MiB/s Average min max 00:26:12.603 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2669.45 333.68 374.11 156.76 8139.13 00:26:12.604 ======================================================== 00:26:12.604 Total : 2669.45 333.68 374.11 156.76 8139.13 00:26:12.604 00:26:12.604 11:49:40 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:12.604 11:49:40 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:12.604 11:49:40 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:12.604 EAL: No free 2048 kB hugepages reported on node 1 00:26:22.567 Initializing NVMe Controllers 00:26:22.567 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:22.567 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:22.567 Initialization complete. Launching workers. 
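The qd_depth and io_size arrays defined just above drive a full sweep: each (queue depth, IO size) pair gets its own 10-second randrw run against the RDMA listener, and the result tables for the six points follow below, starting with -q 1 -o 512. The loop, reconstructed from the traced commands:

    perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf
    qd_depth=("1" "32" "128")
    io_size=("512" "131072")
    for qd in "${qd_depth[@]}"; do
        for o in "${io_size[@]}"; do
            # e.g. the first iteration is -q 1 -o 512, matching the run launched above
            $perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
                  -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
        done
    done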
00:26:22.567 ======================================================== 00:26:22.567 Latency(us) 00:26:22.567 Device Information : IOPS MiB/s Average min max 00:26:22.567 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12109.50 5.91 2642.43 900.34 9759.19 00:26:22.567 ======================================================== 00:26:22.567 Total : 12109.50 5.91 2642.43 900.34 9759.19 00:26:22.567 00:26:22.567 11:49:51 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:22.568 11:49:51 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:22.568 EAL: No free 2048 kB hugepages reported on node 1 00:26:34.792 Initializing NVMe Controllers 00:26:34.792 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:34.792 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:34.792 Initialization complete. Launching workers. 00:26:34.792 ======================================================== 00:26:34.792 Latency(us) 00:26:34.792 Device Information : IOPS MiB/s Average min max 00:26:34.792 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3982.00 497.75 8040.41 3909.58 16030.77 00:26:34.792 ======================================================== 00:26:34.792 Total : 3982.00 497.75 8040.41 3909.58 16030.77 00:26:34.792 00:26:34.792 11:50:02 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:34.792 11:50:02 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:34.792 11:50:02 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:34.792 EAL: No free 2048 kB hugepages reported on node 1 00:26:46.973 Initializing NVMe Controllers 00:26:46.973 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:46.973 Controller IO queue size 128, less than required. 00:26:46.973 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:46.973 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:46.973 Initialization complete. Launching workers. 00:26:46.973 ======================================================== 00:26:46.973 Latency(us) 00:26:46.973 Device Information : IOPS MiB/s Average min max 00:26:46.973 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19487.33 9.52 6570.78 1860.36 15829.55 00:26:46.973 ======================================================== 00:26:46.973 Total : 19487.33 9.52 6570.78 1860.36 15829.55 00:26:46.973 00:26:46.973 11:50:14 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:46.973 11:50:14 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:46.973 EAL: No free 2048 kB hugepages reported on node 1 00:26:56.944 Initializing NVMe Controllers 00:26:56.944 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:56.944 Controller IO queue size 128, less than required. 00:26:56.944 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:26:56.944 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:56.944 Initialization complete. Launching workers. 00:26:56.944 ======================================================== 00:26:56.944 Latency(us) 00:26:56.944 Device Information : IOPS MiB/s Average min max 00:26:56.944 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11290.80 1411.35 11338.58 3317.48 24775.35 00:26:56.944 ======================================================== 00:26:56.944 Total : 11290.80 1411.35 11338.58 3317.48 24775.35 00:26:56.944 00:26:56.944 11:50:25 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:56.944 11:50:25 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c476214b-7374-4d6e-a26d-82277b5c8600 00:26:57.201 11:50:26 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:26:57.201 11:50:26 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 087e8a89-8a49-4753-93ce-d14487d1787a 00:26:57.458 11:50:26 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:26:57.722 11:50:27 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:26:57.722 11:50:27 -- host/perf.sh@114 -- # nvmftestfini 00:26:57.722 11:50:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:57.722 11:50:27 -- nvmf/common.sh@116 -- # sync 00:26:57.722 11:50:27 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:26:57.722 11:50:27 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:26:57.722 11:50:27 -- nvmf/common.sh@119 -- # set +e 00:26:57.722 11:50:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:57.722 11:50:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:26:57.722 rmmod nvme_rdma 00:26:57.722 rmmod nvme_fabrics 00:26:57.722 11:50:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:57.722 11:50:27 -- nvmf/common.sh@123 -- # set -e 00:26:57.722 11:50:27 -- nvmf/common.sh@124 -- # return 0 00:26:57.722 11:50:27 -- nvmf/common.sh@477 -- # '[' -n 2470595 ']' 00:26:57.722 11:50:27 -- nvmf/common.sh@478 -- # killprocess 2470595 00:26:57.722 11:50:27 -- common/autotest_common.sh@926 -- # '[' -z 2470595 ']' 00:26:57.722 11:50:27 -- common/autotest_common.sh@930 -- # kill -0 2470595 00:26:57.722 11:50:27 -- common/autotest_common.sh@931 -- # uname 00:26:57.722 11:50:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:57.722 11:50:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2470595 00:26:57.722 11:50:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:57.722 11:50:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:57.722 11:50:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2470595' 00:26:57.722 killing process with pid 2470595 00:26:57.722 11:50:27 -- common/autotest_common.sh@945 -- # kill 2470595 00:26:57.722 11:50:27 -- common/autotest_common.sh@950 -- # wait 2470595 00:27:00.243 11:50:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:00.500 11:50:29 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:27:00.500 00:27:00.500 real 1m52.944s 00:27:00.500 user 7m1.835s 00:27:00.500 sys 0m8.188s 00:27:00.500 11:50:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:00.500 11:50:29 -- 
common/autotest_common.sh@10 -- # set +x 00:27:00.500 ************************************ 00:27:00.500 END TEST nvmf_perf 00:27:00.500 ************************************ 00:27:00.500 11:50:29 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:27:00.500 11:50:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:00.500 11:50:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:00.500 11:50:29 -- common/autotest_common.sh@10 -- # set +x 00:27:00.500 ************************************ 00:27:00.500 START TEST nvmf_fio_host 00:27:00.500 ************************************ 00:27:00.500 11:50:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:27:00.500 * Looking for test storage... 00:27:00.500 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:00.500 11:50:29 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:00.500 11:50:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:00.500 11:50:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:00.500 11:50:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:00.500 11:50:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.500 11:50:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.500 11:50:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.500 11:50:29 -- paths/export.sh@5 -- # export PATH 00:27:00.500 11:50:29 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.500 11:50:29 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:00.500 11:50:29 -- nvmf/common.sh@7 -- # uname -s 00:27:00.500 11:50:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:00.500 11:50:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:00.500 11:50:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:00.500 11:50:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:00.500 11:50:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:00.500 11:50:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:00.500 11:50:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:00.500 11:50:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:00.500 11:50:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:00.500 11:50:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:00.500 11:50:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:27:00.500 11:50:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:27:00.500 11:50:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:00.500 11:50:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:00.500 11:50:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:00.500 11:50:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:00.500 11:50:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:00.500 11:50:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:00.500 11:50:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:00.500 11:50:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.500 11:50:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.500 
11:50:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.500 11:50:29 -- paths/export.sh@5 -- # export PATH 00:27:00.500 11:50:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.500 11:50:29 -- nvmf/common.sh@46 -- # : 0 00:27:00.500 11:50:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:00.500 11:50:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:00.500 11:50:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:00.500 11:50:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:00.500 11:50:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:00.500 11:50:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:00.500 11:50:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:00.500 11:50:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:00.500 11:50:29 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:27:00.501 11:50:29 -- host/fio.sh@14 -- # nvmftestinit 00:27:00.501 11:50:29 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:27:00.501 11:50:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:00.501 11:50:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:00.501 11:50:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:00.501 11:50:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:00.501 11:50:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:00.501 11:50:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:00.501 11:50:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:00.501 11:50:29 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:00.501 11:50:29 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:00.501 11:50:29 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:00.501 11:50:29 -- common/autotest_common.sh@10 -- # set +x 00:27:08.602 11:50:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:08.602 11:50:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:08.602 11:50:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:08.602 11:50:38 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:08.602 11:50:38 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:08.602 11:50:38 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:08.602 11:50:38 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:08.602 11:50:38 -- 
nvmf/common.sh@294 -- # net_devs=() 00:27:08.602 11:50:38 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:08.602 11:50:38 -- nvmf/common.sh@295 -- # e810=() 00:27:08.602 11:50:38 -- nvmf/common.sh@295 -- # local -ga e810 00:27:08.602 11:50:38 -- nvmf/common.sh@296 -- # x722=() 00:27:08.602 11:50:38 -- nvmf/common.sh@296 -- # local -ga x722 00:27:08.602 11:50:38 -- nvmf/common.sh@297 -- # mlx=() 00:27:08.602 11:50:38 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:08.602 11:50:38 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:08.602 11:50:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:08.602 11:50:38 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:08.602 11:50:38 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:08.602 11:50:38 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:08.602 11:50:38 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:08.602 11:50:38 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:08.602 11:50:38 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:08.602 11:50:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:08.602 11:50:38 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:08.602 11:50:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:08.602 11:50:38 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:08.602 11:50:38 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:27:08.602 11:50:38 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:27:08.602 11:50:38 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:27:08.602 11:50:38 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:27:08.602 11:50:38 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:27:08.602 11:50:38 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:08.602 11:50:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:08.602 11:50:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:27:08.602 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:27:08.602 11:50:38 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:27:08.602 11:50:38 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:27:08.602 11:50:38 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:08.602 11:50:38 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:08.602 11:50:38 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:27:08.602 11:50:38 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:27:08.602 11:50:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:08.602 11:50:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:27:08.602 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:27:08.602 11:50:38 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:27:08.602 11:50:38 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:27:08.602 11:50:38 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:08.602 11:50:38 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:08.602 11:50:38 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:27:08.602 11:50:38 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:27:08.602 11:50:38 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:08.602 11:50:38 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:27:08.602 11:50:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:08.602 
11:50:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:08.602 11:50:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:08.602 11:50:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:08.602 11:50:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:27:08.602 Found net devices under 0000:d9:00.0: mlx_0_0 00:27:08.602 11:50:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:08.602 11:50:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:08.602 11:50:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:08.602 11:50:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:08.602 11:50:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:08.602 11:50:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:27:08.602 Found net devices under 0000:d9:00.1: mlx_0_1 00:27:08.602 11:50:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:08.602 11:50:38 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:08.602 11:50:38 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:08.602 11:50:38 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:08.602 11:50:38 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:27:08.602 11:50:38 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:27:08.602 11:50:38 -- nvmf/common.sh@408 -- # rdma_device_init 00:27:08.602 11:50:38 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:27:08.602 11:50:38 -- nvmf/common.sh@57 -- # uname 00:27:08.602 11:50:38 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:27:08.602 11:50:38 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:27:08.860 11:50:38 -- nvmf/common.sh@62 -- # modprobe ib_core 00:27:08.860 11:50:38 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:27:08.860 11:50:38 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:27:08.860 11:50:38 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:27:08.860 11:50:38 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:27:08.860 11:50:38 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:27:08.860 11:50:38 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:27:08.860 11:50:38 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:08.860 11:50:38 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:27:08.860 11:50:38 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:08.860 11:50:38 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:27:08.860 11:50:38 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:27:08.860 11:50:38 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:08.860 11:50:38 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:27:08.860 11:50:38 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:08.860 11:50:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:08.860 11:50:38 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:08.860 11:50:38 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:27:08.860 11:50:38 -- nvmf/common.sh@104 -- # continue 2 00:27:08.860 11:50:38 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:08.860 11:50:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:08.860 11:50:38 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:08.860 11:50:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:08.860 11:50:38 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:08.860 11:50:38 -- nvmf/common.sh@103 -- # 
echo mlx_0_1 00:27:08.860 11:50:38 -- nvmf/common.sh@104 -- # continue 2 00:27:08.860 11:50:38 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:27:08.860 11:50:38 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:27:08.860 11:50:38 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:27:08.860 11:50:38 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:27:08.860 11:50:38 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:08.860 11:50:38 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:08.860 11:50:38 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:27:08.860 11:50:38 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:27:08.860 11:50:38 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:27:08.860 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:08.860 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:27:08.860 altname enp217s0f0np0 00:27:08.860 altname ens818f0np0 00:27:08.860 inet 192.168.100.8/24 scope global mlx_0_0 00:27:08.860 valid_lft forever preferred_lft forever 00:27:08.860 11:50:38 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:27:08.860 11:50:38 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:27:08.860 11:50:38 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:27:08.860 11:50:38 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:27:08.860 11:50:38 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:08.860 11:50:38 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:08.860 11:50:38 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:27:08.860 11:50:38 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:27:08.860 11:50:38 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:27:08.860 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:08.860 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:27:08.860 altname enp217s0f1np1 00:27:08.860 altname ens818f1np1 00:27:08.860 inet 192.168.100.9/24 scope global mlx_0_1 00:27:08.860 valid_lft forever preferred_lft forever 00:27:08.860 11:50:38 -- nvmf/common.sh@410 -- # return 0 00:27:08.860 11:50:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:08.860 11:50:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:08.860 11:50:38 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:27:08.860 11:50:38 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:27:08.860 11:50:38 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:27:08.860 11:50:38 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:08.860 11:50:38 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:27:08.860 11:50:38 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:27:08.860 11:50:38 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:08.860 11:50:38 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:27:08.860 11:50:38 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:08.860 11:50:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:08.860 11:50:38 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:08.860 11:50:38 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:27:08.860 11:50:38 -- nvmf/common.sh@104 -- # continue 2 00:27:08.860 11:50:38 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:08.860 11:50:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:08.861 11:50:38 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:08.861 11:50:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:08.861 11:50:38 -- nvmf/common.sh@102 -- 
# [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:08.861 11:50:38 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:27:08.861 11:50:38 -- nvmf/common.sh@104 -- # continue 2 00:27:08.861 11:50:38 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:27:08.861 11:50:38 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:27:08.861 11:50:38 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:27:08.861 11:50:38 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:27:08.861 11:50:38 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:08.861 11:50:38 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:08.861 11:50:38 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:27:08.861 11:50:38 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:27:08.861 11:50:38 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:27:08.861 11:50:38 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:27:08.861 11:50:38 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:08.861 11:50:38 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:08.861 11:50:38 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:27:08.861 192.168.100.9' 00:27:08.861 11:50:38 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:27:08.861 192.168.100.9' 00:27:08.861 11:50:38 -- nvmf/common.sh@445 -- # head -n 1 00:27:08.861 11:50:38 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:08.861 11:50:38 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:27:08.861 192.168.100.9' 00:27:08.861 11:50:38 -- nvmf/common.sh@446 -- # tail -n +2 00:27:08.861 11:50:38 -- nvmf/common.sh@446 -- # head -n 1 00:27:08.861 11:50:38 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:08.861 11:50:38 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:27:08.861 11:50:38 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:08.861 11:50:38 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:27:08.861 11:50:38 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:27:08.861 11:50:38 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:27:08.861 11:50:38 -- host/fio.sh@16 -- # [[ y != y ]] 00:27:08.861 11:50:38 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:27:08.861 11:50:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:08.861 11:50:38 -- common/autotest_common.sh@10 -- # set +x 00:27:08.861 11:50:38 -- host/fio.sh@24 -- # nvmfpid=2492098 00:27:08.861 11:50:38 -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:08.861 11:50:38 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:08.861 11:50:38 -- host/fio.sh@28 -- # waitforlisten 2492098 00:27:08.861 11:50:38 -- common/autotest_common.sh@819 -- # '[' -z 2492098 ']' 00:27:08.861 11:50:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:08.861 11:50:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:08.861 11:50:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:08.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:08.861 11:50:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:08.861 11:50:38 -- common/autotest_common.sh@10 -- # set +x 00:27:09.118 [2024-07-21 11:50:38.313050] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
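The waitforlisten call above blocks until the freshly started nvmf_tgt (pid 2492098) answers on /var/tmp/spdk.sock. A stripped-down sketch of that wait loop, assuming $rootdir points at the spdk checkout and that the real helper in autotest_common.sh performs the same two checks (process alive, RPC socket responsive) plus extra error handling:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 100; i > 0; i--)); do          # max_retries=100, as in the trace
            # kill -0 sends no signal; it only tests that the pid still exists
            kill -0 "$pid" 2>/dev/null || return 1
            # the target is up once its RPC socket answers a trivial request
            "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }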
00:27:09.118 [2024-07-21 11:50:38.313102] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:09.118 EAL: No free 2048 kB hugepages reported on node 1 00:27:09.118 [2024-07-21 11:50:38.400184] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:09.118 [2024-07-21 11:50:38.438208] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:09.118 [2024-07-21 11:50:38.438339] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:09.118 [2024-07-21 11:50:38.438350] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:09.118 [2024-07-21 11:50:38.438359] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:09.118 [2024-07-21 11:50:38.438407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:09.118 [2024-07-21 11:50:38.438498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:09.118 [2024-07-21 11:50:38.438526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:09.118 [2024-07-21 11:50:38.438528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.047 11:50:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:10.047 11:50:39 -- common/autotest_common.sh@852 -- # return 0 00:27:10.047 11:50:39 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:10.047 [2024-07-21 11:50:39.278500] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xca54b0/0xca99a0) succeed. 00:27:10.047 [2024-07-21 11:50:39.289046] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xca6aa0/0xceb030) succeed. 
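With both mlx5 IB devices registered, the host/fio.sh steps that follow stand up the target entirely over RPC. Condensed from the trace (rpc.py talks to /var/tmp/spdk.sock by default):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1       # 64 MiB RAM disk, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420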
00:27:10.047 11:50:39 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:27:10.047 11:50:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:10.047 11:50:39 -- common/autotest_common.sh@10 -- # set +x 00:27:10.303 11:50:39 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:27:10.303 Malloc1 00:27:10.303 11:50:39 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:10.558 11:50:39 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:10.813 11:50:40 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:10.813 [2024-07-21 11:50:40.194926] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:10.813 11:50:40 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:27:11.069 11:50:40 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:27:11.069 11:50:40 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:11.069 11:50:40 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:11.069 11:50:40 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:11.069 11:50:40 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:11.069 11:50:40 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:11.069 11:50:40 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:11.069 11:50:40 -- common/autotest_common.sh@1320 -- # shift 00:27:11.069 11:50:40 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:11.069 11:50:40 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:11.069 11:50:40 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:11.069 11:50:40 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:11.069 11:50:40 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:11.069 11:50:40 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:11.069 11:50:40 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:11.069 11:50:40 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:11.069 11:50:40 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:11.069 11:50:40 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:11.069 11:50:40 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:11.069 11:50:40 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:11.069 11:50:40 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:11.069 11:50:40 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:11.069 11:50:40 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:11.324 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:11.324 fio-3.35 00:27:11.324 Starting 1 thread 00:27:11.579 EAL: No free 2048 kB hugepages reported on node 1 00:27:14.093 00:27:14.093 test: (groupid=0, jobs=1): err= 0: pid=2492781: Sun Jul 21 11:50:43 2024 00:27:14.093 read: IOPS=18.8k, BW=73.6MiB/s (77.2MB/s)(147MiB/2003msec) 00:27:14.093 slat (nsec): min=1338, max=23612, avg=1528.86, stdev=586.85 00:27:14.093 clat (usec): min=1564, max=6075, avg=3373.21, stdev=71.70 00:27:14.093 lat (usec): min=1580, max=6076, avg=3374.74, stdev=71.63 00:27:14.093 clat percentiles (usec): 00:27:14.093 | 1.00th=[ 3326], 5.00th=[ 3359], 10.00th=[ 3359], 20.00th=[ 3359], 00:27:14.093 | 30.00th=[ 3359], 40.00th=[ 3359], 50.00th=[ 3359], 60.00th=[ 3392], 00:27:14.093 | 70.00th=[ 3392], 80.00th=[ 3392], 90.00th=[ 3392], 95.00th=[ 3392], 00:27:14.093 | 99.00th=[ 3425], 99.50th=[ 3425], 99.90th=[ 4293], 99.95th=[ 5145], 00:27:14.093 | 99.99th=[ 6063] 00:27:14.093 bw ( KiB/s): min=73768, max=75992, per=99.99%, avg=75348.00, stdev=1057.32, samples=4 00:27:14.093 iops : min=18442, max=18998, avg=18837.00, stdev=264.33, samples=4 00:27:14.093 write: IOPS=18.9k, BW=73.6MiB/s (77.2MB/s)(147MiB/2003msec); 0 zone resets 00:27:14.093 slat (nsec): min=1396, max=17605, avg=1617.72, stdev=602.61 00:27:14.093 clat (usec): min=2321, max=6063, avg=3372.02, stdev=76.37 00:27:14.093 lat (usec): min=2331, max=6064, avg=3373.64, stdev=76.30 00:27:14.093 clat percentiles (usec): 00:27:14.093 | 1.00th=[ 3326], 5.00th=[ 3359], 10.00th=[ 3359], 20.00th=[ 3359], 00:27:14.093 | 30.00th=[ 3359], 40.00th=[ 3359], 50.00th=[ 3359], 60.00th=[ 3359], 00:27:14.093 | 70.00th=[ 3392], 80.00th=[ 3392], 90.00th=[ 3392], 95.00th=[ 3392], 00:27:14.093 | 99.00th=[ 3425], 99.50th=[ 3425], 99.90th=[ 4359], 99.95th=[ 5604], 00:27:14.093 | 99.99th=[ 5997] 00:27:14.093 bw ( KiB/s): min=73744, max=75976, per=99.98%, avg=75388.00, stdev=1096.60, samples=4 00:27:14.093 iops : min=18436, max=18994, avg=18847.00, stdev=274.15, samples=4 00:27:14.093 lat (msec) : 2=0.01%, 4=99.89%, 10=0.11% 00:27:14.093 cpu : usr=99.50%, sys=0.10%, ctx=17, majf=0, minf=2 00:27:14.093 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:27:14.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.093 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:14.093 issued rwts: total=37736,37759,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.093 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:14.093 00:27:14.093 Run status group 0 (all jobs): 00:27:14.093 READ: bw=73.6MiB/s (77.2MB/s), 73.6MiB/s-73.6MiB/s (77.2MB/s-77.2MB/s), io=147MiB (155MB), run=2003-2003msec 00:27:14.093 WRITE: bw=73.6MiB/s (77.2MB/s), 73.6MiB/s-73.6MiB/s (77.2MB/s-77.2MB/s), io=147MiB (155MB), run=2003-2003msec 00:27:14.093 11:50:43 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:27:14.093 11:50:43 -- common/autotest_common.sh@1339 -- # fio_plugin 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:27:14.093 11:50:43 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:14.093 11:50:43 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:14.093 11:50:43 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:14.093 11:50:43 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:14.093 11:50:43 -- common/autotest_common.sh@1320 -- # shift 00:27:14.093 11:50:43 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:14.093 11:50:43 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:14.093 11:50:43 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:14.093 11:50:43 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:14.093 11:50:43 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:14.093 11:50:43 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:14.093 11:50:43 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:14.093 11:50:43 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:14.093 11:50:43 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:14.093 11:50:43 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:14.093 11:50:43 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:14.093 11:50:43 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:14.093 11:50:43 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:14.093 11:50:43 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:14.093 11:50:43 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:27:14.093 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:27:14.093 fio-3.35 00:27:14.093 Starting 1 thread 00:27:14.093 EAL: No free 2048 kB hugepages reported on node 1 00:27:16.610 00:27:16.610 test: (groupid=0, jobs=1): err= 0: pid=2493443: Sun Jul 21 11:50:45 2024 00:27:16.610 read: IOPS=15.0k, BW=234MiB/s (245MB/s)(463MiB/1979msec) 00:27:16.610 slat (nsec): min=2225, max=36198, avg=2561.74, stdev=950.05 00:27:16.610 clat (usec): min=462, max=8038, avg=1596.67, stdev=1283.40 00:27:16.610 lat (usec): min=464, max=8053, avg=1599.23, stdev=1283.74 00:27:16.610 clat percentiles (usec): 00:27:16.610 | 1.00th=[ 652], 5.00th=[ 750], 10.00th=[ 807], 20.00th=[ 881], 00:27:16.610 | 30.00th=[ 955], 40.00th=[ 1037], 50.00th=[ 1139], 60.00th=[ 1254], 00:27:16.610 | 70.00th=[ 1385], 80.00th=[ 1582], 90.00th=[ 4555], 95.00th=[ 4752], 00:27:16.610 | 99.00th=[ 6194], 99.50th=[ 6652], 99.90th=[ 7111], 99.95th=[ 7308], 00:27:16.610 | 99.99th=[ 8029] 00:27:16.610 bw ( KiB/s): min=103424, max=123360, per=48.75%, avg=116672.00, stdev=9093.29, samples=4 00:27:16.610 iops : min= 6464, max= 7710, avg=7292.00, stdev=568.33, samples=4 00:27:16.610 write: IOPS=8488, BW=133MiB/s (139MB/s)(237MiB/1784msec); 0 zone resets 00:27:16.610 slat (nsec): min=26358, max=96832, 
avg=28872.43, stdev=5328.42 00:27:16.610 clat (usec): min=3862, max=19088, avg=12084.31, stdev=1755.75 00:27:16.610 lat (usec): min=3890, max=19117, avg=12113.18, stdev=1755.53 00:27:16.610 clat percentiles (usec): 00:27:16.610 | 1.00th=[ 5866], 5.00th=[ 9503], 10.00th=[10159], 20.00th=[10814], 00:27:16.610 | 30.00th=[11338], 40.00th=[11731], 50.00th=[12125], 60.00th=[12518], 00:27:16.610 | 70.00th=[12911], 80.00th=[13435], 90.00th=[14091], 95.00th=[14877], 00:27:16.610 | 99.00th=[16188], 99.50th=[16581], 99.90th=[18482], 99.95th=[18744], 00:27:16.610 | 99.99th=[19006] 00:27:16.610 bw ( KiB/s): min=106048, max=128352, per=88.49%, avg=120184.00, stdev=10136.39, samples=4 00:27:16.610 iops : min= 6628, max= 8022, avg=7511.50, stdev=633.52, samples=4 00:27:16.610 lat (usec) : 500=0.01%, 750=3.33%, 1000=20.69% 00:27:16.610 lat (msec) : 2=32.66%, 4=2.43%, 10=9.88%, 20=31.00% 00:27:16.610 cpu : usr=96.61%, sys=1.45%, ctx=226, majf=0, minf=1 00:27:16.610 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:27:16.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:16.610 issued rwts: total=29602,15143,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.610 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:16.610 00:27:16.610 Run status group 0 (all jobs): 00:27:16.610 READ: bw=234MiB/s (245MB/s), 234MiB/s-234MiB/s (245MB/s-245MB/s), io=463MiB (485MB), run=1979-1979msec 00:27:16.610 WRITE: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=237MiB (248MB), run=1784-1784msec 00:27:16.610 11:50:45 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:16.869 11:50:46 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:27:16.869 11:50:46 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:27:16.869 11:50:46 -- host/fio.sh@51 -- # get_nvme_bdfs 00:27:16.869 11:50:46 -- common/autotest_common.sh@1498 -- # bdfs=() 00:27:16.869 11:50:46 -- common/autotest_common.sh@1498 -- # local bdfs 00:27:16.869 11:50:46 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:16.869 11:50:46 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:16.869 11:50:46 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:27:16.869 11:50:46 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:27:16.869 11:50:46 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:27:16.869 11:50:46 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 -i 192.168.100.8 00:27:20.140 Nvme0n1 00:27:20.140 11:50:49 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:27:25.386 11:50:54 -- host/fio.sh@53 -- # ls_guid=e03081de-35db-425a-8722-7a137ee78a7c 00:27:25.386 11:50:54 -- host/fio.sh@54 -- # get_lvs_free_mb e03081de-35db-425a-8722-7a137ee78a7c 00:27:25.386 11:50:54 -- common/autotest_common.sh@1343 -- # local lvs_uuid=e03081de-35db-425a-8722-7a137ee78a7c 00:27:25.386 11:50:54 -- common/autotest_common.sh@1344 -- # local lvs_info 00:27:25.386 11:50:54 -- common/autotest_common.sh@1345 -- # local fc 00:27:25.386 11:50:54 -- common/autotest_common.sh@1346 -- # local cs 00:27:25.386 11:50:54 
-- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:25.644 11:50:54 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:27:25.644 { 00:27:25.644 "uuid": "e03081de-35db-425a-8722-7a137ee78a7c", 00:27:25.644 "name": "lvs_0", 00:27:25.644 "base_bdev": "Nvme0n1", 00:27:25.644 "total_data_clusters": 1862, 00:27:25.644 "free_clusters": 1862, 00:27:25.644 "block_size": 512, 00:27:25.644 "cluster_size": 1073741824 00:27:25.644 } 00:27:25.644 ]' 00:27:25.644 11:50:54 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="e03081de-35db-425a-8722-7a137ee78a7c") .free_clusters' 00:27:25.644 11:50:54 -- common/autotest_common.sh@1348 -- # fc=1862 00:27:25.644 11:50:54 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="e03081de-35db-425a-8722-7a137ee78a7c") .cluster_size' 00:27:25.644 11:50:54 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:27:25.644 11:50:54 -- common/autotest_common.sh@1352 -- # free_mb=1906688 00:27:25.644 11:50:54 -- common/autotest_common.sh@1353 -- # echo 1906688 00:27:25.644 1906688 00:27:25.644 11:50:54 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1906688 00:27:26.225 2a34ac6e-1ed0-4565-8917-3d10592febc9 00:27:26.225 11:50:55 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:27:26.225 11:50:55 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:27:26.482 11:50:55 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:27:26.740 11:50:55 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:26.740 11:50:55 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:26.740 11:50:55 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:26.740 11:50:55 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:26.740 11:50:55 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:26.740 11:50:55 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:26.740 11:50:55 -- common/autotest_common.sh@1320 -- # shift 00:27:26.740 11:50:55 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:26.740 11:50:55 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:26.740 11:50:56 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:26.740 11:50:56 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:26.740 11:50:56 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:26.740 11:50:56 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:26.740 11:50:56 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:26.740 11:50:56 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 
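The 1906688 figure printed by get_lvs_free_mb earlier in this trace is plain cluster accounting converted to MiB, using the values reported by bdev_lvol_get_lvstores:

    fc=1862                               # free_clusters in lvs_0
    cs=1073741824                         # cluster_size: 1 GiB
    echo $(( fc * cs / 1024 / 1024 ))     # 1906688 MiB, the size handed to bdev_lvol_create

The nested store later in the run follows the same rule: 476206 free clusters of 4194304 bytes give 476206 * 4 = 1904824 MiB.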
00:27:26.740 11:50:56 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:26.740 11:50:56 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:26.740 11:50:56 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:26.740 11:50:56 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:26.740 11:50:56 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:26.740 11:50:56 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:26.740 11:50:56 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:26.997 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:26.997 fio-3.35 00:27:26.997 Starting 1 thread 00:27:26.997 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.529 00:27:29.529 test: (groupid=0, jobs=1): err= 0: pid=2495760: Sun Jul 21 11:50:58 2024 00:27:29.529 read: IOPS=10.3k, BW=40.2MiB/s (42.2MB/s)(80.6MiB/2005msec) 00:27:29.529 slat (nsec): min=1335, max=17190, avg=1434.48, stdev=291.48 00:27:29.529 clat (usec): min=200, max=339529, avg=6170.12, stdev=18702.21 00:27:29.529 lat (usec): min=202, max=339532, avg=6171.56, stdev=18702.24 00:27:29.529 clat percentiles (msec): 00:27:29.529 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:27:29.529 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:27:29.529 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:27:29.529 | 99.00th=[ 6], 99.50th=[ 7], 99.90th=[ 338], 99.95th=[ 338], 00:27:29.529 | 99.99th=[ 338] 00:27:29.529 bw ( KiB/s): min=14920, max=50304, per=99.99%, avg=41174.00, stdev=17505.36, samples=4 00:27:29.529 iops : min= 3730, max=12576, avg=10293.50, stdev=4376.34, samples=4 00:27:29.529 write: IOPS=10.3k, BW=40.3MiB/s (42.2MB/s)(80.7MiB/2005msec); 0 zone resets 00:27:29.529 slat (nsec): min=1386, max=17259, avg=1548.82, stdev=274.44 00:27:29.529 clat (usec): min=168, max=339828, avg=6141.04, stdev=18174.88 00:27:29.529 lat (usec): min=169, max=339831, avg=6142.59, stdev=18174.94 00:27:29.529 clat percentiles (msec): 00:27:29.529 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:27:29.529 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:27:29.529 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:27:29.529 | 99.00th=[ 6], 99.50th=[ 7], 99.90th=[ 338], 99.95th=[ 342], 00:27:29.529 | 99.99th=[ 342] 00:27:29.529 bw ( KiB/s): min=15592, max=49968, per=99.90%, avg=41190.00, stdev=17066.30, samples=4 00:27:29.529 iops : min= 3898, max=12492, avg=10297.50, stdev=4266.57, samples=4 00:27:29.529 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:27:29.529 lat (msec) : 2=0.04%, 4=0.25%, 10=99.35%, 500=0.31% 00:27:29.529 cpu : usr=99.55%, sys=0.15%, ctx=16, majf=0, minf=11 00:27:29.529 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:29.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:29.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:29.529 issued rwts: total=20641,20668,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:29.529 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:29.529 00:27:29.529 Run status group 0 (all jobs): 00:27:29.529 READ: bw=40.2MiB/s (42.2MB/s), 
40.2MiB/s-40.2MiB/s (42.2MB/s-42.2MB/s), io=80.6MiB (84.5MB), run=2005-2005msec 00:27:29.529 WRITE: bw=40.3MiB/s (42.2MB/s), 40.3MiB/s-40.3MiB/s (42.2MB/s-42.2MB/s), io=80.7MiB (84.7MB), run=2005-2005msec 00:27:29.529 11:50:58 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:29.833 11:50:58 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:27:30.765 11:51:00 -- host/fio.sh@64 -- # ls_nested_guid=63dead56-62eb-48d0-8edb-bd0e8195ab0a 00:27:30.765 11:51:00 -- host/fio.sh@65 -- # get_lvs_free_mb 63dead56-62eb-48d0-8edb-bd0e8195ab0a 00:27:30.765 11:51:00 -- common/autotest_common.sh@1343 -- # local lvs_uuid=63dead56-62eb-48d0-8edb-bd0e8195ab0a 00:27:30.765 11:51:00 -- common/autotest_common.sh@1344 -- # local lvs_info 00:27:30.765 11:51:00 -- common/autotest_common.sh@1345 -- # local fc 00:27:30.765 11:51:00 -- common/autotest_common.sh@1346 -- # local cs 00:27:30.765 11:51:00 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:31.023 11:51:00 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:27:31.023 { 00:27:31.023 "uuid": "e03081de-35db-425a-8722-7a137ee78a7c", 00:27:31.023 "name": "lvs_0", 00:27:31.023 "base_bdev": "Nvme0n1", 00:27:31.023 "total_data_clusters": 1862, 00:27:31.023 "free_clusters": 0, 00:27:31.023 "block_size": 512, 00:27:31.023 "cluster_size": 1073741824 00:27:31.023 }, 00:27:31.023 { 00:27:31.023 "uuid": "63dead56-62eb-48d0-8edb-bd0e8195ab0a", 00:27:31.023 "name": "lvs_n_0", 00:27:31.023 "base_bdev": "2a34ac6e-1ed0-4565-8917-3d10592febc9", 00:27:31.023 "total_data_clusters": 476206, 00:27:31.023 "free_clusters": 476206, 00:27:31.023 "block_size": 512, 00:27:31.023 "cluster_size": 4194304 00:27:31.023 } 00:27:31.023 ]' 00:27:31.023 11:51:00 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="63dead56-62eb-48d0-8edb-bd0e8195ab0a") .free_clusters' 00:27:31.023 11:51:00 -- common/autotest_common.sh@1348 -- # fc=476206 00:27:31.023 11:51:00 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="63dead56-62eb-48d0-8edb-bd0e8195ab0a") .cluster_size' 00:27:31.280 11:51:00 -- common/autotest_common.sh@1349 -- # cs=4194304 00:27:31.280 11:51:00 -- common/autotest_common.sh@1352 -- # free_mb=1904824 00:27:31.280 11:51:00 -- common/autotest_common.sh@1353 -- # echo 1904824 00:27:31.280 1904824 00:27:31.280 11:51:00 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1904824 00:27:31.843 d532c0ee-ce28-455e-894f-1eba31cb35ad 00:27:32.101 11:51:01 -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:27:32.101 11:51:01 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:27:32.359 11:51:01 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:27:32.617 11:51:01 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:32.617 11:51:01 -- 
common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:32.617 11:51:01 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:32.617 11:51:01 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:32.617 11:51:01 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:32.617 11:51:01 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:32.617 11:51:01 -- common/autotest_common.sh@1320 -- # shift 00:27:32.617 11:51:01 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:32.617 11:51:01 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:32.617 11:51:01 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:32.617 11:51:01 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:32.617 11:51:01 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:32.617 11:51:01 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:32.617 11:51:01 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:32.617 11:51:01 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:32.617 11:51:01 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:32.617 11:51:01 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:32.617 11:51:01 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:32.617 11:51:01 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:32.617 11:51:01 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:32.617 11:51:01 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:32.617 11:51:01 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:32.875 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:32.875 fio-3.35 00:27:32.875 Starting 1 thread 00:27:32.875 EAL: No free 2048 kB hugepages reported on node 1 00:27:35.401 00:27:35.401 test: (groupid=0, jobs=1): err= 0: pid=2496860: Sun Jul 21 11:51:04 2024 00:27:35.401 read: IOPS=10.6k, BW=41.4MiB/s (43.4MB/s)(83.0MiB/2006msec) 00:27:35.401 slat (nsec): min=1345, max=18550, avg=1506.97, stdev=364.02 00:27:35.401 clat (usec): min=3187, max=10310, avg=5972.02, stdev=185.61 00:27:35.401 lat (usec): min=3190, max=10312, avg=5973.52, stdev=185.58 00:27:35.401 clat percentiles (usec): 00:27:35.401 | 1.00th=[ 5407], 5.00th=[ 5932], 10.00th=[ 5932], 20.00th=[ 5932], 00:27:35.401 | 30.00th=[ 5932], 40.00th=[ 5997], 50.00th=[ 5997], 60.00th=[ 5997], 00:27:35.401 | 70.00th=[ 5997], 80.00th=[ 5997], 90.00th=[ 5997], 95.00th=[ 5997], 00:27:35.401 | 99.00th=[ 6456], 99.50th=[ 6718], 99.90th=[ 8717], 99.95th=[ 9634], 00:27:35.401 | 99.99th=[10290] 00:27:35.401 bw ( KiB/s): min=40736, max=43048, per=99.99%, avg=42346.00, stdev=1095.06, samples=4 00:27:35.401 iops : min=10184, max=10762, avg=10586.50, stdev=273.77, samples=4 00:27:35.401 write: IOPS=10.6k, BW=41.3MiB/s (43.3MB/s)(82.9MiB/2006msec); 0 zone 
resets 00:27:35.401 slat (nsec): min=1400, max=17781, avg=1626.34, stdev=356.38 00:27:35.401 clat (usec): min=3195, max=10323, avg=5991.58, stdev=189.40 00:27:35.401 lat (usec): min=3200, max=10325, avg=5993.21, stdev=189.38 00:27:35.401 clat percentiles (usec): 00:27:35.401 | 1.00th=[ 5473], 5.00th=[ 5932], 10.00th=[ 5932], 20.00th=[ 5997], 00:27:35.402 | 30.00th=[ 5997], 40.00th=[ 5997], 50.00th=[ 5997], 60.00th=[ 5997], 00:27:35.402 | 70.00th=[ 5997], 80.00th=[ 5997], 90.00th=[ 6063], 95.00th=[ 6063], 00:27:35.402 | 99.00th=[ 6521], 99.50th=[ 6783], 99.90th=[ 8717], 99.95th=[10290], 00:27:35.402 | 99.99th=[10290] 00:27:35.402 bw ( KiB/s): min=41192, max=42936, per=100.00%, avg=42330.00, stdev=775.42, samples=4 00:27:35.402 iops : min=10298, max=10734, avg=10582.50, stdev=193.85, samples=4 00:27:35.402 lat (msec) : 4=0.05%, 10=99.91%, 20=0.04% 00:27:35.402 cpu : usr=99.60%, sys=0.05%, ctx=15, majf=0, minf=11 00:27:35.402 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:27:35.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:35.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:35.402 issued rwts: total=21238,21227,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:35.402 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:35.402 00:27:35.402 Run status group 0 (all jobs): 00:27:35.402 READ: bw=41.4MiB/s (43.4MB/s), 41.4MiB/s-41.4MiB/s (43.4MB/s-43.4MB/s), io=83.0MiB (87.0MB), run=2006-2006msec 00:27:35.402 WRITE: bw=41.3MiB/s (43.3MB/s), 41.3MiB/s-41.3MiB/s (43.3MB/s-43.3MB/s), io=82.9MiB (86.9MB), run=2006-2006msec 00:27:35.402 11:51:04 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:35.402 11:51:04 -- host/fio.sh@74 -- # sync 00:27:35.402 11:51:04 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:27:43.508 11:51:11 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:43.508 11:51:12 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:27:48.762 11:51:17 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:48.762 11:51:17 -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:27:52.067 11:51:21 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:52.067 11:51:21 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:27:52.067 11:51:21 -- host/fio.sh@86 -- # nvmftestfini 00:27:52.067 11:51:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:52.067 11:51:21 -- nvmf/common.sh@116 -- # sync 00:27:52.067 11:51:21 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:27:52.067 11:51:21 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:27:52.067 11:51:21 -- nvmf/common.sh@119 -- # set +e 00:27:52.067 11:51:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:52.067 11:51:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:27:52.067 rmmod nvme_rdma 00:27:52.067 rmmod nvme_fabrics 00:27:52.067 11:51:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:52.067 11:51:21 -- nvmf/common.sh@123 -- # set -e 00:27:52.067 11:51:21 -- nvmf/common.sh@124 -- # return 0 00:27:52.067 11:51:21 -- nvmf/common.sh@477 -- # '[' -n 2492098 ']' 00:27:52.067 11:51:21 -- 
nvmf/common.sh@478 -- # killprocess 2492098 00:27:52.067 11:51:21 -- common/autotest_common.sh@926 -- # '[' -z 2492098 ']' 00:27:52.067 11:51:21 -- common/autotest_common.sh@930 -- # kill -0 2492098 00:27:52.067 11:51:21 -- common/autotest_common.sh@931 -- # uname 00:27:52.067 11:51:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:52.067 11:51:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2492098 00:27:52.067 11:51:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:52.067 11:51:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:52.067 11:51:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2492098' 00:27:52.067 killing process with pid 2492098 00:27:52.067 11:51:21 -- common/autotest_common.sh@945 -- # kill 2492098 00:27:52.067 11:51:21 -- common/autotest_common.sh@950 -- # wait 2492098 00:27:52.067 11:51:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:52.067 11:51:21 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:27:52.067 00:27:52.067 real 0m51.702s 00:27:52.067 user 3m37.862s 00:27:52.067 sys 0m9.169s 00:27:52.067 11:51:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:52.067 11:51:21 -- common/autotest_common.sh@10 -- # set +x 00:27:52.067 ************************************ 00:27:52.067 END TEST nvmf_fio_host 00:27:52.067 ************************************ 00:27:52.067 11:51:21 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:27:52.067 11:51:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:52.067 11:51:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:52.067 11:51:21 -- common/autotest_common.sh@10 -- # set +x 00:27:52.067 ************************************ 00:27:52.067 START TEST nvmf_failover 00:27:52.067 ************************************ 00:27:52.067 11:51:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:27:52.325 * Looking for test storage... 
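killprocess above tears the target down once the fio passes are done: it verifies pid 2492098 still exists, inspects the process name (reactor_0, i.e. not a sudo wrapper), then kills and reaps it. A reduced sketch covering only the Linux, non-sudo branch exercised here; the real helper in autotest_common.sh handles the other cases too:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1              # nothing to do if the pid is gone
        echo "killing process with pid $pid"    # matches the message in the trace
        kill "$pid"                             # SIGTERM; the target shuts down cleanly
        wait "$pid" || true                     # reap it; works because this shell started it
    }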
00:27:52.325 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:52.325 11:51:21 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:52.325 11:51:21 -- nvmf/common.sh@7 -- # uname -s 00:27:52.325 11:51:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:52.325 11:51:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:52.325 11:51:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:52.325 11:51:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:52.325 11:51:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:52.325 11:51:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:52.325 11:51:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:52.325 11:51:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:52.325 11:51:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:52.325 11:51:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:52.325 11:51:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:27:52.325 11:51:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:27:52.325 11:51:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:52.325 11:51:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:52.325 11:51:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:52.325 11:51:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:52.325 11:51:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:52.325 11:51:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:52.325 11:51:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:52.325 11:51:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.325 11:51:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.325 11:51:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.325 11:51:21 -- paths/export.sh@5 -- # export PATH 00:27:52.325 11:51:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.325 11:51:21 -- nvmf/common.sh@46 -- # : 0 00:27:52.325 11:51:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:52.325 11:51:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:52.325 11:51:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:52.325 11:51:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:52.325 11:51:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:52.325 11:51:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:52.325 11:51:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:52.325 11:51:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:52.325 11:51:21 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:52.325 11:51:21 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:52.325 11:51:21 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:27:52.325 11:51:21 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:52.325 11:51:21 -- host/failover.sh@18 -- # nvmftestinit 00:27:52.325 11:51:21 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:27:52.325 11:51:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:52.325 11:51:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:52.325 11:51:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:52.325 11:51:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:52.325 11:51:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.325 11:51:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:52.325 11:51:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.325 11:51:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:52.325 11:51:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:52.325 11:51:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:52.325 11:51:21 -- common/autotest_common.sh@10 -- # set +x 00:28:00.457 11:51:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:00.457 11:51:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:00.457 11:51:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:00.457 11:51:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:00.457 11:51:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:00.457 11:51:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:00.457 11:51:29 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:28:00.457 11:51:29 -- nvmf/common.sh@294 -- # net_devs=() 00:28:00.457 11:51:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:00.457 11:51:29 -- nvmf/common.sh@295 -- # e810=() 00:28:00.457 11:51:29 -- nvmf/common.sh@295 -- # local -ga e810 00:28:00.457 11:51:29 -- nvmf/common.sh@296 -- # x722=() 00:28:00.457 11:51:29 -- nvmf/common.sh@296 -- # local -ga x722 00:28:00.457 11:51:29 -- nvmf/common.sh@297 -- # mlx=() 00:28:00.457 11:51:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:00.457 11:51:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:00.457 11:51:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:00.457 11:51:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:00.457 11:51:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:00.457 11:51:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:00.457 11:51:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:00.457 11:51:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:00.457 11:51:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:00.457 11:51:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:00.457 11:51:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:00.457 11:51:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:00.457 11:51:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:00.457 11:51:29 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:28:00.457 11:51:29 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:28:00.457 11:51:29 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:28:00.457 11:51:29 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:28:00.457 11:51:29 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:28:00.457 11:51:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:00.457 11:51:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:00.457 11:51:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:00.457 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:00.457 11:51:29 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:00.457 11:51:29 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:00.457 11:51:29 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:00.457 11:51:29 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:00.457 11:51:29 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:28:00.457 11:51:29 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:00.457 11:51:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:00.457 11:51:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:00.457 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:00.457 11:51:29 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:00.457 11:51:29 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:00.457 11:51:29 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:00.457 11:51:29 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:00.457 11:51:29 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:28:00.457 11:51:29 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:00.457 11:51:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:00.457 11:51:29 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:28:00.457 11:51:29 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:00.457 11:51:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.457 11:51:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:00.457 11:51:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.457 11:51:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:00.457 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:00.457 11:51:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.457 11:51:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:00.457 11:51:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.457 11:51:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:00.457 11:51:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.457 11:51:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:00.457 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:00.457 11:51:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.457 11:51:29 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:00.457 11:51:29 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:00.457 11:51:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:00.457 11:51:29 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:28:00.457 11:51:29 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:28:00.457 11:51:29 -- nvmf/common.sh@408 -- # rdma_device_init 00:28:00.457 11:51:29 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:28:00.457 11:51:29 -- nvmf/common.sh@57 -- # uname 00:28:00.457 11:51:29 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:28:00.457 11:51:29 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:28:00.457 11:51:29 -- nvmf/common.sh@62 -- # modprobe ib_core 00:28:00.457 11:51:29 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:28:00.457 11:51:29 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:28:00.457 11:51:29 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:28:00.457 11:51:29 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:28:00.457 11:51:29 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:28:00.457 11:51:29 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:28:00.457 11:51:29 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:00.457 11:51:29 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:28:00.457 11:51:29 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:00.457 11:51:29 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:00.457 11:51:29 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:00.457 11:51:29 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:00.457 11:51:29 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:00.457 11:51:29 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:00.457 11:51:29 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:00.457 11:51:29 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:00.457 11:51:29 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:00.457 11:51:29 -- nvmf/common.sh@104 -- # continue 2 00:28:00.457 11:51:29 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:00.457 11:51:29 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:00.457 11:51:29 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:00.457 11:51:29 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:00.457 11:51:29 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\1 ]] 00:28:00.457 11:51:29 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:00.457 11:51:29 -- nvmf/common.sh@104 -- # continue 2 00:28:00.457 11:51:29 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:00.458 11:51:29 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:28:00.458 11:51:29 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:00.458 11:51:29 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:00.458 11:51:29 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:00.458 11:51:29 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:00.458 11:51:29 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:28:00.458 11:51:29 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:28:00.458 11:51:29 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:28:00.458 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:00.458 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:00.458 altname enp217s0f0np0 00:28:00.458 altname ens818f0np0 00:28:00.458 inet 192.168.100.8/24 scope global mlx_0_0 00:28:00.458 valid_lft forever preferred_lft forever 00:28:00.458 11:51:29 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:00.458 11:51:29 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:28:00.458 11:51:29 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:00.458 11:51:29 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:00.458 11:51:29 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:00.458 11:51:29 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:00.458 11:51:29 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:28:00.458 11:51:29 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:28:00.458 11:51:29 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:28:00.458 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:00.458 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:00.458 altname enp217s0f1np1 00:28:00.458 altname ens818f1np1 00:28:00.458 inet 192.168.100.9/24 scope global mlx_0_1 00:28:00.458 valid_lft forever preferred_lft forever 00:28:00.458 11:51:29 -- nvmf/common.sh@410 -- # return 0 00:28:00.458 11:51:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:00.458 11:51:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:00.458 11:51:29 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:28:00.458 11:51:29 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:28:00.458 11:51:29 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:28:00.458 11:51:29 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:00.458 11:51:29 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:00.458 11:51:29 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:00.458 11:51:29 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:00.458 11:51:29 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:00.458 11:51:29 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:00.458 11:51:29 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:00.458 11:51:29 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:00.458 11:51:29 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:00.458 11:51:29 -- nvmf/common.sh@104 -- # continue 2 00:28:00.458 11:51:29 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:00.458 11:51:29 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:00.458 11:51:29 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:00.458 11:51:29 -- nvmf/common.sh@101 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:28:00.458 11:51:29 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:00.458 11:51:29 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:00.458 11:51:29 -- nvmf/common.sh@104 -- # continue 2 00:28:00.458 11:51:29 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:00.458 11:51:29 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:28:00.458 11:51:29 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:00.458 11:51:29 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:00.458 11:51:29 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:00.458 11:51:29 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:00.458 11:51:29 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:00.458 11:51:29 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:28:00.458 11:51:29 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:00.715 11:51:29 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:00.715 11:51:29 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:00.715 11:51:29 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:00.715 11:51:29 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:28:00.715 192.168.100.9' 00:28:00.715 11:51:29 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:28:00.715 192.168.100.9' 00:28:00.715 11:51:29 -- nvmf/common.sh@445 -- # head -n 1 00:28:00.715 11:51:29 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:00.715 11:51:29 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:28:00.715 192.168.100.9' 00:28:00.715 11:51:29 -- nvmf/common.sh@446 -- # tail -n +2 00:28:00.715 11:51:29 -- nvmf/common.sh@446 -- # head -n 1 00:28:00.715 11:51:29 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:00.715 11:51:29 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:28:00.715 11:51:29 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:00.715 11:51:29 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:28:00.715 11:51:29 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:28:00.715 11:51:29 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:28:00.715 11:51:29 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:28:00.715 11:51:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:00.715 11:51:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:00.715 11:51:29 -- common/autotest_common.sh@10 -- # set +x 00:28:00.715 11:51:29 -- nvmf/common.sh@469 -- # nvmfpid=2504549 00:28:00.715 11:51:29 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:00.715 11:51:29 -- nvmf/common.sh@470 -- # waitforlisten 2504549 00:28:00.715 11:51:29 -- common/autotest_common.sh@819 -- # '[' -z 2504549 ']' 00:28:00.715 11:51:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:00.715 11:51:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:00.715 11:51:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:00.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:00.715 11:51:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:00.715 11:51:29 -- common/autotest_common.sh@10 -- # set +x 00:28:00.715 [2024-07-21 11:51:29.993449] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:00.715 [2024-07-21 11:51:29.993503] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:00.715 EAL: No free 2048 kB hugepages reported on node 1 00:28:00.715 [2024-07-21 11:51:30.082101] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:00.715 [2024-07-21 11:51:30.120448] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:00.715 [2024-07-21 11:51:30.120560] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:00.715 [2024-07-21 11:51:30.120570] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:00.715 [2024-07-21 11:51:30.120579] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:00.715 [2024-07-21 11:51:30.120691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:00.715 [2024-07-21 11:51:30.120722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:00.715 [2024-07-21 11:51:30.120720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:01.647 11:51:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:01.648 11:51:30 -- common/autotest_common.sh@852 -- # return 0 00:28:01.648 11:51:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:01.648 11:51:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:01.648 11:51:30 -- common/autotest_common.sh@10 -- # set +x 00:28:01.648 11:51:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:01.648 11:51:30 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:01.648 [2024-07-21 11:51:31.021858] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d3dcd0/0x1d421c0) succeed. 00:28:01.648 [2024-07-21 11:51:31.031799] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d3f220/0x1d83850) succeed. 
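Condensed, the target bring-up just traced is: start nvmf_tgt with the flags from the log, wait for its RPC socket, then create the RDMA transport. A sketch with shortened paths; the polling loop is only a stand-in for the waitforlisten helper seen above:

  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!

  # waitforlisten retries RPCs against /var/tmp/spdk.sock with a timeout;
  # a bare loop approximates it here.
  until ./scripts/rpc.py rpc_get_methods &>/dev/null; do sleep 0.5; done

  # Same transport options the trace assembled into NVMF_TRANSPORT_OPTS.
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192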
00:28:01.905 11:51:31 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:02.163 Malloc0 00:28:02.163 11:51:31 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:02.163 11:51:31 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:02.422 11:51:31 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:02.680 [2024-07-21 11:51:31.859749] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:02.681 11:51:31 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:28:02.681 [2024-07-21 11:51:32.032142] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:28:02.681 11:51:32 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:28:02.961 [2024-07-21 11:51:32.208798] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:28:02.961 11:51:32 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:28:02.961 11:51:32 -- host/failover.sh@31 -- # bdevperf_pid=2504985 00:28:02.961 11:51:32 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:02.961 11:51:32 -- host/failover.sh@34 -- # waitforlisten 2504985 /var/tmp/bdevperf.sock 00:28:02.961 11:51:32 -- common/autotest_common.sh@819 -- # '[' -z 2504985 ']' 00:28:02.961 11:51:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:02.961 11:51:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:02.962 11:51:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:02.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
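Steps @23 through @30 above configure the target and launch the initiator-side app. Gathered into one sketch (the port loop is an editorial condensation of the three separate add_listener calls): a 64 MiB malloc bdev with 512-byte blocks, an allow-any-host subsystem, and three RDMA listeners on the same address so the initiator has spare paths to fail over between.

  rpc=./scripts/rpc.py
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
          -t rdma -a 192.168.100.8 -s "$port"
  done

  # bdevperf runs as a second SPDK app on its own RPC socket, leaving
  # /var/tmp/spdk.sock to the target; -z makes it wait for an RPC-triggered
  # run, and the remaining flags match the trace (queue depth 128, 4 KiB
  # I/O, verify workload, 15 seconds).
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 15 -f &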
00:28:02.962 11:51:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:02.962 11:51:32 -- common/autotest_common.sh@10 -- # set +x 00:28:03.895 11:51:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:03.895 11:51:33 -- common/autotest_common.sh@852 -- # return 0 00:28:03.895 11:51:33 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:04.153 NVMe0n1 00:28:04.153 11:51:33 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:04.153 00:28:04.421 11:51:33 -- host/failover.sh@39 -- # run_test_pid=2505258 00:28:04.421 11:51:33 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:04.421 11:51:33 -- host/failover.sh@41 -- # sleep 1 00:28:05.355 11:51:34 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:05.612 11:51:34 -- host/failover.sh@45 -- # sleep 3 00:28:08.896 11:51:37 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:08.896 00:28:08.896 11:51:38 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:28:08.896 11:51:38 -- host/failover.sh@50 -- # sleep 3 00:28:12.180 11:51:41 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:12.180 [2024-07-21 11:51:41.384249] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:12.180 11:51:41 -- host/failover.sh@55 -- # sleep 1 00:28:13.114 11:51:42 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:28:13.373 11:51:42 -- host/failover.sh@59 -- # wait 2505258 00:28:19.961 0 00:28:19.961 11:51:48 -- host/failover.sh@61 -- # killprocess 2504985 00:28:19.961 11:51:48 -- common/autotest_common.sh@926 -- # '[' -z 2504985 ']' 00:28:19.961 11:51:48 -- common/autotest_common.sh@930 -- # kill -0 2504985 00:28:19.961 11:51:48 -- common/autotest_common.sh@931 -- # uname 00:28:19.961 11:51:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:19.961 11:51:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2504985 00:28:19.961 11:51:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:19.961 11:51:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:19.961 11:51:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2504985' 00:28:19.961 killing process with pid 2504985 00:28:19.961 11:51:48 -- common/autotest_common.sh@945 -- # kill 2504985 00:28:19.961 11:51:48 -- common/autotest_common.sh@950 -- # wait 2504985 00:28:19.961 11:51:48 -- host/failover.sh@63 -- # cat 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:19.962 [2024-07-21 11:51:32.265161] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:19.962 [2024-07-21 11:51:32.265220] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2504985 ] 00:28:19.962 EAL: No free 2048 kB hugepages reported on node 1 00:28:19.962 [2024-07-21 11:51:32.353575] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.962 [2024-07-21 11:51:32.391216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.962 Running I/O for 15 seconds... 00:28:19.962 [2024-07-21 11:51:35.764650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:85576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x183f00 00:28:19.962 [2024-07-21 11:51:35.764694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.962 [2024-07-21 11:51:35.764715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:86240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ef480 len:0x1000 key:0x180000 00:28:19.962 [2024-07-21 11:51:35.764726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.962 [2024-07-21 11:51:35.764739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:86248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.962 [2024-07-21 11:51:35.764750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.962 [2024-07-21 11:51:35.764761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:86256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.962 [2024-07-21 11:51:35.764771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.962 [2024-07-21 11:51:35.764782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:86264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ec300 len:0x1000 key:0x180000 00:28:19.962 [2024-07-21 11:51:35.764792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.962 [2024-07-21 11:51:35.764804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:85608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x183f00 00:28:19.962 [2024-07-21 11:51:35.764814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.962 [2024-07-21 11:51:35.764826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:86272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ea200 len:0x1000 key:0x180000 00:28:19.962 [2024-07-21 11:51:35.764835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.962 [2024-07-21 11:51:35.764846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 
nsid:1 lba:86280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.962 [2024-07-21 11:51:35.764856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.962 [2024-07-21 11:51:35.764867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:86288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e8100 len:0x1000 key:0x180000 00:28:19.962 [2024-07-21 11:51:35.764877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.962 [2024-07-21 11:51:35.764888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:85632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x183f00 00:28:19.962 [2024-07-21 11:51:35.764898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.962 [2024-07-21 11:51:35.764915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.962 [2024-07-21 11:51:35.764924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.962 [2024-07-21 11:51:35.764936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.962 [2024-07-21 11:51:35.764945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.962 [2024-07-21 11:51:35.764957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:85656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x183f00 00:28:19.962 [2024-07-21 11:51:35.764967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.962 [2024-07-21 11:51:35.764978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:86312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.962 [2024-07-21 11:51:35.764987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.962 [2024-07-21 11:51:35.764999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:86320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e1e00 len:0x1000 key:0x180000 00:28:19.962 [2024-07-21 11:51:35.765008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.962 [2024-07-21 11:51:35.765019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:86328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.962 [2024-07-21 11:51:35.765029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.962 [2024-07-21 11:51:35.765040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:85680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x183f00 00:28:19.962 [2024-07-21 11:51:35.765050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.962 [2024-07-21 11:51:35.765061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:86336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dec80 len:0x1000 key:0x180000 00:28:19.962 [2024-07-21 11:51:35.765071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.962 [2024-07-21 11:51:35.765083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:85696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x183f00 00:28:19.962 [2024-07-21 11:51:35.765092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.962 [2024-07-21 11:51:35.765103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:86344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.962 [2024-07-21 11:51:35.765113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.962 [2024-07-21 11:51:35.765124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:86352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.962 [2024-07-21 11:51:35.765134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.962 [2024-07-21 11:51:35.765146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:86360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138daa80 len:0x1000 key:0x180000 00:28:19.962 [2024-07-21 11:51:35.765156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.962 [2024-07-21 11:51:35.765169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:86368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d9a00 len:0x1000 key:0x180000 00:28:19.962 [2024-07-21 11:51:35.765179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.962 [2024-07-21 11:51:35.765190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:86376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.962 [2024-07-21 11:51:35.765200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.962 [2024-07-21 11:51:35.765213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:85712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x183f00 00:28:19.962 [2024-07-21 11:51:35.765224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.962 [2024-07-21 11:51:35.765236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.962 [2024-07-21 11:51:35.765247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.962 [2024-07-21 11:51:35.765260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:85728 len:8 
SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x183f00 00:28:19.962 [2024-07-21 11:51:35.765270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.962 [2024-07-21 11:51:35.765282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:86392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.962 [2024-07-21 11:51:35.765293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.962 [2024-07-21 11:51:35.765305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:86400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d3700 len:0x1000 key:0x180000 00:28:19.962 [2024-07-21 11:51:35.765314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.962 [2024-07-21 11:51:35.765326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:86408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.962 [2024-07-21 11:51:35.765335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.962 [2024-07-21 11:51:35.765347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:85760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x183f00 00:28:19.962 [2024-07-21 11:51:35.765357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.962 [2024-07-21 11:51:35.765368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:86416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d0580 len:0x1000 key:0x180000 00:28:19.962 [2024-07-21 11:51:35.765378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.962 [2024-07-21 11:51:35.765389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:86424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.962 [2024-07-21 11:51:35.765399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.962 [2024-07-21 11:51:35.765410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x183f00 00:28:19.962 [2024-07-21 11:51:35.765422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.962 [2024-07-21 11:51:35.765433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:85792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x183f00 00:28:19.962 [2024-07-21 11:51:35.765442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.962 [2024-07-21 11:51:35.765455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:86432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.962 [2024-07-21 11:51:35.765465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.962 [2024-07-21 11:51:35.765476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:86440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.962 [2024-07-21 11:51:35.765486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.962 [2024-07-21 11:51:35.765497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:86448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca280 len:0x1000 key:0x180000 00:28:19.962 [2024-07-21 11:51:35.765507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.963 [2024-07-21 11:51:35.765518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:86456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.963 [2024-07-21 11:51:35.765527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.963 [2024-07-21 11:51:35.765538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:86464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c8180 len:0x1000 key:0x180000 00:28:19.963 [2024-07-21 11:51:35.765547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.963 [2024-07-21 11:51:35.765558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:86472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c7100 len:0x1000 key:0x180000 00:28:19.963 [2024-07-21 11:51:35.765568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.963 [2024-07-21 11:51:35.765579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:86480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.963 [2024-07-21 11:51:35.765588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.963 [2024-07-21 11:51:35.765599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:86488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c5000 len:0x1000 key:0x180000 00:28:19.963 [2024-07-21 11:51:35.765609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.963 [2024-07-21 11:51:35.765619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:85816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x183f00 00:28:19.963 [2024-07-21 11:51:35.765633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.963 [2024-07-21 11:51:35.765644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:85824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x183f00 00:28:19.963 [2024-07-21 11:51:35.765654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.963 [2024-07-21 11:51:35.765665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:19 nsid:1 lba:86496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c1e80 len:0x1000 key:0x180000 00:28:19.963 [2024-07-21 11:51:35.765676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.963 [2024-07-21 11:51:35.765687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:85832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x183f00 00:28:19.963 [2024-07-21 11:51:35.765697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.963 [2024-07-21 11:51:35.765708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:86504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bfd80 len:0x1000 key:0x180000 00:28:19.963 [2024-07-21 11:51:35.765717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.963 [2024-07-21 11:51:35.765728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:86512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.963 [2024-07-21 11:51:35.765738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.963 [2024-07-21 11:51:35.765749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:85840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x183f00 00:28:19.963 [2024-07-21 11:51:35.765758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.963 [2024-07-21 11:51:35.765769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:86520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.963 [2024-07-21 11:51:35.765778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.963 [2024-07-21 11:51:35.765790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:86528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bbb80 len:0x1000 key:0x180000 00:28:19.963 [2024-07-21 11:51:35.765799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.963 [2024-07-21 11:51:35.765811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:85848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x183f00 00:28:19.963 [2024-07-21 11:51:35.765821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.963 [2024-07-21 11:51:35.765832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:86536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b9a80 len:0x1000 key:0x180000 00:28:19.963 [2024-07-21 11:51:35.765842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.963 [2024-07-21 11:51:35.765853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:86544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b8a00 len:0x1000 key:0x180000 00:28:19.963 
[2024-07-21 11:51:35.765862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.963 [2024-07-21 11:51:35.765873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:86552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.963 [2024-07-21 11:51:35.765882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.963 [2024-07-21 11:51:35.765893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.963 [2024-07-21 11:51:35.765903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.963 [2024-07-21 11:51:35.765918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:86568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b5880 len:0x1000 key:0x180000 00:28:19.963 [2024-07-21 11:51:35.765927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.963 [2024-07-21 11:51:35.765938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:86576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.963 [2024-07-21 11:51:35.765949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.963 [2024-07-21 11:51:35.765960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:86584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b3780 len:0x1000 key:0x180000 00:28:19.963 [2024-07-21 11:51:35.765970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.963 [2024-07-21 11:51:35.765981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:85888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x183f00 00:28:19.963 [2024-07-21 11:51:35.765990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.963 [2024-07-21 11:51:35.766001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:86592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b1680 len:0x1000 key:0x180000 00:28:19.963 [2024-07-21 11:51:35.766011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.963 [2024-07-21 11:51:35.766022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:86600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b0600 len:0x1000 key:0x180000 00:28:19.963 [2024-07-21 11:51:35.766031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.963 [2024-07-21 11:51:35.766043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.963 [2024-07-21 11:51:35.766053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.963 [2024-07-21 
11:51:35.766064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:85912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x183f00 00:28:19.963 [2024-07-21 11:51:35.766074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.963 [2024-07-21 11:51:35.766086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:86616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.963 [2024-07-21 11:51:35.766095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.963 [2024-07-21 11:51:35.766107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:86624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.963 [2024-07-21 11:51:35.766116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.963 [2024-07-21 11:51:35.766127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:85936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x183f00 00:28:19.963 [2024-07-21 11:51:35.766137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.963 [2024-07-21 11:51:35.766148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:86632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.963 [2024-07-21 11:51:35.766157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.963 [2024-07-21 11:51:35.766169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:86640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a9280 len:0x1000 key:0x180000 00:28:19.963 [2024-07-21 11:51:35.766179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.963 [2024-07-21 11:51:35.766191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:85944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x183f00 00:28:19.963 [2024-07-21 11:51:35.766203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.963 [2024-07-21 11:51:35.766215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:86648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a7180 len:0x1000 key:0x180000 00:28:19.963 [2024-07-21 11:51:35.766227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.963 [2024-07-21 11:51:35.766240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:86656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.963 [2024-07-21 11:51:35.766251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.963 [2024-07-21 11:51:35.766263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:86664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a5080 len:0x1000 key:0x180000 
00:28:19.963 [2024-07-21 11:51:35.766273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.963 [2024-07-21 11:51:35.766286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:85968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x183f00 00:28:19.963 [2024-07-21 11:51:35.766296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.963 [2024-07-21 11:51:35.766308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:86672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a2f80 len:0x1000 key:0x180000 00:28:19.963 [2024-07-21 11:51:35.766318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.963 [2024-07-21 11:51:35.766330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:86680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.963 [2024-07-21 11:51:35.766340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.963 [2024-07-21 11:51:35.766352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:86688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.963 [2024-07-21 11:51:35.766362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.964 [2024-07-21 11:51:35.766373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:86696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389fe00 len:0x1000 key:0x180000 00:28:19.964 [2024-07-21 11:51:35.766386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.964 [2024-07-21 11:51:35.766397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.964 [2024-07-21 11:51:35.766407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.964 [2024-07-21 11:51:35.766418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:86712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.964 [2024-07-21 11:51:35.766429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.964 [2024-07-21 11:51:35.766443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x183f00 00:28:19.964 [2024-07-21 11:51:35.766453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.964 [2024-07-21 11:51:35.766464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:86000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x183f00 00:28:19.964 [2024-07-21 11:51:35.766474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.964 
[2024-07-21 11:51:35.766485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:86720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.964 [2024-07-21 11:51:35.766497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.964 [2024-07-21 11:51:35.766509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:86728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013899b00 len:0x1000 key:0x180000 00:28:19.964 [2024-07-21 11:51:35.766519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.964 [2024-07-21 11:51:35.766531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:86736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.964 [2024-07-21 11:51:35.766542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.964 [2024-07-21 11:51:35.766553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:86032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x183f00 00:28:19.964 [2024-07-21 11:51:35.766563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.964 [2024-07-21 11:51:35.766574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:86744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013896980 len:0x1000 key:0x180000 00:28:19.964 [2024-07-21 11:51:35.766585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.964 [2024-07-21 11:51:35.766598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013895900 len:0x1000 key:0x180000 00:28:19.964 [2024-07-21 11:51:35.766607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.964 [2024-07-21 11:51:35.766618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.964 [2024-07-21 11:51:35.766632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.964 [2024-07-21 11:51:35.766644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:86768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013893800 len:0x1000 key:0x180000 00:28:19.964 [2024-07-21 11:51:35.766654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.964 [2024-07-21 11:51:35.766666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:86040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x183f00 00:28:19.964 [2024-07-21 11:51:35.766675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.964 [2024-07-21 11:51:35.766690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:86776 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:19.964 [2024-07-21 11:51:35.766700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.964 [2024-07-21 11:51:35.766711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:86784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013890680 len:0x1000 key:0x180000 00:28:19.964 [2024-07-21 11:51:35.766720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.964 [2024-07-21 11:51:35.766731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:86056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x183f00 00:28:19.964 [2024-07-21 11:51:35.766741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.964 [2024-07-21 11:51:35.766752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:86064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x183f00 00:28:19.964 [2024-07-21 11:51:35.766762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.964 [2024-07-21 11:51:35.766773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:86792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.964 [2024-07-21 11:51:35.766782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.964 [2024-07-21 11:51:35.766794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:86800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388c480 len:0x1000 key:0x180000 00:28:19.964 [2024-07-21 11:51:35.766803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.964 [2024-07-21 11:51:35.766814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:86808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.964 [2024-07-21 11:51:35.766824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.964 [2024-07-21 11:51:35.766835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:86080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x183f00 00:28:19.964 [2024-07-21 11:51:35.766845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.964 [2024-07-21 11:51:35.766856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:86088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x183f00 00:28:19.964 [2024-07-21 11:51:35.766866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.964 [2024-07-21 11:51:35.766877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:86816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013888280 len:0x1000 key:0x180000 00:28:19.964 [2024-07-21 11:51:35.766887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.964 [2024-07-21 11:51:35.766898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:86096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x183f00 00:28:19.964 [2024-07-21 11:51:35.766907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.964 [2024-07-21 11:51:35.766918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:86824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.964 [2024-07-21 11:51:35.766928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.964 [2024-07-21 11:51:35.766941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:86832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.964 [2024-07-21 11:51:35.766951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.964 [2024-07-21 11:51:35.766962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:86840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013884080 len:0x1000 key:0x180000 00:28:19.964 [2024-07-21 11:51:35.766971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.964 [2024-07-21 11:51:35.766982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:86848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.964 [2024-07-21 11:51:35.766992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.964 [2024-07-21 11:51:35.767003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:86856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013881f80 len:0x1000 key:0x180000 00:28:19.964 [2024-07-21 11:51:35.767013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.964 [2024-07-21 11:51:35.767025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:86864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013880f00 len:0x1000 key:0x180000 00:28:19.964 [2024-07-21 11:51:35.767035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.964 [2024-07-21 11:51:35.767046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:86872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387fe80 len:0x1000 key:0x180000 00:28:19.964 [2024-07-21 11:51:35.767056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.964 [2024-07-21 11:51:35.767067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:86128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x183f00 00:28:19.964 [2024-07-21 11:51:35.767077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.964 [2024-07-21 11:51:35.767088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:106 nsid:1 lba:86880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387dd80 len:0x1000 key:0x180000 00:28:19.964 [2024-07-21 11:51:35.767098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.964 [2024-07-21 11:51:35.767109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:86888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387cd00 len:0x1000 key:0x180000 00:28:19.964 [2024-07-21 11:51:35.767119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.964 [2024-07-21 11:51:35.767131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:86896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387bc80 len:0x1000 key:0x180000 00:28:19.964 [2024-07-21 11:51:35.767140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.964 [2024-07-21 11:51:35.767152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:86904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387ac00 len:0x1000 key:0x180000 00:28:19.964 [2024-07-21 11:51:35.767163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.964 [2024-07-21 11:51:35.767176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:86152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x183f00 00:28:19.964 [2024-07-21 11:51:35.767188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.964 [2024-07-21 11:51:35.767200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x183f00 00:28:19.964 [2024-07-21 11:51:35.767210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.965 [2024-07-21 11:51:35.767221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.965 [2024-07-21 11:51:35.767232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.965 [2024-07-21 11:51:35.767244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:86176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x183f00 00:28:19.965 [2024-07-21 11:51:35.767254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.965 [2024-07-21 11:51:35.767266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:86184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x183f00 00:28:19.965 [2024-07-21 11:51:35.767277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.965 [2024-07-21 11:51:35.767289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:86192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 
key:0x183f00 00:28:19.965 [2024-07-21 11:51:35.767299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.965 [2024-07-21 11:51:35.767311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:86200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x183f00 00:28:19.965 [2024-07-21 11:51:35.767321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.965 [2024-07-21 11:51:35.767333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:86920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.965 [2024-07-21 11:51:35.767345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.965 [2024-07-21 11:51:35.767357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:86928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.965 [2024-07-21 11:51:35.767368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.965 [2024-07-21 11:51:35.767379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:86936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.965 [2024-07-21 11:51:35.767390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.965 [2024-07-21 11:51:35.767402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:86944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f3680 len:0x1000 key:0x180000 00:28:19.965 [2024-07-21 11:51:35.767412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.965 [2024-07-21 11:51:35.767424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:86232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x183f00 00:28:19.965 [2024-07-21 11:51:35.767436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.965 [2024-07-21 11:51:35.769269] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:19.965 [2024-07-21 11:51:35.769285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:19.965 [2024-07-21 11:51:35.769294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86952 len:8 PRP1 0x0 PRP2 0x0 00:28:19.965 [2024-07-21 11:51:35.769304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.965 [2024-07-21 11:51:35.769344] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4a40 was disconnected and freed. reset controller. 00:28:19.965 [2024-07-21 11:51:35.769361] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:28:19.965 [2024-07-21 11:51:35.769372] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
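The status pair printed as "(00/08)" in every completion above is the NVMe status code type and status code: SCT 0x0 (generic command status) with SC 0x08, which the NVMe spec defines as Command Aborted due to SQ Deletion. That is why every I/O still queued on qid:1 is drained with the identical completion while the qpair is torn down for failover. A minimal decoding sketch, not part of the test suite, covering only a few generic status values:

    # Decode the "(SCT/SC)" token printed by spdk_nvme_print_completion,
    # e.g. "(00/08)" in the notices above.
    GENERIC_STATUS = {
        0x00: "SUCCESS",
        0x06: "INTERNAL DEVICE ERROR",
        0x07: "ABORTED - BY REQUEST",
        0x08: "ABORTED - SQ DELETION",
    }

    def decode_status(pair: str) -> str:
        """Turn a log token like '(00/08)' into a readable status string."""
        sct, sc = (int(x, 16) for x in pair.strip("()").split("/"))
        if sct == 0x0:  # generic command status type
            return GENERIC_STATUS.get(sc, f"GENERIC sc=0x{sc:02x}")
        return f"sct=0x{sct:x} sc=0x{sc:02x}"

    print(decode_status("(00/08)"))  # -> ABORTED - SQ DELETION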
00:28:19.965 [2024-07-21 11:51:35.771214] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.965 [2024-07-21 11:51:35.786149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:19.965 [2024-07-21 11:51:35.814440] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:19.965 [2024-07-21 11:51:39.211182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:45752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x183f00 00:28:19.965 [2024-07-21 11:51:39.211227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.965 [2024-07-21 11:51:39.211246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:45760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x183f00 00:28:19.965 [2024-07-21 11:51:39.211257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.965 [2024-07-21 11:51:39.211269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:45768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x183f00 00:28:19.965 [2024-07-21 11:51:39.211279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.965 [2024-07-21 11:51:39.211291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:45776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x183f00 00:28:19.965 [2024-07-21 11:51:39.211300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.965 [2024-07-21 11:51:39.211312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:45784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x183f00 00:28:19.965 [2024-07-21 11:51:39.211322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.965 [2024-07-21 11:51:39.211333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:45792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x183f00 00:28:19.965 [2024-07-21 11:51:39.211343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.965 [2024-07-21 11:51:39.211355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:46448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.965 [2024-07-21 11:51:39.211365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.965 [2024-07-21 11:51:39.211376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:46456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b5880 len:0x1000 key:0x182e00 00:28:19.965 [2024-07-21 11:51:39.211393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.965 [2024-07-21 
11:51:39.211404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:45800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x183f00 00:28:19.965 [2024-07-21 11:51:39.211414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.965 [2024-07-21 11:51:39.211425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:46464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.965 [2024-07-21 11:51:39.211434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.965 [2024-07-21 11:51:39.211445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:46472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.965 [2024-07-21 11:51:39.211455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.965 [2024-07-21 11:51:39.211466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:45816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x183f00 00:28:19.965 [2024-07-21 11:51:39.211475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.965 [2024-07-21 11:51:39.211486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:46480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b0600 len:0x1000 key:0x182e00 00:28:19.965 [2024-07-21 11:51:39.211496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.965 [2024-07-21 11:51:39.211507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:46488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f1580 len:0x1000 key:0x182e00 00:28:19.965 [2024-07-21 11:51:39.211517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.965 [2024-07-21 11:51:39.211527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:46496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.965 [2024-07-21 11:51:39.211537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.965 [2024-07-21 11:51:39.211548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:45848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x183f00 00:28:19.965 [2024-07-21 11:51:39.211557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.965 [2024-07-21 11:51:39.211568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:46504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.965 [2024-07-21 11:51:39.211578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.965 [2024-07-21 11:51:39.211589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:45864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x183f00 
00:28:19.965 [2024-07-21 11:51:39.211599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.965 [2024-07-21 11:51:39.211610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:46512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ec300 len:0x1000 key:0x182e00 00:28:19.965 [2024-07-21 11:51:39.211619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.965 [2024-07-21 11:51:39.211635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:46520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138eb280 len:0x1000 key:0x182e00 00:28:19.965 [2024-07-21 11:51:39.211647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.965 [2024-07-21 11:51:39.211658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:45888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x183f00 00:28:19.965 [2024-07-21 11:51:39.211668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.965 [2024-07-21 11:51:39.211679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:45896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x183f00 00:28:19.965 [2024-07-21 11:51:39.211688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.965 [2024-07-21 11:51:39.211699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:46528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.965 [2024-07-21 11:51:39.211709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.965 [2024-07-21 11:51:39.211720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:45904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x183f00 00:28:19.965 [2024-07-21 11:51:39.211730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.965 [2024-07-21 11:51:39.211740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:46536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.965 [2024-07-21 11:51:39.211750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.965 [2024-07-21 11:51:39.211761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:45912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x183f00 00:28:19.966 [2024-07-21 11:51:39.211770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.966 [2024-07-21 11:51:39.211781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:46544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e3f00 len:0x1000 key:0x182e00 00:28:19.966 [2024-07-21 11:51:39.211791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.966 [2024-07-21 11:51:39.211802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:46552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.966 [2024-07-21 11:51:39.211811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.966 [2024-07-21 11:51:39.211822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:46560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.966 [2024-07-21 11:51:39.211831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.966 [2024-07-21 11:51:39.211843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:45928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x183f00 00:28:19.966 [2024-07-21 11:51:39.211852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.966 [2024-07-21 11:51:39.211864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:45936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x183f00 00:28:19.966 [2024-07-21 11:51:39.211874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.966 [2024-07-21 11:51:39.211886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:46568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ad480 len:0x1000 key:0x182e00 00:28:19.966 [2024-07-21 11:51:39.211895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.966 [2024-07-21 11:51:39.211906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:46576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.966 [2024-07-21 11:51:39.211916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.966 [2024-07-21 11:51:39.211927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:46584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.966 [2024-07-21 11:51:39.211936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.966 [2024-07-21 11:51:39.211948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:46592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa300 len:0x1000 key:0x182e00 00:28:19.966 [2024-07-21 11:51:39.211957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.966 [2024-07-21 11:51:39.211968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:46600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.966 [2024-07-21 11:51:39.211977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.966 [2024-07-21 11:51:39.211989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:45968 len:8 SGL KEYED DATA BLOCK 
ADDRESS 0x20000755a000 len:0x1000 key:0x183f00 00:28:19.966 [2024-07-21 11:51:39.211998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.966 [2024-07-21 11:51:39.212009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:46608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a7180 len:0x1000 key:0x182e00 00:28:19.966 [2024-07-21 11:51:39.212019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.966 [2024-07-21 11:51:39.212030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:46616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a6100 len:0x1000 key:0x182e00 00:28:19.966 [2024-07-21 11:51:39.212040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.966 [2024-07-21 11:51:39.212051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:45984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x183f00 00:28:19.966 [2024-07-21 11:51:39.212060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.966 [2024-07-21 11:51:39.212071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:45992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x183f00 00:28:19.966 [2024-07-21 11:51:39.212081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.966 [2024-07-21 11:51:39.212091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:46624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.966 [2024-07-21 11:51:39.212101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.966 [2024-07-21 11:51:39.212112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:46632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.966 [2024-07-21 11:51:39.212123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.966 [2024-07-21 11:51:39.212133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:46640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a0e80 len:0x1000 key:0x182e00 00:28:19.966 [2024-07-21 11:51:39.212143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.966 [2024-07-21 11:51:39.212154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:46000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x183f00 00:28:19.966 [2024-07-21 11:51:39.212163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.966 [2024-07-21 11:51:39.212175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:46008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x183f00 00:28:19.966 [2024-07-21 11:51:39.212184] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.966 [2024-07-21 11:51:39.212195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:46648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389dd00 len:0x1000 key:0x182e00 00:28:19.966 [2024-07-21 11:51:39.212205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.966 [2024-07-21 11:51:39.212216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:46024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x183f00 00:28:19.967 [2024-07-21 11:51:39.212226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.967 [2024-07-21 11:51:39.212238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:46656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389bc00 len:0x1000 key:0x182e00 00:28:19.967 [2024-07-21 11:51:39.212247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.967 [2024-07-21 11:51:39.212258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:46664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389ab80 len:0x1000 key:0x182e00 00:28:19.967 [2024-07-21 11:51:39.212267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.967 [2024-07-21 11:51:39.212278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:46672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.967 [2024-07-21 11:51:39.212288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.967 [2024-07-21 11:51:39.212299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:46680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013898a80 len:0x1000 key:0x182e00 00:28:19.967 [2024-07-21 11:51:39.212308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.967 [2024-07-21 11:51:39.212319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:46056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x183f00 00:28:19.967 [2024-07-21 11:51:39.212329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.967 [2024-07-21 11:51:39.212340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:46064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x183f00 00:28:19.967 [2024-07-21 11:51:39.212349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.967 [2024-07-21 11:51:39.212361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:46688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dfd00 len:0x1000 key:0x182e00 00:28:19.967 [2024-07-21 11:51:39.212371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 
sqhd:5310 p:0 m:0 dnr:0 00:28:19.967 [2024-07-21 11:51:39.212383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:46080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x183f00 00:28:19.967 [2024-07-21 11:51:39.212392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.967 [2024-07-21 11:51:39.212403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:46696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.967 [2024-07-21 11:51:39.212413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.967 [2024-07-21 11:51:39.212423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:46704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dcb80 len:0x1000 key:0x182e00 00:28:19.967 [2024-07-21 11:51:39.212433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.967 [2024-07-21 11:51:39.212444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:46712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dbb00 len:0x1000 key:0x182e00 00:28:19.967 [2024-07-21 11:51:39.212454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.967 [2024-07-21 11:51:39.212465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:46104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x183f00 00:28:19.967 [2024-07-21 11:51:39.212474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.967 [2024-07-21 11:51:39.212486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:46112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x183f00 00:28:19.967 [2024-07-21 11:51:39.212495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.967 [2024-07-21 11:51:39.212506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:46720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d8980 len:0x1000 key:0x182e00 00:28:19.967 [2024-07-21 11:51:39.212515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.967 [2024-07-21 11:51:39.212526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:46728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.967 [2024-07-21 11:51:39.212536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.967 [2024-07-21 11:51:39.212547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:46736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.967 [2024-07-21 11:51:39.212556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.967 [2024-07-21 11:51:39.212567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:46744 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.967 [2024-07-21 11:51:39.212578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.967 [2024-07-21 11:51:39.212589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:46752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d4780 len:0x1000 key:0x182e00 00:28:19.967 [2024-07-21 11:51:39.212598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.967 [2024-07-21 11:51:39.212610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:46760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d3700 len:0x1000 key:0x182e00 00:28:19.967 [2024-07-21 11:51:39.212620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.967 [2024-07-21 11:51:39.212635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:46768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.967 [2024-07-21 11:51:39.212644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.967 [2024-07-21 11:51:39.212655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:46776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d1600 len:0x1000 key:0x182e00 00:28:19.967 [2024-07-21 11:51:39.212665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.967 [2024-07-21 11:51:39.212676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:46784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.967 [2024-07-21 11:51:39.212685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.967 [2024-07-21 11:51:39.212696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:46792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.967 [2024-07-21 11:51:39.212705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.967 [2024-07-21 11:51:39.212717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:46800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f7880 len:0x1000 key:0x182e00 00:28:19.967 [2024-07-21 11:51:39.212726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.967 [2024-07-21 11:51:39.212737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:46808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f6800 len:0x1000 key:0x182e00 00:28:19.967 [2024-07-21 11:51:39.212747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.967 [2024-07-21 11:51:39.212758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:46168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x183f00 00:28:19.967 [2024-07-21 11:51:39.212768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.967 [2024-07-21 11:51:39.212779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:46816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f4700 len:0x1000 key:0x182e00 00:28:19.967 [2024-07-21 11:51:39.212789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.967 [2024-07-21 11:51:39.212800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:46184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x183f00 00:28:19.967 [2024-07-21 11:51:39.212809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.967 [2024-07-21 11:51:39.212820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:46824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f2600 len:0x1000 key:0x182e00 00:28:19.967 [2024-07-21 11:51:39.212830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.967 [2024-07-21 11:51:39.212841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:46832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013896980 len:0x1000 key:0x182e00 00:28:19.967 [2024-07-21 11:51:39.212851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.967 [2024-07-21 11:51:39.212863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:46840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013895900 len:0x1000 key:0x182e00 00:28:19.967 [2024-07-21 11:51:39.212872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.967 [2024-07-21 11:51:39.212891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:46200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x183f00 00:28:19.967 [2024-07-21 11:51:39.212901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.967 [2024-07-21 11:51:39.212912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:46848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.967 [2024-07-21 11:51:39.212921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.967 [2024-07-21 11:51:39.212932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:46856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013892780 len:0x1000 key:0x182e00 00:28:19.967 [2024-07-21 11:51:39.212942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.967 [2024-07-21 11:51:39.212953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:46216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x183f00 00:28:19.967 [2024-07-21 11:51:39.212962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.967 [2024-07-21 
11:51:39.212974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:46864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.967 [2024-07-21 11:51:39.212983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.967 [2024-07-21 11:51:39.212994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:46872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388f600 len:0x1000 key:0x182e00 00:28:19.967 [2024-07-21 11:51:39.213004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.967 [2024-07-21 11:51:39.213015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:46240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x183f00 00:28:19.967 [2024-07-21 11:51:39.213024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.967 [2024-07-21 11:51:39.213035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:46880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388d500 len:0x1000 key:0x182e00 00:28:19.967 [2024-07-21 11:51:39.213045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.968 [2024-07-21 11:51:39.213056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:46888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388c480 len:0x1000 key:0x182e00 00:28:19.968 [2024-07-21 11:51:39.213066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.968 [2024-07-21 11:51:39.213076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:46256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x183f00 00:28:19.968 [2024-07-21 11:51:39.213086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.968 [2024-07-21 11:51:39.213098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:46896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388a380 len:0x1000 key:0x182e00 00:28:19.968 [2024-07-21 11:51:39.213108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.968 [2024-07-21 11:51:39.213119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:46904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013889300 len:0x1000 key:0x182e00 00:28:19.968 [2024-07-21 11:51:39.213128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.968 [2024-07-21 11:51:39.213139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:46912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.968 [2024-07-21 11:51:39.213149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.968 [2024-07-21 11:51:39.213160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:46920 len:8 SGL KEYED 
DATA BLOCK ADDRESS 0x200013887200 len:0x1000 key:0x182e00 00:28:19.968 [2024-07-21 11:51:39.213169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.968 [2024-07-21 11:51:39.213180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:46928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d0580 len:0x1000 key:0x182e00 00:28:19.968 [2024-07-21 11:51:39.213189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.968 [2024-07-21 11:51:39.213200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:46936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cf500 len:0x1000 key:0x182e00 00:28:19.968 [2024-07-21 11:51:39.213210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.968 [2024-07-21 11:51:39.213221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:46944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.968 [2024-07-21 11:51:39.213231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.968 [2024-07-21 11:51:39.213242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:46952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cd400 len:0x1000 key:0x182e00 00:28:19.968 [2024-07-21 11:51:39.213251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.968 [2024-07-21 11:51:39.213262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.968 [2024-07-21 11:51:39.213271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.968 [2024-07-21 11:51:39.213283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:46296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x183f00 00:28:19.968 [2024-07-21 11:51:39.213292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.968 [2024-07-21 11:51:39.213303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:46304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x183f00 00:28:19.968 [2024-07-21 11:51:39.213312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.968 [2024-07-21 11:51:39.213323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:46968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.968 [2024-07-21 11:51:39.213334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.968 [2024-07-21 11:51:39.213345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:46312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x183f00 00:28:19.968 [2024-07-21 11:51:39.213354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.968 [2024-07-21 11:51:39.213365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:46320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x183f00 00:28:19.968 [2024-07-21 11:51:39.213375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.968 [2024-07-21 11:51:39.213385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.968 [2024-07-21 11:51:39.213395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.968 [2024-07-21 11:51:39.213405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:46984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.968 [2024-07-21 11:51:39.213415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.968 [2024-07-21 11:51:39.213426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:46992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c3f80 len:0x1000 key:0x182e00 00:28:19.968 [2024-07-21 11:51:39.213436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.968 [2024-07-21 11:51:39.213447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:47000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c2f00 len:0x1000 key:0x182e00 00:28:19.968 [2024-07-21 11:51:39.213457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.968 [2024-07-21 11:51:39.213468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:47008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c1e80 len:0x1000 key:0x182e00 00:28:19.968 [2024-07-21 11:51:39.213477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.968 [2024-07-21 11:51:39.213488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:47016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c0e00 len:0x1000 key:0x182e00 00:28:19.968 [2024-07-21 11:51:39.213498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.968 [2024-07-21 11:51:39.213508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:46360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x183f00 00:28:19.968 [2024-07-21 11:51:39.213518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.968 [2024-07-21 11:51:39.213529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:47024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bed00 len:0x1000 key:0x182e00 00:28:19.968 [2024-07-21 11:51:39.213538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.968 [2024-07-21 11:51:39.213549] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:46376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x183f00 00:28:19.968 [2024-07-21 11:51:39.213559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.968 [2024-07-21 11:51:39.213571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:47032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bcc00 len:0x1000 key:0x182e00 00:28:19.968 [2024-07-21 11:51:39.213581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.968 [2024-07-21 11:51:39.213592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:47040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.968 [2024-07-21 11:51:39.213601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.968 [2024-07-21 11:51:39.213612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:47048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.968 [2024-07-21 11:51:39.213622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.968 [2024-07-21 11:51:39.213635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:46392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x183f00 00:28:19.968 [2024-07-21 11:51:39.213645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.968 [2024-07-21 11:51:39.213656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:46400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x183f00 00:28:19.968 [2024-07-21 11:51:39.213665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.968 [2024-07-21 11:51:39.213676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:47056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.968 [2024-07-21 11:51:39.213685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.968 [2024-07-21 11:51:39.213696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:47064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013885100 len:0x1000 key:0x182e00 00:28:19.968 [2024-07-21 11:51:39.213706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.968 [2024-07-21 11:51:39.213717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:46408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x183f00 00:28:19.968 [2024-07-21 11:51:39.213726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.968 [2024-07-21 11:51:39.213737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:47072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013883000 len:0x1000 
key:0x182e00 00:28:19.968 [2024-07-21 11:51:39.213747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.968 [2024-07-21 11:51:39.213758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:46416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x183f00 00:28:19.968 [2024-07-21 11:51:39.213767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.968 [2024-07-21 11:51:39.213778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:46424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x183f00 00:28:19.968 [2024-07-21 11:51:39.213790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.968 [2024-07-21 11:51:39.213801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:47080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.968 [2024-07-21 11:51:39.213811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.968 [2024-07-21 11:51:39.213823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:47088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387ee00 len:0x1000 key:0x182e00 00:28:19.968 [2024-07-21 11:51:39.213832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.968 [2024-07-21 11:51:39.213844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:46440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x183f00 00:28:19.968 [2024-07-21 11:51:39.213853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.969 [2024-07-21 11:51:39.213864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:47096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.969 [2024-07-21 11:51:39.213874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.969 [2024-07-21 11:51:39.215747] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:19.969 [2024-07-21 11:51:39.215761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:19.969 [2024-07-21 11:51:39.215770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47104 len:8 PRP1 0x0 PRP2 0x0 00:28:19.969 [2024-07-21 11:51:39.215780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.969 [2024-07-21 11:51:39.215820] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 00:28:19.969 [2024-07-21 11:51:39.215832] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:28:19.969 [2024-07-21 11:51:39.215842] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
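Each failover episode above follows the same shape: a CQ transport error on the qpair, the queued I/O drained with "ABORTED - SQ DELETION", a "Start failover from A to B" notice, then a controller reset. A minimal sketch for summarizing such a console log; the file name console.log is an assumption, and the patterns are taken verbatim from the notices above:

    import re

    # Count aborted I/Os between failover notices and report each hop.
    FAILOVER = re.compile(r"Start failover from (\S+) to (\S+)")
    ABORT = re.compile(r"ABORTED - SQ DELETION")

    aborts = 0
    with open("console.log") as log:  # hypothetical saved console output
        for line in log:
            aborts += len(ABORT.findall(line))
            for src, dst in FAILOVER.findall(line):
                print(f"failover {src} -> {dst} after {aborts} aborted I/Os")
                aborts = 0

Run against this section, the sketch would report the 4420 -> 4421 and 4421 -> 4422 hops seen above, each preceded by the per-command abort notices it counted.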
00:28:19.969 [2024-07-21 11:51:39.217488] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:19.969 [2024-07-21 11:51:39.232344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:28:19.969 [2024-07-21 11:51:39.265446] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:28:19.969 [2024-07-21 11:51:43.570535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:68520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cb300 len:0x1000 key:0x180000
00:28:19.969 [2024-07-21 11:51:43.570575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d399f000 sqhd:5310 p:0 m:0 dnr:0
00:28:19.969 [2024-07-21 11:51:43.570595 .. 11:51:43.573180] [roughly 130 further queued READ/WRITE commands on sqid:1 (lba 67816-69160, len:8 each, keys 0x180000/0x183f00) printed and completed with the same ABORTED - SQ DELETION (00/08) status; repetitive per-command NOTICE pairs omitted]
00:28:19.973 [2024-07-21 11:51:43.575127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:28:19.973 [2024-07-21 11:51:43.575140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:19.973 [2024-07-21 11:51:43.575149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68512 len:8 PRP1 0x0 PRP2 0x0
00:28:19.973 [2024-07-21 11:51:43.575159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.973 [2024-07-21 11:51:43.575198] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller.
00:28:19.973 [2024-07-21 11:51:43.575209] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420
00:28:19.973 [2024-07-21 11:51:43.575219] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
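The burst of NOTICE pairs above is the expected signature of a path failover: when the active RDMA qpair is deleted, every command still queued on it is completed with ABORTED - SQ DELETION rather than dropped, and bdev_nvme retries on the next registered trid. A quick way to quantify the aborts from the bdevperf output file (a sketch, assuming the try.txt path that failover.sh cats later in this log):

    # Count commands completed with ABORTED - SQ DELETION during failover.
    # The grep pattern matches the spdk_nvme_print_completion NOTICE format above.
    log=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
    grep -c 'ABORTED - SQ DELETION' "$log"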
00:28:19.973 [2024-07-21 11:51:43.577129] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:19.973 [2024-07-21 11:51:43.591582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:28:19.973 [2024-07-21 11:51:43.627618] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:28:19.973
00:28:19.973                        Latency(us)
00:28:19.973 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:19.973 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:19.973 Verification LBA range: start 0x0 length 0x4000
00:28:19.973 NVMe0n1            :      15.00   19912.45      77.78     293.05       0.00    6323.11     439.09 1020054.73
00:28:19.973 ===================================================================================================================
00:28:19.973 Total              :   19912.45      77.78     293.05       0.00    6323.11     439.09 1020054.73
00:28:19.973 Received shutdown signal, test time was about 15.000000 seconds
00:28:19.973
00:28:19.973                        Latency(us)
00:28:19.973 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:19.973 ===================================================================================================================
00:28:19.973 Total              :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:28:19.973 11:51:48 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:28:19.973 11:51:48 -- host/failover.sh@65 -- # count=3
00:28:19.973 11:51:48 -- host/failover.sh@67 -- # (( count != 3 ))
00:28:19.973 11:51:48 -- host/failover.sh@73 -- # bdevperf_pid=2507736
00:28:19.973 11:51:48 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:28:19.973 11:51:48 -- host/failover.sh@75 -- # waitforlisten 2507736 /var/tmp/bdevperf.sock
00:28:19.973 11:51:48 -- common/autotest_common.sh@819 -- # '[' -z 2507736 ']'
00:28:19.973 11:51:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:28:19.973 11:51:48 -- common/autotest_common.sh@824 -- # local max_retries=100
00:28:19.973 11:51:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
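failover.sh@65-67 above is the pass/fail gate for the 15-second run: the test failed over across three listener ports, so exactly three 'Resetting controller successful' lines must appear in the bdevperf output. A condensed sketch of that assertion, assuming the same try.txt path (the script's own variable names are not visible in this trace):

    # Sketch of the reset-count assertion (failover.sh@65-67).
    count=$(grep -c 'Resetting controller successful' \
        /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt)
    (( count != 3 )) && exit 1    # one successful reset per failover hop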
00:28:19.973 11:51:48 -- common/autotest_common.sh@828 -- # xtrace_disable
00:28:19.973 11:51:48 -- common/autotest_common.sh@10 -- # set +x
00:28:20.540 11:51:49 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:28:20.540 11:51:49 -- common/autotest_common.sh@852 -- # return 0
00:28:20.540 11:51:49 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
00:28:20.848 [2024-07-21 11:51:49.978378] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 ***
00:28:20.848 11:51:50 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
00:28:20.848 [2024-07-21 11:51:50.151021] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 ***
00:28:20.848 11:51:50 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:21.131 NVMe0n1
00:28:21.131 11:51:50 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:21.389
00:28:21.389 11:51:50 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:21.647
00:28:21.647 11:51:50 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:21.647 11:51:50 -- host/failover.sh@82 -- # grep -q NVMe0
00:28:21.905 11:51:51 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:21.905 11:51:51 -- host/failover.sh@87 -- # sleep 3
00:28:25.190 11:51:54 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:25.190 11:51:54 -- host/failover.sh@88 -- # grep -q NVMe0
00:28:25.190 11:51:54 -- host/failover.sh@90 -- # run_test_pid=2508785
00:28:25.190 11:51:54 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:28:25.190 11:51:54 -- host/failover.sh@92 -- # wait 2508785
00:28:26.123 0
00:28:26.382 11:51:55 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:28:26.382 [2024-07-21 11:51:49.040744] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:28:26.382 [2024-07-21 11:51:49.040804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2507736 ]
00:28:26.382 EAL: No free 2048 kB hugepages reported on node 1
00:28:26.382 [2024-07-21 11:51:49.130010] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:26.382 [2024-07-21 11:51:49.163424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:26.382 [2024-07-21 11:51:51.228668] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:28:26.382 [2024-07-21 11:51:51.229309] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.382 [2024-07-21 11:51:51.229335] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.382 [2024-07-21 11:51:51.248747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:28:26.382 [2024-07-21 11:51:51.264729] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:28:26.382 Running I/O for 1 seconds...
00:28:26.382
00:28:26.382                        Latency(us)
00:28:26.382 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:26.382 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:26.382 Verification LBA range: start 0x0 length 0x4000
00:28:26.382 NVMe0n1            :       1.00   24812.20      96.92       0.00       0.00    5134.74    1009.25   17196.65
00:28:26.382 ===================================================================================================================
00:28:26.382 Total              :   24812.20      96.92       0.00       0.00    5134.74    1009.25   17196.65
00:28:26.382 11:51:55 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:26.382 11:51:55 -- host/failover.sh@95 -- # grep -q NVMe0
00:28:26.641 11:51:55 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:26.641 11:51:55 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:26.641 11:51:55 -- host/failover.sh@99 -- # grep -q NVMe0
00:28:26.902 11:51:56 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:26.902 11:51:56 -- host/failover.sh@101 -- # sleep 3
00:28:30.189 11:51:59 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:30.189 11:51:59 -- host/failover.sh@103 -- # grep -q NVMe0
00:28:30.189 11:51:59 -- host/failover.sh@108 -- # killprocess 2507736
00:28:30.189 11:51:59 -- common/autotest_common.sh@926 -- # '[' -z 2507736 ']'
00:28:30.189 11:51:59 -- common/autotest_common.sh@930 -- # kill -0 2507736
00:28:30.189 11:51:59 -- common/autotest_common.sh@931 -- # uname
00:28:30.189 11:51:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:28:30.189 11:51:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2507736
00:28:30.189 11:51:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:28:30.189 11:51:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:28:30.189 11:51:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2507736'
00:28:30.189 killing process with pid 2507736
00:28:30.189 11:51:59 -- common/autotest_common.sh@945 -- # kill 2507736
00:28:30.189 11:51:59 -- common/autotest_common.sh@950 -- # wait 2507736
00:28:30.447 11:51:59 -- host/failover.sh@110 -- # sync
00:28:30.447 11:51:59 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:30.447 11:51:59 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:28:30.447 11:51:59 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:28:30.447 11:51:59 -- host/failover.sh@116 -- # nvmftestfini
00:28:30.447 11:51:59 -- nvmf/common.sh@476 -- # nvmfcleanup
00:28:30.447 11:51:59 -- nvmf/common.sh@116 -- # sync
00:28:30.447 11:51:59 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']'
00:28:30.447 11:51:59 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']'
00:28:30.447 11:51:59 -- nvmf/common.sh@119 -- # set +e
00:28:30.447 11:51:59 -- nvmf/common.sh@120 -- # for i in {1..20}
00:28:30.447 11:51:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma
00:28:30.706 rmmod nvme_rdma
00:28:30.706 rmmod nvme_fabrics
00:28:30.706 11:51:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:28:30.706 11:51:59 -- nvmf/common.sh@123 -- # set -e
00:28:30.706 11:51:59 -- nvmf/common.sh@124 -- # return 0
00:28:30.706 11:51:59 -- nvmf/common.sh@477 -- # '[' -n 2504549 ']'
00:28:30.706 11:51:59 -- nvmf/common.sh@478 -- # killprocess 2504549
00:28:30.706 11:51:59 -- common/autotest_common.sh@926 -- # '[' -z 2504549 ']'
00:28:30.706 11:51:59 -- common/autotest_common.sh@930 -- # kill -0 2504549
00:28:30.706 11:51:59 -- common/autotest_common.sh@931 -- # uname
00:28:30.706 11:51:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:28:30.706 11:51:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2504549
00:28:30.706 11:51:59 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:28:30.706 11:51:59 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:28:30.706 11:51:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2504549'
00:28:30.706 killing process with pid 2504549
00:28:30.706 11:51:59 -- common/autotest_common.sh@945 -- # kill 2504549
00:28:30.706 11:51:59 -- common/autotest_common.sh@950 -- # wait 2504549
00:28:30.965 11:52:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:28:30.965 11:52:00 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]]
00:28:30.965
00:28:30.965 real	0m38.756s
00:28:30.965 user	2m3.355s
00:28:30.965 sys	0m8.874s
00:28:30.965 11:52:00 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:30.965 11:52:00 -- common/autotest_common.sh@10 -- # set +x
00:28:30.965 ************************************
00:28:30.965 END TEST nvmf_failover
00:28:30.965 ************************************
00:28:30.965 11:52:00 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma
00:28:30.965 11:52:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:28:30.965 11:52:00 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:28:30.965 11:52:00 -- common/autotest_common.sh@10 -- # set +x
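Stripped of the xtrace noise, the failover test above reduces to a handful of rpc.py calls: register extra RDMA listeners on the target, attach the same controller name to each port so bdev_nvme records the spares as failover trids, then detach the active path while bdevperf drives I/O. A minimal sketch using only commands that appear in this trace (SPDK_ROOT is shorthand for the jenkins workspace path, not a variable the script itself defines):

    SPDK_ROOT=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    RPC=$SPDK_ROOT/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock
    NQN=nqn.2016-06.io.spdk:cnode1

    # Target: expose the subsystem on two additional RDMA ports.
    $RPC nvmf_subsystem_add_listener $NQN -t rdma -a 192.168.100.8 -s 4421
    $RPC nvmf_subsystem_add_listener $NQN -t rdma -a 192.168.100.8 -s 4422

    # Host (bdevperf): attaching NVMe0 to each port registers 4421/4422
    # as failover trids for the same controller.
    for port in 4420 4421 4422; do
        $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t rdma \
            -a 192.168.100.8 -s $port -f ipv4 -n $NQN
    done

    # Drop the active path; in-flight I/O is aborted (SQ DELETION) and
    # bdev_nvme resets onto the next trid, as logged above.
    $RPC -s $SOCK bdev_nvme_detach_controller NVMe0 -t rdma \
        -a 192.168.100.8 -s 4420 -f ipv4 -n $NQN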
00:28:30.965 ************************************ 00:28:30.965 START TEST nvmf_discovery 00:28:30.965 ************************************ 00:28:30.965 11:52:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:28:30.965 * Looking for test storage... 00:28:30.965 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:30.965 11:52:00 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:30.965 11:52:00 -- nvmf/common.sh@7 -- # uname -s 00:28:30.965 11:52:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:30.965 11:52:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:30.965 11:52:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:30.965 11:52:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:30.965 11:52:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:30.965 11:52:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:30.965 11:52:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:30.965 11:52:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:30.965 11:52:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:30.965 11:52:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:31.223 11:52:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:31.223 11:52:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:31.223 11:52:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:31.223 11:52:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:31.223 11:52:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:31.223 11:52:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:31.223 11:52:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:31.223 11:52:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:31.223 11:52:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:31.223 11:52:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.224 11:52:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.224 11:52:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.224 11:52:00 -- paths/export.sh@5 -- # export PATH 00:28:31.224 11:52:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.224 11:52:00 -- nvmf/common.sh@46 -- # : 0 00:28:31.224 11:52:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:31.224 11:52:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:31.224 11:52:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:31.224 11:52:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:31.224 11:52:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:31.224 11:52:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:31.224 11:52:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:31.224 11:52:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:31.224 11:52:00 -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:28:31.224 11:52:00 -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:28:31.224 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:28:31.224 11:52:00 -- host/discovery.sh@13 -- # exit 0 00:28:31.224 00:28:31.224 real 0m0.126s 00:28:31.224 user 0m0.061s 00:28:31.224 sys 0m0.076s 00:28:31.224 11:52:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:31.224 11:52:00 -- common/autotest_common.sh@10 -- # set +x 00:28:31.224 ************************************ 00:28:31.224 END TEST nvmf_discovery 00:28:31.224 ************************************ 00:28:31.224 11:52:00 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:28:31.224 11:52:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:31.224 11:52:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:31.224 11:52:00 -- common/autotest_common.sh@10 -- # set +x 00:28:31.224 ************************************ 00:28:31.224 START TEST nvmf_discovery_remove_ifc 00:28:31.224 ************************************ 00:28:31.224 11:52:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:28:31.224 * Looking for test storage... 
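nvmf_discovery finishes in about a tenth of a second because discovery.sh exits before doing any work when the transport is RDMA; the same guard fires again for discovery_remove_ifc below. In outline (a sketch; the real script takes the transport from the --transport= argument via nvmf/common.sh):

    # Sketch of the early-exit guard in discovery.sh / discovery_remove_ifc.sh;
    # TEST_TRANSPORT is assumed to be set from --transport= by nvmf/common.sh.
    if [ "$TEST_TRANSPORT" == rdma ]; then
        echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
        exit 0    # exit zero so run_test records a pass/skip rather than a failure
    fi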
00:28:31.224 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:31.224 11:52:00 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:31.224 11:52:00 -- nvmf/common.sh@7 -- # uname -s 00:28:31.224 11:52:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:31.224 11:52:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:31.224 11:52:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:31.224 11:52:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:31.224 11:52:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:31.224 11:52:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:31.224 11:52:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:31.224 11:52:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:31.224 11:52:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:31.224 11:52:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:31.224 11:52:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:31.224 11:52:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:31.224 11:52:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:31.224 11:52:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:31.224 11:52:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:31.224 11:52:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:31.224 11:52:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:31.224 11:52:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:31.224 11:52:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:31.224 11:52:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.224 11:52:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.224 11:52:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.224 11:52:00 -- paths/export.sh@5 -- # export PATH 00:28:31.224 11:52:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.224 11:52:00 -- nvmf/common.sh@46 -- # : 0 00:28:31.224 11:52:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:31.224 11:52:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:31.224 11:52:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:31.224 11:52:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:31.224 11:52:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:31.224 11:52:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:31.224 11:52:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:31.224 11:52:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:31.224 11:52:00 -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:28:31.224 11:52:00 -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:28:31.224 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
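The build_nvmf_app_args trace repeated in each of these preambles shows the target command line being grown as a bash array, so optional pieces (shared-memory id, tracepoint mask, no-huge mode) can be appended conditionally without quoting problems. Roughly (values mirror the trace; the real common.sh derives them from the environment):

    # Rough shape of build_nvmf_app_args as traced above (a sketch, not the
    # exact common.sh source).
    NVMF_APP=(/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt)
    NVMF_APP_SHM_ID=0
    NO_HUGE=()                                    # stays empty unless no-huge testing
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id + enable all tracepoint groups
    NVMF_APP+=("${NO_HUGE[@]}")
    echo "target command line: ${NVMF_APP[*]}"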
00:28:31.224 11:52:00 -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:28:31.224 00:28:31.224 real 0m0.134s 00:28:31.224 user 0m0.063s 00:28:31.224 sys 0m0.078s 00:28:31.224 11:52:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:31.224 11:52:00 -- common/autotest_common.sh@10 -- # set +x 00:28:31.224 ************************************ 00:28:31.224 END TEST nvmf_discovery_remove_ifc 00:28:31.224 ************************************ 00:28:31.224 11:52:00 -- nvmf/nvmf.sh@106 -- # [[ rdma == \t\c\p ]] 00:28:31.224 11:52:00 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:28:31.224 11:52:00 -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:28:31.224 11:52:00 -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:28:31.224 11:52:00 -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:28:31.224 11:52:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:31.224 11:52:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:31.224 11:52:00 -- common/autotest_common.sh@10 -- # set +x 00:28:31.224 ************************************ 00:28:31.224 START TEST nvmf_bdevperf 00:28:31.224 ************************************ 00:28:31.224 11:52:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:28:31.483 * Looking for test storage... 00:28:31.483 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:31.483 11:52:00 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:31.483 11:52:00 -- nvmf/common.sh@7 -- # uname -s 00:28:31.483 11:52:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:31.483 11:52:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:31.483 11:52:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:31.483 11:52:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:31.483 11:52:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:31.483 11:52:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:31.483 11:52:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:31.483 11:52:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:31.483 11:52:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:31.483 11:52:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:31.483 11:52:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:31.483 11:52:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:31.483 11:52:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:31.483 11:52:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:31.483 11:52:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:31.483 11:52:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:31.483 11:52:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:31.483 11:52:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:31.483 11:52:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:31.483 11:52:00 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.483 11:52:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.483 11:52:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.483 11:52:00 -- paths/export.sh@5 -- # export PATH 00:28:31.483 11:52:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.483 11:52:00 -- nvmf/common.sh@46 -- # : 0 00:28:31.483 11:52:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:31.483 11:52:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:31.483 11:52:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:31.483 11:52:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:31.483 11:52:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:31.483 11:52:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:31.483 11:52:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:31.483 11:52:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:31.483 11:52:00 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:31.483 11:52:00 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:31.483 11:52:00 -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:31.483 11:52:00 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:28:31.483 11:52:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:31.483 11:52:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:31.483 11:52:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:31.483 11:52:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:31.483 11:52:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:28:31.483 11:52:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:31.483 11:52:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:31.483 11:52:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:31.483 11:52:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:31.483 11:52:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:31.483 11:52:00 -- common/autotest_common.sh@10 -- # set +x 00:28:39.588 11:52:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:39.588 11:52:08 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:39.588 11:52:08 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:39.588 11:52:08 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:39.588 11:52:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:39.588 11:52:08 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:39.588 11:52:08 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:39.588 11:52:08 -- nvmf/common.sh@294 -- # net_devs=() 00:28:39.588 11:52:08 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:39.588 11:52:08 -- nvmf/common.sh@295 -- # e810=() 00:28:39.588 11:52:08 -- nvmf/common.sh@295 -- # local -ga e810 00:28:39.588 11:52:08 -- nvmf/common.sh@296 -- # x722=() 00:28:39.588 11:52:08 -- nvmf/common.sh@296 -- # local -ga x722 00:28:39.588 11:52:08 -- nvmf/common.sh@297 -- # mlx=() 00:28:39.588 11:52:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:39.588 11:52:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:39.588 11:52:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:39.588 11:52:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:39.588 11:52:08 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:39.588 11:52:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:39.588 11:52:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:39.588 11:52:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:39.588 11:52:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:39.588 11:52:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:39.588 11:52:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:39.588 11:52:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:39.588 11:52:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:39.588 11:52:08 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:28:39.588 11:52:08 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:28:39.588 11:52:08 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:28:39.588 11:52:08 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:28:39.588 11:52:08 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:28:39.588 11:52:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:39.588 11:52:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:39.588 11:52:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:39.588 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:39.588 11:52:08 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:39.588 11:52:08 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:39.588 11:52:08 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:39.588 11:52:08 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:39.588 11:52:08 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 
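gather_supported_nvmf_pci_devs, traced above, whitelists NICs by PCI vendor:device pairs (Intel 0x8086 E810/X722 ids, Mellanox 0x15b3 ConnectX ids) and matches this host's two 0x15b3:0x1015 (ConnectX-4 Lx) ports. The same check can be made straight from sysfs; a sketch:

    # Sketch: enumerate PCI functions matching Mellanox 0x15b3:0x1015 via
    # sysfs, the same ids the log reports for 0000:d9:00.0 and 0000:d9:00.1.
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(cat "$dev/vendor")   # e.g. 0x15b3
        device=$(cat "$dev/device")   # e.g. 0x1015
        if [ "$vendor" = 0x15b3 ] && [ "$device" = 0x1015 ]; then
            echo "Found ${dev##*/} ($vendor - $device)"
        fi
    done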
00:28:39.589 11:52:08 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:39.589 11:52:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:39.589 11:52:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:39.589 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:39.589 11:52:08 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:39.589 11:52:08 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:39.589 11:52:08 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:39.589 11:52:08 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:39.589 11:52:08 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:28:39.589 11:52:08 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:39.589 11:52:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:39.589 11:52:08 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:28:39.589 11:52:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:39.589 11:52:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.589 11:52:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:39.589 11:52:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.589 11:52:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:39.589 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:39.589 11:52:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.589 11:52:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:39.589 11:52:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.589 11:52:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:39.589 11:52:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.589 11:52:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:39.589 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:39.589 11:52:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.589 11:52:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:39.589 11:52:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:39.589 11:52:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:39.589 11:52:08 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:28:39.589 11:52:08 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:28:39.589 11:52:08 -- nvmf/common.sh@408 -- # rdma_device_init 00:28:39.589 11:52:08 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:28:39.589 11:52:08 -- nvmf/common.sh@57 -- # uname 00:28:39.589 11:52:08 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:28:39.589 11:52:08 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:28:39.589 11:52:08 -- nvmf/common.sh@62 -- # modprobe ib_core 00:28:39.589 11:52:08 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:28:39.589 11:52:08 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:28:39.589 11:52:08 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:28:39.589 11:52:08 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:28:39.589 11:52:08 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:28:39.589 11:52:08 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:28:39.589 11:52:08 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:39.589 11:52:08 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:28:39.589 11:52:08 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:39.589 11:52:08 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:39.589 11:52:08 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:39.589 11:52:08 -- nvmf/common.sh@53 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:39.589 11:52:08 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:39.589 11:52:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:39.589 11:52:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:39.589 11:52:08 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:39.589 11:52:08 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:39.589 11:52:08 -- nvmf/common.sh@104 -- # continue 2 00:28:39.589 11:52:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:39.589 11:52:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:39.589 11:52:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:39.589 11:52:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:39.589 11:52:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:39.589 11:52:08 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:39.589 11:52:08 -- nvmf/common.sh@104 -- # continue 2 00:28:39.589 11:52:08 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:39.589 11:52:08 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:28:39.589 11:52:08 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:39.589 11:52:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:39.589 11:52:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:39.589 11:52:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:39.589 11:52:08 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:28:39.589 11:52:08 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:28:39.589 11:52:08 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:28:39.589 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:39.589 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:39.589 altname enp217s0f0np0 00:28:39.589 altname ens818f0np0 00:28:39.589 inet 192.168.100.8/24 scope global mlx_0_0 00:28:39.589 valid_lft forever preferred_lft forever 00:28:39.589 11:52:08 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:39.589 11:52:08 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:28:39.589 11:52:08 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:39.589 11:52:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:39.589 11:52:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:39.589 11:52:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:39.589 11:52:08 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:28:39.589 11:52:08 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:28:39.589 11:52:08 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:28:39.589 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:39.589 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:39.589 altname enp217s0f1np1 00:28:39.589 altname ens818f1np1 00:28:39.589 inet 192.168.100.9/24 scope global mlx_0_1 00:28:39.589 valid_lft forever preferred_lft forever 00:28:39.589 11:52:08 -- nvmf/common.sh@410 -- # return 0 00:28:39.589 11:52:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:39.589 11:52:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:39.589 11:52:08 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:28:39.589 11:52:08 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:28:39.589 11:52:08 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:28:39.589 11:52:08 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:39.589 11:52:08 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:39.589 11:52:08 -- 
nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:39.589 11:52:08 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:39.589 11:52:08 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:39.589 11:52:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:39.589 11:52:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:39.589 11:52:08 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:39.589 11:52:08 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:39.589 11:52:08 -- nvmf/common.sh@104 -- # continue 2 00:28:39.589 11:52:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:39.589 11:52:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:39.589 11:52:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:39.589 11:52:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:39.589 11:52:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:39.589 11:52:08 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:39.589 11:52:08 -- nvmf/common.sh@104 -- # continue 2 00:28:39.589 11:52:08 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:39.589 11:52:08 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:28:39.589 11:52:08 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:39.589 11:52:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:39.589 11:52:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:39.589 11:52:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:39.589 11:52:08 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:39.589 11:52:08 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:28:39.589 11:52:08 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:39.589 11:52:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:39.589 11:52:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:39.589 11:52:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:39.589 11:52:08 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:28:39.589 192.168.100.9' 00:28:39.589 11:52:08 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:28:39.589 192.168.100.9' 00:28:39.589 11:52:08 -- nvmf/common.sh@445 -- # head -n 1 00:28:39.589 11:52:08 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:39.589 11:52:08 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:28:39.589 192.168.100.9' 00:28:39.589 11:52:08 -- nvmf/common.sh@446 -- # tail -n +2 00:28:39.589 11:52:08 -- nvmf/common.sh@446 -- # head -n 1 00:28:39.589 11:52:08 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:39.589 11:52:08 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:28:39.589 11:52:08 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:39.589 11:52:08 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:28:39.589 11:52:08 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:28:39.589 11:52:08 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:28:39.589 11:52:08 -- host/bdevperf.sh@25 -- # tgt_init 00:28:39.589 11:52:08 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:39.589 11:52:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:39.589 11:52:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:39.589 11:52:08 -- common/autotest_common.sh@10 -- # set +x 00:28:39.589 11:52:08 -- nvmf/common.sh@469 -- # nvmfpid=2513852 00:28:39.589 11:52:08 -- nvmf/common.sh@470 -- # waitforlisten 2513852 00:28:39.589 11:52:08 -- nvmf/common.sh@468 -- 
# /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:39.589 11:52:08 -- common/autotest_common.sh@819 -- # '[' -z 2513852 ']' 00:28:39.589 11:52:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.589 11:52:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:39.589 11:52:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:39.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:39.589 11:52:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:39.589 11:52:08 -- common/autotest_common.sh@10 -- # set +x 00:28:39.589 [2024-07-21 11:52:08.932401] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:39.589 [2024-07-21 11:52:08.932452] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:39.589 EAL: No free 2048 kB hugepages reported on node 1 00:28:39.848 [2024-07-21 11:52:09.017168] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:39.848 [2024-07-21 11:52:09.053438] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:39.848 [2024-07-21 11:52:09.053567] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:39.848 [2024-07-21 11:52:09.053577] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:39.848 [2024-07-21 11:52:09.053586] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:39.848 [2024-07-21 11:52:09.053687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:39.848 [2024-07-21 11:52:09.053714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:39.848 [2024-07-21 11:52:09.053716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:40.416 11:52:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:40.416 11:52:09 -- common/autotest_common.sh@852 -- # return 0 00:28:40.416 11:52:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:40.416 11:52:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:40.416 11:52:09 -- common/autotest_common.sh@10 -- # set +x 00:28:40.416 11:52:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:40.416 11:52:09 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:40.416 11:52:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:40.416 11:52:09 -- common/autotest_common.sh@10 -- # set +x 00:28:40.416 [2024-07-21 11:52:09.803869] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1dbfcd0/0x1dc41c0) succeed. 00:28:40.416 [2024-07-21 11:52:09.815387] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1dc1220/0x1e05850) succeed. 
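nvmfappstart backgrounds nvmf_tgt and then blocks in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers; only after that are the nvmf_create_transport and subsystem RPCs issued. The essence of that startup handshake (a sketch; the real helper also bounds the number of retries):

    # Essence of nvmfappstart + waitforlisten, using the traced arguments
    # -i 0 -e 0xFFFF -m 0xE (a sketch, not the verbatim helper).
    tgt=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    "$tgt" -i 0 -e 0xFFFF -m 0xE & nvmfpid=$!
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited early' >&2; exit 1; }
        sleep 0.5
    done
    echo "nvmf_tgt listening on /var/tmp/spdk.sock (pid $nvmfpid)"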
00:28:40.675 11:52:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:40.675 11:52:09 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:40.675 11:52:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:40.675 11:52:09 -- common/autotest_common.sh@10 -- # set +x 00:28:40.675 Malloc0 00:28:40.675 11:52:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:40.675 11:52:09 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:40.675 11:52:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:40.675 11:52:09 -- common/autotest_common.sh@10 -- # set +x 00:28:40.675 11:52:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:40.675 11:52:09 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:40.675 11:52:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:40.675 11:52:09 -- common/autotest_common.sh@10 -- # set +x 00:28:40.675 11:52:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:40.675 11:52:09 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:40.675 11:52:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:40.675 11:52:09 -- common/autotest_common.sh@10 -- # set +x 00:28:40.675 [2024-07-21 11:52:09.957363] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:40.675 11:52:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:40.675 11:52:09 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:40.675 11:52:09 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:40.675 11:52:09 -- nvmf/common.sh@520 -- # config=() 00:28:40.675 11:52:09 -- nvmf/common.sh@520 -- # local subsystem config 00:28:40.675 11:52:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:40.675 11:52:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:40.675 { 00:28:40.675 "params": { 00:28:40.675 "name": "Nvme$subsystem", 00:28:40.675 "trtype": "$TEST_TRANSPORT", 00:28:40.675 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:40.675 "adrfam": "ipv4", 00:28:40.675 "trsvcid": "$NVMF_PORT", 00:28:40.675 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:40.675 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:40.675 "hdgst": ${hdgst:-false}, 00:28:40.675 "ddgst": ${ddgst:-false} 00:28:40.675 }, 00:28:40.675 "method": "bdev_nvme_attach_controller" 00:28:40.675 } 00:28:40.675 EOF 00:28:40.675 )") 00:28:40.675 11:52:09 -- nvmf/common.sh@542 -- # cat 00:28:40.675 11:52:09 -- nvmf/common.sh@544 -- # jq . 00:28:40.675 11:52:09 -- nvmf/common.sh@545 -- # IFS=, 00:28:40.675 11:52:09 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:40.675 "params": { 00:28:40.675 "name": "Nvme1", 00:28:40.675 "trtype": "rdma", 00:28:40.675 "traddr": "192.168.100.8", 00:28:40.675 "adrfam": "ipv4", 00:28:40.675 "trsvcid": "4420", 00:28:40.675 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:40.675 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:40.675 "hdgst": false, 00:28:40.675 "ddgst": false 00:28:40.675 }, 00:28:40.675 "method": "bdev_nvme_attach_controller" 00:28:40.675 }' 00:28:40.675 [2024-07-21 11:52:10.007883] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
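gen_nvmf_target_json prints the attach-controller stanza seen above; the jq step then wraps it in the subsystems/bdev envelope that bdevperf actually consumes from --json. Written out to a file, an equivalent standalone run of this first pass would look roughly like this (the envelope shape follows nvmf/common.sh; treat it as an illustration):

    # Roughly equivalent standalone invocation of the 1-second verify run
    # traced above; bdevperf reads the bdev config from the --json file.
    cat > /tmp/nvme1.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "rdma",
                "traddr": "192.168.100.8",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
        --json /tmp/nvme1.json -q 128 -o 4096 -w verify -t 1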
00:28:40.675 [2024-07-21 11:52:10.007931] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2513915 ]
00:28:40.933 EAL: No free 2048 kB hugepages reported on node 1
00:28:40.933 [2024-07-21 11:52:10.100446] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:40.933 [2024-07-21 11:52:10.137759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:40.965 Running I/O for 1 seconds...
00:28:42.305
00:28:42.305                                          Latency(us)
00:28:42.305 Device Information : runtime(s)       IOPS      MiB/s    Fail/s   TO/s     Average    min        max
00:28:42.305 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:42.305 Verification LBA range: start 0x0 length 0x4000
00:28:42.305 Nvme1n1            : 1.00            25137.74  98.19    0.00     0.00     5068.16    1205.86    12163.48
00:28:42.305 ===================================================================================================================
00:28:42.305 Total              :                 25137.74  98.19    0.00     0.00     5068.16    1205.86    12163.48
00:28:42.305 11:52:11 -- host/bdevperf.sh@30 -- # bdevperfpid=2514183
00:28:42.305 11:52:11 -- host/bdevperf.sh@32 -- # sleep 3
00:28:42.305 11:52:11 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:28:42.305 11:52:11 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:28:42.305 11:52:11 -- nvmf/common.sh@520 -- # config=()
00:28:42.305 11:52:11 -- nvmf/common.sh@520 -- # local subsystem config
00:28:42.305 11:52:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:28:42.305 11:52:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:28:42.305 {
00:28:42.305   "params": {
00:28:42.305     "name": "Nvme$subsystem",
00:28:42.305     "trtype": "$TEST_TRANSPORT",
00:28:42.305     "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:42.305     "adrfam": "ipv4",
00:28:42.305     "trsvcid": "$NVMF_PORT",
00:28:42.305     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:42.305     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:42.305     "hdgst": ${hdgst:-false},
00:28:42.305     "ddgst": ${ddgst:-false}
00:28:42.305   },
00:28:42.305   "method": "bdev_nvme_attach_controller"
00:28:42.305 }
00:28:42.305 EOF
00:28:42.305 )")
00:28:42.305 11:52:11 -- nvmf/common.sh@542 -- # cat
00:28:42.305 11:52:11 -- nvmf/common.sh@544 -- # jq .
00:28:42.305 11:52:11 -- nvmf/common.sh@545 -- # IFS=,
00:28:42.305 11:52:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{
00:28:42.305   "params": {
00:28:42.305     "name": "Nvme1",
00:28:42.305     "trtype": "rdma",
00:28:42.305     "traddr": "192.168.100.8",
00:28:42.305     "adrfam": "ipv4",
00:28:42.305     "trsvcid": "4420",
00:28:42.305     "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:28:42.305     "hostnqn": "nqn.2016-06.io.spdk:host1",
00:28:42.305     "hdgst": false,
00:28:42.305     "ddgst": false
00:28:42.305   },
00:28:42.305   "method": "bdev_nvme_attach_controller"
00:28:42.305 }'
00:28:42.305 [2024-07-21 11:52:11.557760] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
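The second bdevperf run is the failover test proper: 15 seconds of verify I/O, with the /dev/fd/63 path coming from bash process substitution of gen_nvmf_target_json, and host/bdevperf.sh hard-killing the target mid-run (the kill -9 2513852 below) to push the initiator through its abort and reset path. The driver reduces to roughly:

    # Rough shape of the fault-injection step in host/bdevperf.sh: run I/O in
    # the background, then SIGKILL the target part-way through (a sketch;
    # gen_nvmf_target_json and nvmfpid come from nvmf/common.sh).
    bdevperf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf
    "$bdevperf" --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f &
    bdevperfpid=$!
    sleep 3
    kill -9 "$nvmfpid"   # target gone: outstanding I/O drains as ABORTED - SQ DELETION
    sleep 3              # let the host notice the dead controller and start recovery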
00:28:42.305 [2024-07-21 11:52:11.557818] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2514183 ] 00:28:42.305 EAL: No free 2048 kB hugepages reported on node 1 00:28:42.305 [2024-07-21 11:52:11.646498] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.305 [2024-07-21 11:52:11.680650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:42.562 Running I/O for 15 seconds... 00:28:45.155 11:52:14 -- host/bdevperf.sh@33 -- # kill -9 2513852 00:28:45.155 11:52:14 -- host/bdevperf.sh@35 -- # sleep 3 00:28:46.534 [2024-07-21 11:52:15.539501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.534 [2024-07-21 11:52:15.539538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3a025000 sqhd:5310 p:0 m:0 dnr:0 00:28:46.534 [2024-07-21 11:52:15.539557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b2700 len:0x1000 key:0x182e00 00:28:46.534 [2024-07-21 11:52:15.539567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3a025000 sqhd:5310 p:0 m:0 dnr:0 00:28:46.534 [2024-07-21 11:52:15.539578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x183f00 00:28:46.534 [2024-07-21 11:52:15.539588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3a025000 sqhd:5310 p:0 m:0 dnr:0 00:28:46.534 [2024-07-21 11:52:15.539598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b0600 len:0x1000 key:0x182e00 00:28:46.534 [2024-07-21 11:52:15.539607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3a025000 sqhd:5310 p:0 m:0 dnr:0 00:28:46.534 [2024-07-21 11:52:15.539618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.534 [2024-07-21 11:52:15.539629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3a025000 sqhd:5310 p:0 m:0 dnr:0 00:28:46.534 [2024-07-21 11:52:15.539639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.534 [2024-07-21 11:52:15.539647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3a025000 sqhd:5310 p:0 m:0 dnr:0 00:28:46.534 [2024-07-21 11:52:15.539657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:12192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ad480 len:0x1000 key:0x182e00 00:28:46.534 [2024-07-21 11:52:15.539682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3a025000 sqhd:5310 p:0 m:0 dnr:0 00:28:46.534 [2024-07-21 11:52:15.539692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x183f00 
00:28:46.534 [2024-07-21 11:52:15.539702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3a025000 sqhd:5310 p:0 m:0 dnr:0 00:28:46.534 [2024-07-21 11:52:15.539712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x183f00 00:28:46.534 [2024-07-21 11:52:15.539721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3a025000 sqhd:5310 p:0 m:0 dnr:0 00:28:46.534 [2024-07-21 11:52:15.539736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x183f00 00:28:46.534 [2024-07-21 11:52:15.539745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3a025000 sqhd:5310 p:0 m:0 dnr:0 00:28:46.534 [2024-07-21 11:52:15.539755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a9280 len:0x1000 key:0x182e00 00:28:46.534 [2024-07-21 11:52:15.539764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3a025000 sqhd:5310 p:0 m:0 dnr:0 00:28:46.534 [2024-07-21 11:52:15.539775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a8200 len:0x1000 key:0x182e00 00:28:46.534 [2024-07-21 11:52:15.539783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3a025000 sqhd:5310 p:0 m:0 dnr:0 00:28:46.534 [2024-07-21 11:52:15.539794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a7180 len:0x1000 key:0x182e00 00:28:46.534 [2024-07-21 11:52:15.539803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3a025000 sqhd:5310 p:0 m:0 dnr:0 00:28:46.534 [2024-07-21 11:52:15.539814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a6100 len:0x1000 key:0x182e00 00:28:46.534 [2024-07-21 11:52:15.539822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3a025000 sqhd:5310 p:0 m:0 dnr:0 00:28:46.534 [2024-07-21 11:52:15.539833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:11520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x183f00 00:28:46.534 [2024-07-21 11:52:15.539842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3a025000 sqhd:5310 p:0 m:0 dnr:0 00:28:46.534 [2024-07-21 11:52:15.539852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x183f00 00:28:46.534 [2024-07-21 11:52:15.539861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3a025000 sqhd:5310 p:0 m:0 dnr:0 00:28:46.534 [2024-07-21 11:52:15.539871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a2f80 len:0x1000 key:0x182e00 00:28:46.534 [2024-07-21 11:52:15.539880] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3a025000 sqhd:5310 p:0 m:0 dnr:0 00:28:46.534 [2024-07-21 11:52:15.539891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x183f00 00:28:46.534 [2024-07-21 11:52:15.539900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3a025000 sqhd:5310 p:0 m:0 dnr:0 00:28:46.534 [2024-07-21 11:52:15.539910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.534 [2024-07-21 11:52:15.539919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3a025000 sqhd:5310 p:0 m:0 dnr:0 00:28:46.534 [2024-07-21 11:52:15.539930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.534 [2024-07-21 11:52:15.539938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3a025000 sqhd:5310 p:0 m:0 dnr:0 00:28:46.534 [2024-07-21 11:52:15.539949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389ed80 len:0x1000 key:0x182e00 00:28:46.534 [2024-07-21 11:52:15.539959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3a025000 sqhd:5310 p:0 m:0 dnr:0 00:28:46.534 [2024-07-21 11:52:15.539969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.534 [2024-07-21 11:52:15.539978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3a025000 sqhd:5310 p:0 m:0 dnr:0 00:28:46.534 [2024-07-21 11:52:15.539988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x183f00 00:28:46.534 [2024-07-21 11:52:15.539997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3a025000 sqhd:5310 p:0 m:0 dnr:0 00:28:46.534 [2024-07-21 11:52:15.540007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x183f00 00:28:46.534 [2024-07-21 11:52:15.540016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3a025000 sqhd:5310 p:0 m:0 dnr:0 00:28:46.534 [2024-07-21 11:52:15.540026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389ab80 len:0x1000 key:0x182e00 00:28:46.534 [2024-07-21 11:52:15.540035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3a025000 sqhd:5310 p:0 m:0 dnr:0 00:28:46.534 [2024-07-21 11:52:15.540045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x183f00 00:28:46.534 [2024-07-21 11:52:15.540054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3a025000 sqhd:5310 p:0 m:0 dnr:0 00:28:46.534 [2024-07-21 
11:52:15.540064 - 11:52:15.542002] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: (long run of repeated entries elided) dozens of queued READ/WRITE commands on sqid:1 (len:8, lba 11600-12808, SGL KEYED/DATA BLOCK), each completed with ABORTED - SQ DELETION (00/08) qid:1 cdw0:3a025000 sqhd:5310 p:0 m:0 dnr:0
00:28:46.537 [2024-07-21 11:52:15.543953] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:28:46.537 [2024-07-21 11:52:15.543967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:46.537 [2024-07-21 11:52:15.543976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12144 len:8 PRP1 0x0 PRP2 0x0
00:28:46.537 [2024-07-21 11:52:15.543985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:46.537 [2024-07-21 11:52:15.544026] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4a40 was disconnected and freed. reset controller.
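In the run above, spdk_nvme_print_completion renders each status as an (SCT/SC) pair: "(00/08)" is Status Code Type 0x0 (Generic Command Status) with Status Code 0x08, "Command Aborted due to SQ Deletion". In other words, every command still queued on the I/O submission queue is failed back when the queue is torn down for the controller reset. A small helper for decoding these pairs while reading such logs (a hypothetical convenience for log triage, not part of the SPDK tree):

    # decode_nvme_status SCT SC: maps the "(SCT/SC)" pair printed by
    # spdk_nvme_print_completion to a human-readable NVMe status
    decode_nvme_status() {
        local sct=$1 sc=$2
        case "$sct" in
            00) case "$sc" in
                    00) echo "GENERIC: Successful Completion" ;;
                    07) echo "GENERIC: Command Abort Requested" ;;
                    08) echo "GENERIC: Command Aborted due to SQ Deletion" ;;
                    *)  echo "GENERIC: status code 0x$sc" ;;
                esac ;;
            01) echo "COMMAND SPECIFIC: status code 0x$sc" ;;
            02) echo "MEDIA/DATA INTEGRITY: status code 0x$sc" ;;
            *)  echo "SCT 0x$sct / SC 0x$sc" ;;
        esac
    }
    decode_nvme_status 00 08    # -> GENERIC: Command Aborted due to SQ Deletion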
00:28:46.537 [2024-07-21 11:52:15.545843] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:46.537 [2024-07-21 11:52:15.559499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:28:46.537 [2024-07-21 11:52:15.562222] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:28:46.537 [2024-07-21 11:52:15.562240] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:28:46.537 [2024-07-21 11:52:15.562248] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0
00:28:47.475 [2024-07-21 11:52:16.566307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:28:47.475 [2024-07-21 11:52:16.566367] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.475 [2024-07-21 11:52:16.566720] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.475 [2024-07-21 11:52:16.566756] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.475 [2024-07-21 11:52:16.566787] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:28:47.475 [2024-07-21 11:52:16.566929] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:47.475 [2024-07-21 11:52:16.568581] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.475 [2024-07-21 11:52:16.578616] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.475 [2024-07-21 11:52:16.581397] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:47.475 [2024-07-21 11:52:16.581452] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:47.475 [2024-07-21 11:52:16.581480] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:28:48.413 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2513852 Killed "${NVMF_APP[@]}" "$@" 00:28:48.413 11:52:17 -- host/bdevperf.sh@36 -- # tgt_init 00:28:48.413 11:52:17 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:48.413 11:52:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:48.413 11:52:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:48.413 11:52:17 -- common/autotest_common.sh@10 -- # set +x 00:28:48.413 11:52:17 -- nvmf/common.sh@469 -- # nvmfpid=2515264 00:28:48.413 11:52:17 -- nvmf/common.sh@470 -- # waitforlisten 2515264 00:28:48.413 11:52:17 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:48.413 11:52:17 -- common/autotest_common.sh@819 -- # '[' -z 2515264 ']' 00:28:48.413 11:52:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:48.413 11:52:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:48.413 11:52:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:48.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:48.413 11:52:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:48.413 11:52:17 -- common/autotest_common.sh@10 -- # set +x 00:28:48.413 [2024-07-21 11:52:17.575442] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:48.413 [2024-07-21 11:52:17.575489] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:48.413 [2024-07-21 11:52:17.585369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:48.413 [2024-07-21 11:52:17.585391] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.413 [2024-07-21 11:52:17.585506] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.413 [2024-07-21 11:52:17.585516] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.413 [2024-07-21 11:52:17.585526] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:48.413 [2024-07-21 11:52:17.586143] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:48.413 [2024-07-21 11:52:17.587257] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
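Here bdevperf.sh has deliberately killed the old target (the "Killed" line), and tgt_init/nvmfappstart relaunch build/bin/nvmf_tgt on core mask 0xE while waitforlisten blocks until the new process answers on /var/tmp/spdk.sock; the reconnect failures interleaved above are the host side still probing while the target is down. A minimal sketch of what such a wait loop does (an illustration of the idea, not the actual autotest_common.sh implementation; assumes scripts/rpc.py from the SPDK tree):

    pid=2515264
    for ((i = 0; i < 100; i++)); do
        # bail out if the target process died during startup
        kill -0 "$pid" 2> /dev/null || { echo "target $pid exited" >&2; exit 1; }
        # done as soon as the RPC server answers on the default socket
        if scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.1
    done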
00:28:48.413 [2024-07-21 11:52:17.597903] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.413 [2024-07-21 11:52:17.599979] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:48.413 [2024-07-21 11:52:17.599998] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:48.413 [2024-07-21 11:52:17.600006] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:28:48.413 EAL: No free 2048 kB hugepages reported on node 1 00:28:48.413 [2024-07-21 11:52:17.662414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:48.413 [2024-07-21 11:52:17.700691] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:48.413 [2024-07-21 11:52:17.700796] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:48.413 [2024-07-21 11:52:17.700808] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:48.413 [2024-07-21 11:52:17.700819] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:48.413 [2024-07-21 11:52:17.700856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:48.413 [2024-07-21 11:52:17.700884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:48.413 [2024-07-21 11:52:17.700885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:48.980 11:52:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:48.980 11:52:18 -- common/autotest_common.sh@852 -- # return 0 00:28:48.980 11:52:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:48.980 11:52:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:48.980 11:52:18 -- common/autotest_common.sh@10 -- # set +x 00:28:49.238 11:52:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:49.238 11:52:18 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:49.238 11:52:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:49.238 11:52:18 -- common/autotest_common.sh@10 -- # set +x 00:28:49.238 [2024-07-21 11:52:18.459164] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2329cd0/0x232e1c0) succeed. 00:28:49.238 [2024-07-21 11:52:18.469357] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x232b220/0x236f850) succeed. 
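With the target up, rpc_cmd nvmf_create_transport instantiates the RDMA transport inside it, and the two create_ib_device notices show the transport binding both mlx5 ports. Issued by hand rather than through the harness, the same step would look like this (a sketch: the flags are copied from the trace, the direct rpc.py invocation is the assumption):

    # create the NVMe-oF RDMA transport with the knobs the test uses:
    # 1024 shared receive buffers, 8192-byte IO unit size
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport \
        -t rdma --num-shared-buffers 1024 -u 8192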
00:28:49.238 11:52:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:49.238 11:52:18 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:49.238 11:52:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:49.238 11:52:18 -- common/autotest_common.sh@10 -- # set +x 00:28:49.238 Malloc0 00:28:49.238 11:52:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:49.239 11:52:18 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:49.239 11:52:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:49.239 11:52:18 -- common/autotest_common.sh@10 -- # set +x 00:28:49.239 [2024-07-21 11:52:18.603959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:49.239 [2024-07-21 11:52:18.603989] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.239 [2024-07-21 11:52:18.604120] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.239 [2024-07-21 11:52:18.604131] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.239 [2024-07-21 11:52:18.604141] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:49.239 [2024-07-21 11:52:18.605051] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:49.239 [2024-07-21 11:52:18.605993] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.239 11:52:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:49.239 11:52:18 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:49.239 11:52:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:49.239 11:52:18 -- common/autotest_common.sh@10 -- # set +x 00:28:49.239 11:52:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:49.239 11:52:18 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:49.239 11:52:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:49.239 11:52:18 -- common/autotest_common.sh@10 -- # set +x 00:28:49.239 [2024-07-21 11:52:18.617081] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.239 [2024-07-21 11:52:18.617594] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:49.239 11:52:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:49.239 11:52:18 -- host/bdevperf.sh@38 -- # wait 2514183 00:28:49.239 [2024-07-21 11:52:18.649128] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
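The @18-@21 RPCs above rebuild the whole target-side stack for the remainder of the run: a 64 MB malloc bdev with 512-byte blocks, the cnode1 subsystem, its namespace, and the RDMA listener on 192.168.100.8:4420, at which point the host's pending reset finally succeeds ("Resetting controller successful") and bdevperf can finish; the latency summary follows. The equivalent standalone sequence (same RPCs as the trace; direct rpc.py use is the assumption):

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420

In the summary that follows, the MiB/s column is simply IOPS times the 4 KiB IO size: 18312.00 x 4096 bytes is about 71.53 MiB/s.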
00:28:59.204
00:28:59.204 Latency(us)
00:28:59.204 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:59.204 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:59.204 Verification LBA range: start 0x0 length 0x4000
00:28:59.204 Nvme1n1 : 15.00 18312.00 71.53 16653.29 0.00 3649.47 498.07 1026765.62
00:28:59.204 ===================================================================================================================
00:28:59.204 Total : 18312.00 71.53 16653.29 0.00 3649.47 498.07 1026765.62
00:28:59.204 11:52:27 -- host/bdevperf.sh@39 -- # sync
00:28:59.204 11:52:27 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:59.204 11:52:27 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:59.204 11:52:27 -- common/autotest_common.sh@10 -- # set +x
00:28:59.204 11:52:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:59.204 11:52:27 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:28:59.204 11:52:27 -- host/bdevperf.sh@44 -- # nvmftestfini
00:28:59.204 11:52:27 -- nvmf/common.sh@476 -- # nvmfcleanup
00:28:59.204 11:52:27 -- nvmf/common.sh@116 -- # sync
00:28:59.204 11:52:27 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']'
00:28:59.204 11:52:27 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']'
00:28:59.204 11:52:27 -- nvmf/common.sh@119 -- # set +e
00:28:59.204 11:52:27 -- nvmf/common.sh@120 -- # for i in {1..20}
00:28:59.204 11:52:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma
00:28:59.204 rmmod nvme_rdma
00:28:59.204 rmmod nvme_fabrics
00:28:59.204 11:52:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:28:59.204 11:52:27 -- nvmf/common.sh@123 -- # set -e
00:28:59.204 11:52:27 -- nvmf/common.sh@124 -- # return 0
00:28:59.204 11:52:27 -- nvmf/common.sh@477 -- # '[' -n 2515264 ']'
00:28:59.204 11:52:27 -- nvmf/common.sh@478 -- # killprocess 2515264
00:28:59.204 11:52:27 -- common/autotest_common.sh@926 -- # '[' -z 2515264 ']'
00:28:59.204 11:52:27 -- common/autotest_common.sh@930 -- # kill -0 2515264
00:28:59.204 11:52:27 -- common/autotest_common.sh@931 -- # uname
00:28:59.204 11:52:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:28:59.204 11:52:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2515264
00:28:59.204 11:52:27 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:28:59.204 11:52:27 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:28:59.204 11:52:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2515264'
00:28:59.204 killing process with pid 2515264
00:28:59.204 11:52:27 -- common/autotest_common.sh@945 -- # kill 2515264
00:28:59.204 11:52:27 -- common/autotest_common.sh@950 -- # wait 2515264
00:28:59.204 11:52:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:28:59.204 11:52:27 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]]
00:28:59.204
00:28:59.204 real 0m26.832s
00:28:59.204 user 1m4.722s
00:28:59.204 sys 0m7.418s
00:28:59.204 11:52:27 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:59.204 11:52:27 -- common/autotest_common.sh@10 -- # set +x
00:28:59.204 ************************************
00:28:59.204 END TEST nvmf_bdevperf
00:28:59.204 ************************************
00:28:59.204 11:52:27 -- nvmf/nvmf.sh@124 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma
00:28:59.204 11:52:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
11:52:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:59.204 11:52:27 -- common/autotest_common.sh@10 -- # set +x 00:28:59.204 ************************************ 00:28:59.204 START TEST nvmf_target_disconnect 00:28:59.204 ************************************ 00:28:59.204 11:52:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:28:59.204 * Looking for test storage... 00:28:59.204 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:59.204 11:52:27 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:59.204 11:52:27 -- nvmf/common.sh@7 -- # uname -s 00:28:59.204 11:52:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:59.204 11:52:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:59.204 11:52:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:59.204 11:52:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:59.204 11:52:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:59.204 11:52:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:59.204 11:52:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:59.204 11:52:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:59.204 11:52:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:59.204 11:52:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:59.204 11:52:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:59.204 11:52:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:59.204 11:52:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:59.204 11:52:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:59.204 11:52:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:59.204 11:52:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:59.204 11:52:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:59.204 11:52:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:59.204 11:52:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:59.204 11:52:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.204 11:52:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.204 11:52:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.204 11:52:27 -- paths/export.sh@5 -- # export PATH 00:28:59.204 11:52:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.204 11:52:27 -- nvmf/common.sh@46 -- # : 0 00:28:59.204 11:52:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:59.204 11:52:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:59.204 11:52:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:59.204 11:52:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:59.204 11:52:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:59.204 11:52:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:59.204 11:52:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:59.204 11:52:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:59.204 11:52:27 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:28:59.204 11:52:27 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:59.204 11:52:27 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:59.204 11:52:27 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:28:59.204 11:52:27 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:28:59.204 11:52:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:59.205 11:52:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:59.205 11:52:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:59.205 11:52:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:59.205 11:52:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.205 11:52:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:59.205 11:52:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.205 11:52:27 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:59.205 11:52:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:59.205 11:52:27 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:59.205 11:52:27 -- common/autotest_common.sh@10 -- # set +x 00:29:07.310 11:52:35 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:07.310 11:52:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:07.310 11:52:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:07.310 11:52:35 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:07.310 11:52:35 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:07.310 11:52:35 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:07.310 11:52:35 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:07.310 
11:52:35 -- nvmf/common.sh@294 -- # net_devs=() 00:29:07.310 11:52:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:07.310 11:52:35 -- nvmf/common.sh@295 -- # e810=() 00:29:07.310 11:52:35 -- nvmf/common.sh@295 -- # local -ga e810 00:29:07.310 11:52:35 -- nvmf/common.sh@296 -- # x722=() 00:29:07.310 11:52:35 -- nvmf/common.sh@296 -- # local -ga x722 00:29:07.310 11:52:35 -- nvmf/common.sh@297 -- # mlx=() 00:29:07.310 11:52:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:07.310 11:52:35 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:07.310 11:52:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:07.310 11:52:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:07.310 11:52:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:07.310 11:52:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:07.310 11:52:35 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:07.310 11:52:35 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:07.311 11:52:35 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:07.311 11:52:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:07.311 11:52:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:07.311 11:52:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:07.311 11:52:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:07.311 11:52:35 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:29:07.311 11:52:35 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:29:07.311 11:52:35 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:29:07.311 11:52:35 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:29:07.311 11:52:35 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:29:07.311 11:52:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:07.311 11:52:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:07.311 11:52:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:29:07.311 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:29:07.311 11:52:35 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:29:07.311 11:52:35 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:29:07.311 11:52:35 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:07.311 11:52:35 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:07.311 11:52:35 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:29:07.311 11:52:35 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:29:07.311 11:52:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:07.311 11:52:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:29:07.311 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:29:07.311 11:52:35 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:29:07.311 11:52:35 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:29:07.311 11:52:35 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:07.311 11:52:35 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:07.311 11:52:35 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:29:07.311 11:52:35 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:29:07.311 11:52:35 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:07.311 11:52:35 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:29:07.311 11:52:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:29:07.311 11:52:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.311 11:52:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:07.311 11:52:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.311 11:52:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:29:07.311 Found net devices under 0000:d9:00.0: mlx_0_0 00:29:07.311 11:52:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.311 11:52:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:07.311 11:52:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.311 11:52:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:07.311 11:52:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.311 11:52:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:29:07.311 Found net devices under 0000:d9:00.1: mlx_0_1 00:29:07.311 11:52:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.311 11:52:35 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:07.311 11:52:35 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:07.311 11:52:35 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:07.311 11:52:35 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:29:07.311 11:52:35 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:29:07.311 11:52:35 -- nvmf/common.sh@408 -- # rdma_device_init 00:29:07.311 11:52:35 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:29:07.311 11:52:35 -- nvmf/common.sh@57 -- # uname 00:29:07.311 11:52:35 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:29:07.311 11:52:35 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:29:07.311 11:52:35 -- nvmf/common.sh@62 -- # modprobe ib_core 00:29:07.311 11:52:35 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:29:07.311 11:52:35 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:29:07.311 11:52:35 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:29:07.311 11:52:35 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:29:07.311 11:52:35 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:29:07.311 11:52:35 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:29:07.311 11:52:35 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:07.311 11:52:35 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:29:07.311 11:52:35 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:07.311 11:52:35 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:29:07.311 11:52:35 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:29:07.311 11:52:35 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:07.311 11:52:35 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:29:07.311 11:52:35 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:07.311 11:52:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:07.311 11:52:35 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:07.311 11:52:35 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:29:07.311 11:52:35 -- nvmf/common.sh@104 -- # continue 2 00:29:07.311 11:52:35 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:07.311 11:52:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:07.311 11:52:35 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:07.311 11:52:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:07.311 11:52:35 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:07.311 11:52:35 -- 
nvmf/common.sh@103 -- # echo mlx_0_1 00:29:07.311 11:52:35 -- nvmf/common.sh@104 -- # continue 2 00:29:07.311 11:52:35 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:29:07.311 11:52:35 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:29:07.311 11:52:35 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:29:07.311 11:52:35 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:29:07.311 11:52:35 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:07.311 11:52:35 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:07.311 11:52:35 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:29:07.311 11:52:35 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:29:07.311 11:52:35 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:29:07.311 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:07.311 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:29:07.311 altname enp217s0f0np0 00:29:07.311 altname ens818f0np0 00:29:07.311 inet 192.168.100.8/24 scope global mlx_0_0 00:29:07.311 valid_lft forever preferred_lft forever 00:29:07.311 11:52:35 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:29:07.311 11:52:35 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:29:07.311 11:52:35 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:29:07.311 11:52:35 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:29:07.311 11:52:35 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:07.311 11:52:35 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:07.311 11:52:35 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:29:07.311 11:52:35 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:29:07.311 11:52:35 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:29:07.311 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:07.311 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:29:07.311 altname enp217s0f1np1 00:29:07.311 altname ens818f1np1 00:29:07.311 inet 192.168.100.9/24 scope global mlx_0_1 00:29:07.311 valid_lft forever preferred_lft forever 00:29:07.311 11:52:35 -- nvmf/common.sh@410 -- # return 0 00:29:07.311 11:52:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:07.311 11:52:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:07.311 11:52:35 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:29:07.311 11:52:35 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:29:07.311 11:52:35 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:29:07.311 11:52:35 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:07.311 11:52:35 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:29:07.311 11:52:35 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:29:07.311 11:52:35 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:07.311 11:52:35 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:29:07.311 11:52:35 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:07.311 11:52:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:07.311 11:52:35 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:07.311 11:52:35 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:29:07.311 11:52:35 -- nvmf/common.sh@104 -- # continue 2 00:29:07.311 11:52:35 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:07.311 11:52:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:07.311 11:52:35 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:07.311 11:52:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:07.311 11:52:35 
-- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:07.311 11:52:35 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:29:07.311 11:52:35 -- nvmf/common.sh@104 -- # continue 2 00:29:07.311 11:52:35 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:29:07.311 11:52:35 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:29:07.311 11:52:35 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:29:07.311 11:52:35 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:29:07.311 11:52:35 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:07.311 11:52:35 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:07.311 11:52:35 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:29:07.311 11:52:35 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:29:07.311 11:52:35 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:29:07.311 11:52:35 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:07.311 11:52:35 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:07.311 11:52:35 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:29:07.311 11:52:35 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:29:07.311 192.168.100.9' 00:29:07.311 11:52:35 -- nvmf/common.sh@445 -- # head -n 1 00:29:07.311 11:52:35 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:29:07.311 192.168.100.9' 00:29:07.311 11:52:35 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:07.311 11:52:35 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:29:07.311 192.168.100.9' 00:29:07.311 11:52:35 -- nvmf/common.sh@446 -- # tail -n +2 00:29:07.311 11:52:35 -- nvmf/common.sh@446 -- # head -n 1 00:29:07.311 11:52:35 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:07.311 11:52:35 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:29:07.311 11:52:35 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:07.312 11:52:35 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:29:07.312 11:52:35 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:29:07.312 11:52:35 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:29:07.312 11:52:35 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:07.312 11:52:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:07.312 11:52:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:07.312 11:52:35 -- common/autotest_common.sh@10 -- # set +x 00:29:07.312 ************************************ 00:29:07.312 START TEST nvmf_target_disconnect_tc1 00:29:07.312 ************************************ 00:29:07.312 11:52:35 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc1 00:29:07.312 11:52:35 -- host/target_disconnect.sh@32 -- # set +e 00:29:07.312 11:52:35 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:29:07.312 EAL: No free 2048 kB hugepages reported on node 1 00:29:07.312 [2024-07-21 11:52:36.067270] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:07.312 [2024-07-21 11:52:36.067392] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:07.312 [2024-07-21 11:52:36.067434] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d70c0 00:29:07.878 [2024-07-21 11:52:37.071478] nvme_qpair.c: 
804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:07.878 [2024-07-21 11:52:37.071535] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:29:07.878 [2024-07-21 11:52:37.071577] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:29:07.878 [2024-07-21 11:52:37.071643] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:07.878 [2024-07-21 11:52:37.071674] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:07.878 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:29:07.878 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:07.878 Initializing NVMe Controllers 00:29:07.878 11:52:37 -- host/target_disconnect.sh@33 -- # trap - ERR 00:29:07.878 11:52:37 -- host/target_disconnect.sh@33 -- # print_backtrace 00:29:07.878 11:52:37 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:29:07.878 11:52:37 -- common/autotest_common.sh@1132 -- # return 0 00:29:07.878 11:52:37 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:29:07.878 11:52:37 -- host/target_disconnect.sh@41 -- # set -e 00:29:07.878 00:29:07.878 real 0m1.141s 00:29:07.878 user 0m0.857s 00:29:07.878 sys 0m0.273s 00:29:07.878 11:52:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:07.878 11:52:37 -- common/autotest_common.sh@10 -- # set +x 00:29:07.878 ************************************ 00:29:07.878 END TEST nvmf_target_disconnect_tc1 00:29:07.878 ************************************ 00:29:07.878 11:52:37 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:07.878 11:52:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:07.878 11:52:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:07.878 11:52:37 -- common/autotest_common.sh@10 -- # set +x 00:29:07.878 ************************************ 00:29:07.878 START TEST nvmf_target_disconnect_tc2 00:29:07.878 ************************************ 00:29:07.878 11:52:37 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc2 00:29:07.878 11:52:37 -- host/target_disconnect.sh@45 -- # disconnect_init 192.168.100.8 00:29:07.878 11:52:37 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:07.878 11:52:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:07.878 11:52:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:07.878 11:52:37 -- common/autotest_common.sh@10 -- # set +x 00:29:07.878 11:52:37 -- nvmf/common.sh@469 -- # nvmfpid=2521111 00:29:07.878 11:52:37 -- nvmf/common.sh@470 -- # waitforlisten 2521111 00:29:07.879 11:52:37 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:07.879 11:52:37 -- common/autotest_common.sh@819 -- # '[' -z 2521111 ']' 00:29:07.879 11:52:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.879 11:52:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:07.879 11:52:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
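tc1 above is purely a negative test: the reconnect example is pointed at 192.168.100.8:4420 before any target is listening on the ConnectX port, the RDMA CM answers with RDMA_CM_EVENT_REJECTED instead of RDMA_CM_EVENT_ESTABLISHED, and spdk_nvme_probe() fails, which is exactly the outcome the test asserts before moving on to tc2. A minimal sketch of that pattern as it appears in the trace (the path variable and rc are illustrative, not quoted from the suite):

    set +e                 # a failed probe is the pass condition for tc1
    "$rootdir/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
    rc=$?
    set -e
    [ "$rc" -eq 1 ]        # anything other than a clean failure fails the test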
00:29:07.879 11:52:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:07.879 11:52:37 -- common/autotest_common.sh@10 -- # set +x 00:29:07.879 [2024-07-21 11:52:37.184604] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:07.879 [2024-07-21 11:52:37.184663] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:07.879 EAL: No free 2048 kB hugepages reported on node 1 00:29:07.879 [2024-07-21 11:52:37.283438] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:08.137 [2024-07-21 11:52:37.321605] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:08.137 [2024-07-21 11:52:37.321719] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:08.137 [2024-07-21 11:52:37.321729] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:08.137 [2024-07-21 11:52:37.321737] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:08.137 [2024-07-21 11:52:37.321848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:29:08.137 [2024-07-21 11:52:37.321958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:29:08.137 [2024-07-21 11:52:37.322046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:08.137 [2024-07-21 11:52:37.322047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:29:08.702 11:52:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:08.702 11:52:37 -- common/autotest_common.sh@852 -- # return 0 00:29:08.702 11:52:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:08.702 11:52:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:08.702 11:52:37 -- common/autotest_common.sh@10 -- # set +x 00:29:08.702 11:52:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:08.702 11:52:38 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:08.702 11:52:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:08.702 11:52:38 -- common/autotest_common.sh@10 -- # set +x 00:29:08.702 Malloc0 00:29:08.702 11:52:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:08.702 11:52:38 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:29:08.702 11:52:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:08.702 11:52:38 -- common/autotest_common.sh@10 -- # set +x 00:29:08.702 [2024-07-21 11:52:38.076123] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x111e7d0/0x112ab40) succeed. 00:29:08.702 [2024-07-21 11:52:38.086522] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x111fdc0/0x11cac40) succeed. 
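For tc2 the target side comes up for real: nvmf_tgt is pinned to cores 4-7 (-m 0xF0, hence the four reactors above), and once the RPC socket answers, the suite provisions a malloc-backed namespace behind an RDMA listener; the subsystem, namespace, and listener calls continue just below. A hedged sketch of the same bring-up using the stock rpc.py client (paths illustrative, flags taken from the rpc_cmd trace):

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &           # reactors on cores 4-7
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MiB bdev, 512 B blocks
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420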
00:29:08.960 11:52:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:08.960 11:52:38 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:08.960 11:52:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:08.960 11:52:38 -- common/autotest_common.sh@10 -- # set +x 00:29:08.960 11:52:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:08.960 11:52:38 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:08.960 11:52:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:08.960 11:52:38 -- common/autotest_common.sh@10 -- # set +x 00:29:08.961 11:52:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:08.961 11:52:38 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:08.961 11:52:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:08.961 11:52:38 -- common/autotest_common.sh@10 -- # set +x 00:29:08.961 [2024-07-21 11:52:38.230213] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:08.961 11:52:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:08.961 11:52:38 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:29:08.961 11:52:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:08.961 11:52:38 -- common/autotest_common.sh@10 -- # set +x 00:29:08.961 11:52:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:08.961 11:52:38 -- host/target_disconnect.sh@50 -- # reconnectpid=2521395 00:29:08.961 11:52:38 -- host/target_disconnect.sh@52 -- # sleep 2 00:29:08.961 11:52:38 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:29:08.961 EAL: No free 2048 kB hugepages reported on node 1 00:29:10.933 11:52:40 -- host/target_disconnect.sh@53 -- # kill -9 2521111 00:29:10.933 11:52:40 -- host/target_disconnect.sh@55 -- # sleep 2 00:29:12.311 Read completed with error (sct=0, sc=8) 00:29:12.311 starting I/O failed 00:29:12.311 Write completed with error (sct=0, sc=8) 00:29:12.311 starting I/O failed 00:29:12.311 Write completed with error (sct=0, sc=8) 00:29:12.311 starting I/O failed 00:29:12.311 Write completed with error (sct=0, sc=8) 00:29:12.311 starting I/O failed 00:29:12.311 Write completed with error (sct=0, sc=8) 00:29:12.311 starting I/O failed 00:29:12.311 Write completed with error (sct=0, sc=8) 00:29:12.311 starting I/O failed 00:29:12.311 Read completed with error (sct=0, sc=8) 00:29:12.311 starting I/O failed 00:29:12.311 Write completed with error (sct=0, sc=8) 00:29:12.311 starting I/O failed 00:29:12.311 Write completed with error (sct=0, sc=8) 00:29:12.311 starting I/O failed 00:29:12.311 Write completed with error (sct=0, sc=8) 00:29:12.311 starting I/O failed 00:29:12.311 Read completed with error (sct=0, sc=8) 00:29:12.311 starting I/O failed 00:29:12.311 Write completed with error (sct=0, sc=8) 00:29:12.311 starting I/O failed 00:29:12.311 Write completed with error (sct=0, sc=8) 00:29:12.311 starting I/O failed 00:29:12.311 Read completed with error (sct=0, sc=8) 00:29:12.311 starting I/O failed 00:29:12.311 Read completed with error (sct=0, sc=8) 00:29:12.311 starting I/O failed 00:29:12.311 Read completed with 
error (sct=0, sc=8) 00:29:12.311 starting I/O failed 00:29:12.311 Read completed with error (sct=0, sc=8) 00:29:12.311 starting I/O failed 00:29:12.311 Write completed with error (sct=0, sc=8) 00:29:12.311 starting I/O failed 00:29:12.311 Write completed with error (sct=0, sc=8) 00:29:12.311 starting I/O failed 00:29:12.311 Read completed with error (sct=0, sc=8) 00:29:12.311 starting I/O failed 00:29:12.311 Read completed with error (sct=0, sc=8) 00:29:12.311 starting I/O failed 00:29:12.311 Read completed with error (sct=0, sc=8) 00:29:12.311 starting I/O failed 00:29:12.311 Read completed with error (sct=0, sc=8) 00:29:12.311 starting I/O failed 00:29:12.311 Read completed with error (sct=0, sc=8) 00:29:12.311 starting I/O failed 00:29:12.311 Read completed with error (sct=0, sc=8) 00:29:12.311 starting I/O failed 00:29:12.311 Write completed with error (sct=0, sc=8) 00:29:12.311 starting I/O failed 00:29:12.311 Write completed with error (sct=0, sc=8) 00:29:12.311 starting I/O failed 00:29:12.311 Read completed with error (sct=0, sc=8) 00:29:12.311 starting I/O failed 00:29:12.311 Write completed with error (sct=0, sc=8) 00:29:12.311 starting I/O failed 00:29:12.311 Read completed with error (sct=0, sc=8) 00:29:12.311 starting I/O failed 00:29:12.311 Write completed with error (sct=0, sc=8) 00:29:12.311 starting I/O failed 00:29:12.311 Write completed with error (sct=0, sc=8) 00:29:12.311 starting I/O failed 00:29:12.311 [2024-07-21 11:52:41.440459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.879 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 2521111 Killed "${NVMF_APP[@]}" "$@" 00:29:12.879 11:52:42 -- host/target_disconnect.sh@56 -- # disconnect_init 192.168.100.8 00:29:12.879 11:52:42 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:12.879 11:52:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:12.879 11:52:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:12.879 11:52:42 -- common/autotest_common.sh@10 -- # set +x 00:29:12.879 11:52:42 -- nvmf/common.sh@469 -- # nvmfpid=2521972 00:29:12.879 11:52:42 -- nvmf/common.sh@470 -- # waitforlisten 2521972 00:29:12.879 11:52:42 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:12.879 11:52:42 -- common/autotest_common.sh@819 -- # '[' -z 2521972 ']' 00:29:12.879 11:52:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:12.879 11:52:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:12.879 11:52:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:12.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:12.879 11:52:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:12.879 11:52:42 -- common/autotest_common.sh@10 -- # set +x 00:29:13.139 [2024-07-21 11:52:42.310386] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
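The 32 "completed with error (sct=0, sc=8)" lines above are the example's -q 32 queue being drained after the suite hard-kills the first target (pid 2521111): sct=0/sc=8 is ABORTED - SQ DELETION (0x08) in the NVMe generic command status set, which is how outstanding requests are retired once a qpair is torn down. The fault injection itself reduces to the following (the PID variable stands in for the value captured earlier in the trace):

    kill -9 "$nvmfpid"     # hard-kill nvmf_tgt while the reconnect example is mid-workload
    sleep 2                # every in-flight I/O fails back with sct=0, sc=8

The replacement target started below reuses the same address and subsystem, so the host has something to reconnect to.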
00:29:13.139 [2024-07-21 11:52:42.310435] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:13.139 EAL: No free 2048 kB hugepages reported on node 1 00:29:13.139 [2024-07-21 11:52:42.411837] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:13.139 Read completed with error (sct=0, sc=8) 00:29:13.139 starting I/O failed 00:29:13.139 Write completed with error (sct=0, sc=8) 00:29:13.139 starting I/O failed 00:29:13.139 Read completed with error (sct=0, sc=8) 00:29:13.139 starting I/O failed 00:29:13.139 Write completed with error (sct=0, sc=8) 00:29:13.139 starting I/O failed 00:29:13.139 Write completed with error (sct=0, sc=8) 00:29:13.139 starting I/O failed 00:29:13.139 Write completed with error (sct=0, sc=8) 00:29:13.139 starting I/O failed 00:29:13.139 Write completed with error (sct=0, sc=8) 00:29:13.139 starting I/O failed 00:29:13.139 Write completed with error (sct=0, sc=8) 00:29:13.139 starting I/O failed 00:29:13.139 Write completed with error (sct=0, sc=8) 00:29:13.139 starting I/O failed 00:29:13.139 Read completed with error (sct=0, sc=8) 00:29:13.139 starting I/O failed 00:29:13.139 Read completed with error (sct=0, sc=8) 00:29:13.139 starting I/O failed 00:29:13.139 Read completed with error (sct=0, sc=8) 00:29:13.139 starting I/O failed 00:29:13.139 Write completed with error (sct=0, sc=8) 00:29:13.139 starting I/O failed 00:29:13.139 Read completed with error (sct=0, sc=8) 00:29:13.139 starting I/O failed 00:29:13.139 Write completed with error (sct=0, sc=8) 00:29:13.139 starting I/O failed 00:29:13.139 Write completed with error (sct=0, sc=8) 00:29:13.139 starting I/O failed 00:29:13.139 Read completed with error (sct=0, sc=8) 00:29:13.139 starting I/O failed 00:29:13.139 Write completed with error (sct=0, sc=8) 00:29:13.139 starting I/O failed 00:29:13.139 Write completed with error (sct=0, sc=8) 00:29:13.139 starting I/O failed 00:29:13.139 Read completed with error (sct=0, sc=8) 00:29:13.139 starting I/O failed 00:29:13.139 Write completed with error (sct=0, sc=8) 00:29:13.139 starting I/O failed 00:29:13.139 Read completed with error (sct=0, sc=8) 00:29:13.139 starting I/O failed 00:29:13.139 Read completed with error (sct=0, sc=8) 00:29:13.139 starting I/O failed 00:29:13.139 Write completed with error (sct=0, sc=8) 00:29:13.139 starting I/O failed 00:29:13.139 Read completed with error (sct=0, sc=8) 00:29:13.139 starting I/O failed 00:29:13.139 Write completed with error (sct=0, sc=8) 00:29:13.139 starting I/O failed 00:29:13.139 Read completed with error (sct=0, sc=8) 00:29:13.139 starting I/O failed 00:29:13.139 Write completed with error (sct=0, sc=8) 00:29:13.139 starting I/O failed 00:29:13.139 Read completed with error (sct=0, sc=8) 00:29:13.139 starting I/O failed 00:29:13.139 Write completed with error (sct=0, sc=8) 00:29:13.139 starting I/O failed 00:29:13.139 Write completed with error (sct=0, sc=8) 00:29:13.139 starting I/O failed 00:29:13.139 Write completed with error (sct=0, sc=8) 00:29:13.139 starting I/O failed 00:29:13.139 [2024-07-21 11:52:42.445660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.139 [2024-07-21 11:52:42.449843] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:13.139 [2024-07-21 11:52:42.449941] app.c: 
488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:13.139 [2024-07-21 11:52:42.449951] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:13.139 [2024-07-21 11:52:42.449961] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:13.139 [2024-07-21 11:52:42.450076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:29:13.139 [2024-07-21 11:52:42.450186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:29:13.139 [2024-07-21 11:52:42.450294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:13.139 [2024-07-21 11:52:42.450294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:29:13.710 11:52:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:13.710 11:52:43 -- common/autotest_common.sh@852 -- # return 0 00:29:13.710 11:52:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:13.710 11:52:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:13.710 11:52:43 -- common/autotest_common.sh@10 -- # set +x 00:29:13.969 11:52:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:13.969 11:52:43 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:13.969 11:52:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:13.969 11:52:43 -- common/autotest_common.sh@10 -- # set +x 00:29:13.969 Malloc0 00:29:13.969 11:52:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:13.969 11:52:43 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:29:13.969 11:52:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:13.969 11:52:43 -- common/autotest_common.sh@10 -- # set +x 00:29:13.969 [2024-07-21 11:52:43.196541] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x16527d0/0x165eb40) succeed. 00:29:13.969 [2024-07-21 11:52:43.208328] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1653dc0/0x16fec40) succeed. 
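The replacement target (pid 2521972) is configured identically to the first; the fresh mlx5_0/mlx5_1 device handles in the create_ib_device notices confirm this is a new process, not a recovered one. The start-and-wait idiom the harness leans on looks roughly like this (a sketch: waitforlisten in the suite polls the RPC socket, and the loop below is an illustrative stand-in for it):

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5          # ready once /var/tmp/spdk.sock answers RPCs
    done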
00:29:13.969 11:52:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:13.969 11:52:43 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:13.969 11:52:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:13.969 11:52:43 -- common/autotest_common.sh@10 -- # set +x 00:29:13.969 11:52:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:13.969 11:52:43 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:13.969 11:52:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:13.969 11:52:43 -- common/autotest_common.sh@10 -- # set +x 00:29:13.969 11:52:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:13.969 11:52:43 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:13.969 11:52:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:13.969 11:52:43 -- common/autotest_common.sh@10 -- # set +x 00:29:13.969 [2024-07-21 11:52:43.356458] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:13.969 11:52:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:13.969 11:52:43 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:29:13.969 11:52:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:13.969 11:52:43 -- common/autotest_common.sh@10 -- # set +x 00:29:13.969 11:52:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:13.969 11:52:43 -- host/target_disconnect.sh@58 -- # wait 2521395 00:29:14.227 Read completed with error (sct=0, sc=8) 00:29:14.227 starting I/O failed 00:29:14.227 Write completed with error (sct=0, sc=8) 00:29:14.227 starting I/O failed 00:29:14.227 Read completed with error (sct=0, sc=8) 00:29:14.227 starting I/O failed 00:29:14.227 Write completed with error (sct=0, sc=8) 00:29:14.227 starting I/O failed 00:29:14.227 Read completed with error (sct=0, sc=8) 00:29:14.227 starting I/O failed 00:29:14.227 Write completed with error (sct=0, sc=8) 00:29:14.227 starting I/O failed 00:29:14.227 Read completed with error (sct=0, sc=8) 00:29:14.227 starting I/O failed 00:29:14.227 Read completed with error (sct=0, sc=8) 00:29:14.227 starting I/O failed 00:29:14.227 Read completed with error (sct=0, sc=8) 00:29:14.227 starting I/O failed 00:29:14.227 Write completed with error (sct=0, sc=8) 00:29:14.227 starting I/O failed 00:29:14.227 Write completed with error (sct=0, sc=8) 00:29:14.227 starting I/O failed 00:29:14.227 Write completed with error (sct=0, sc=8) 00:29:14.227 starting I/O failed 00:29:14.227 Read completed with error (sct=0, sc=8) 00:29:14.227 starting I/O failed 00:29:14.227 Write completed with error (sct=0, sc=8) 00:29:14.227 starting I/O failed 00:29:14.227 Read completed with error (sct=0, sc=8) 00:29:14.227 starting I/O failed 00:29:14.227 Write completed with error (sct=0, sc=8) 00:29:14.227 starting I/O failed 00:29:14.227 Read completed with error (sct=0, sc=8) 00:29:14.227 starting I/O failed 00:29:14.227 Read completed with error (sct=0, sc=8) 00:29:14.227 starting I/O failed 00:29:14.227 Read completed with error (sct=0, sc=8) 00:29:14.227 starting I/O failed 00:29:14.227 Read completed with error (sct=0, sc=8) 00:29:14.227 starting I/O failed 00:29:14.227 Read completed with error (sct=0, sc=8) 00:29:14.227 starting I/O failed 00:29:14.227 Read completed with 
error (sct=0, sc=8) 00:29:14.227 starting I/O failed 00:29:14.227 Read completed with error (sct=0, sc=8) 00:29:14.227 starting I/O failed 00:29:14.227 Write completed with error (sct=0, sc=8) 00:29:14.227 starting I/O failed 00:29:14.227 Write completed with error (sct=0, sc=8) 00:29:14.227 starting I/O failed 00:29:14.227 Read completed with error (sct=0, sc=8) 00:29:14.227 starting I/O failed 00:29:14.227 Write completed with error (sct=0, sc=8) 00:29:14.227 starting I/O failed 00:29:14.227 Read completed with error (sct=0, sc=8) 00:29:14.227 starting I/O failed 00:29:14.227 Write completed with error (sct=0, sc=8) 00:29:14.227 starting I/O failed 00:29:14.227 Read completed with error (sct=0, sc=8) 00:29:14.227 starting I/O failed 00:29:14.227 Write completed with error (sct=0, sc=8) 00:29:14.227 starting I/O failed 00:29:14.227 Read completed with error (sct=0, sc=8) 00:29:14.227 starting I/O failed 00:29:14.227 [2024-07-21 11:52:43.450648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.227 [2024-07-21 11:52:43.456545] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.227 [2024-07-21 11:52:43.456599] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.227 [2024-07-21 11:52:43.456620] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.227 [2024-07-21 11:52:43.456636] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.227 [2024-07-21 11:52:43.456653] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.227 [2024-07-21 11:52:43.466674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.227 qpair failed and we were unable to recover it. 00:29:14.227 [2024-07-21 11:52:43.476497] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.227 [2024-07-21 11:52:43.476537] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.227 [2024-07-21 11:52:43.476555] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.227 [2024-07-21 11:52:43.476565] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.227 [2024-07-21 11:52:43.476573] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.227 [2024-07-21 11:52:43.486965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.227 qpair failed and we were unable to recover it. 
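From here the log settles into one repeating block per reconnect attempt: the example re-issues a fabrics CONNECT that still references controller ID 0x1, the restarted target knows no such controller ("Unknown controller ID 0x1"), and the CONNECT completes with sct 1, sc 130, i.e. command-specific status 0x82, the fabrics CONNECT invalid-parameters code, after which the qpair is declared unrecoverable. When triaging a saved copy of a console log like this one, two illustrative one-liners ("build.log" is a hypothetical file name) give the attempt count at a glance:

    grep -c 'Unknown controller ID 0x1' build.log                      # one per CONNECT attempt
    grep -c 'qpair failed and we were unable to recover it' build.log  # one per abandoned qpair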
00:29:14.227 [2024-07-21 11:52:43.496507] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.227 [2024-07-21 11:52:43.496546] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.227 [2024-07-21 11:52:43.496563] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.227 [2024-07-21 11:52:43.496573] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.227 [2024-07-21 11:52:43.496581] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.227 [2024-07-21 11:52:43.506883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.227 qpair failed and we were unable to recover it. 00:29:14.227 [2024-07-21 11:52:43.516516] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.227 [2024-07-21 11:52:43.516559] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.227 [2024-07-21 11:52:43.516577] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.227 [2024-07-21 11:52:43.516587] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.227 [2024-07-21 11:52:43.516596] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.227 [2024-07-21 11:52:43.527081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.227 qpair failed and we were unable to recover it. 00:29:14.227 [2024-07-21 11:52:43.536641] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.227 [2024-07-21 11:52:43.536682] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.227 [2024-07-21 11:52:43.536699] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.227 [2024-07-21 11:52:43.536708] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.227 [2024-07-21 11:52:43.536716] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.227 [2024-07-21 11:52:43.547151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.227 qpair failed and we were unable to recover it. 
00:29:14.227 [2024-07-21 11:52:43.556725] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.227 [2024-07-21 11:52:43.556769] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.227 [2024-07-21 11:52:43.556787] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.227 [2024-07-21 11:52:43.556796] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.227 [2024-07-21 11:52:43.556804] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.227 [2024-07-21 11:52:43.567225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.227 qpair failed and we were unable to recover it. 00:29:14.227 [2024-07-21 11:52:43.576744] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.227 [2024-07-21 11:52:43.576782] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.227 [2024-07-21 11:52:43.576802] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.227 [2024-07-21 11:52:43.576812] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.227 [2024-07-21 11:52:43.576820] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.227 [2024-07-21 11:52:43.587317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.227 qpair failed and we were unable to recover it. 00:29:14.227 [2024-07-21 11:52:43.596816] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.227 [2024-07-21 11:52:43.596856] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.227 [2024-07-21 11:52:43.596872] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.227 [2024-07-21 11:52:43.596882] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.227 [2024-07-21 11:52:43.596890] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.227 [2024-07-21 11:52:43.607339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.227 qpair failed and we were unable to recover it. 
00:29:14.227 [2024-07-21 11:52:43.616974] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.227 [2024-07-21 11:52:43.617020] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.227 [2024-07-21 11:52:43.617037] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.227 [2024-07-21 11:52:43.617046] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.227 [2024-07-21 11:52:43.617055] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.227 [2024-07-21 11:52:43.627533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.227 qpair failed and we were unable to recover it. 00:29:14.227 [2024-07-21 11:52:43.637049] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.227 [2024-07-21 11:52:43.637087] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.227 [2024-07-21 11:52:43.637103] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.227 [2024-07-21 11:52:43.637112] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.227 [2024-07-21 11:52:43.637121] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.227 [2024-07-21 11:52:43.647378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.227 qpair failed and we were unable to recover it. 00:29:14.483 [2024-07-21 11:52:43.657084] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.483 [2024-07-21 11:52:43.657125] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.483 [2024-07-21 11:52:43.657141] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.483 [2024-07-21 11:52:43.657150] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.483 [2024-07-21 11:52:43.657163] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.483 [2024-07-21 11:52:43.667373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.483 qpair failed and we were unable to recover it. 
00:29:14.483 [2024-07-21 11:52:43.677135] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.483 [2024-07-21 11:52:43.677174] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.483 [2024-07-21 11:52:43.677191] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.483 [2024-07-21 11:52:43.677200] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.483 [2024-07-21 11:52:43.677209] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.483 [2024-07-21 11:52:43.687595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.483 qpair failed and we were unable to recover it. 00:29:14.483 [2024-07-21 11:52:43.697229] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.483 [2024-07-21 11:52:43.697267] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.483 [2024-07-21 11:52:43.697283] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.483 [2024-07-21 11:52:43.697292] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.483 [2024-07-21 11:52:43.697301] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.483 [2024-07-21 11:52:43.707483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.483 qpair failed and we were unable to recover it. 00:29:14.483 [2024-07-21 11:52:43.717294] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.483 [2024-07-21 11:52:43.717339] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.484 [2024-07-21 11:52:43.717355] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.484 [2024-07-21 11:52:43.717365] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.484 [2024-07-21 11:52:43.717373] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.484 [2024-07-21 11:52:43.727526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.484 qpair failed and we were unable to recover it. 
00:29:14.484 [2024-07-21 11:52:43.737208] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.484 [2024-07-21 11:52:43.737251] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.484 [2024-07-21 11:52:43.737267] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.484 [2024-07-21 11:52:43.737277] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.484 [2024-07-21 11:52:43.737285] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.484 [2024-07-21 11:52:43.747683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.484 qpair failed and we were unable to recover it. 00:29:14.484 [2024-07-21 11:52:43.757271] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.484 [2024-07-21 11:52:43.757313] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.484 [2024-07-21 11:52:43.757329] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.484 [2024-07-21 11:52:43.757338] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.484 [2024-07-21 11:52:43.757347] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.484 [2024-07-21 11:52:43.767659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.484 qpair failed and we were unable to recover it. 00:29:14.484 [2024-07-21 11:52:43.777326] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.484 [2024-07-21 11:52:43.777363] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.484 [2024-07-21 11:52:43.777380] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.484 [2024-07-21 11:52:43.777389] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.484 [2024-07-21 11:52:43.777398] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.484 [2024-07-21 11:52:43.787842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.484 qpair failed and we were unable to recover it. 
00:29:14.484 [2024-07-21 11:52:43.797377] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.484 [2024-07-21 11:52:43.797412] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.484 [2024-07-21 11:52:43.797428] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.484 [2024-07-21 11:52:43.797438] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.484 [2024-07-21 11:52:43.797446] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.484 [2024-07-21 11:52:43.807862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.484 qpair failed and we were unable to recover it. 00:29:14.484 [2024-07-21 11:52:43.817491] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.484 [2024-07-21 11:52:43.817524] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.484 [2024-07-21 11:52:43.817541] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.484 [2024-07-21 11:52:43.817551] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.484 [2024-07-21 11:52:43.817559] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.484 [2024-07-21 11:52:43.827802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.484 qpair failed and we were unable to recover it. 00:29:14.484 [2024-07-21 11:52:43.837501] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.484 [2024-07-21 11:52:43.837541] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.484 [2024-07-21 11:52:43.837558] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.484 [2024-07-21 11:52:43.837570] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.484 [2024-07-21 11:52:43.837581] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.484 [2024-07-21 11:52:43.847847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.484 qpair failed and we were unable to recover it. 
00:29:14.484 [2024-07-21 11:52:43.857504] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.484 [2024-07-21 11:52:43.857544] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.484 [2024-07-21 11:52:43.857560] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.484 [2024-07-21 11:52:43.857569] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.484 [2024-07-21 11:52:43.857578] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:29:14.484 [2024-07-21 11:52:43.867963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.484 qpair failed and we were unable to recover it.
[... the same seven-line CONNECT failure sequence repeats, identical except for timestamps and offsets, for every subsequent I/O qpair attempt from 11:52:43.877 through 11:52:45.231 (elapsed 00:29:14.484 - 00:29:16.032), each attempt ending in "qpair failed and we were unable to recover it." ...]
00:29:16.032 [2024-07-21 11:52:45.241693] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.032 [2024-07-21 11:52:45.241732] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.032 [2024-07-21 11:52:45.241747] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.032 [2024-07-21 11:52:45.241757] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.032 [2024-07-21 11:52:45.241765] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.032 [2024-07-21 11:52:45.252294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.032 qpair failed and we were unable to recover it. 00:29:16.032 [2024-07-21 11:52:45.261779] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.032 [2024-07-21 11:52:45.261815] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.032 [2024-07-21 11:52:45.261832] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.032 [2024-07-21 11:52:45.261846] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.032 [2024-07-21 11:52:45.261857] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.032 [2024-07-21 11:52:45.272194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.032 qpair failed and we were unable to recover it. 00:29:16.032 [2024-07-21 11:52:45.281807] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.032 [2024-07-21 11:52:45.281843] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.032 [2024-07-21 11:52:45.281860] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.033 [2024-07-21 11:52:45.281869] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.033 [2024-07-21 11:52:45.281878] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.033 [2024-07-21 11:52:45.292431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.033 qpair failed and we were unable to recover it. 
00:29:16.033 [2024-07-21 11:52:45.301871] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.033 [2024-07-21 11:52:45.301915] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.033 [2024-07-21 11:52:45.301932] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.033 [2024-07-21 11:52:45.301941] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.033 [2024-07-21 11:52:45.301949] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.033 [2024-07-21 11:52:45.312236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.033 qpair failed and we were unable to recover it. 00:29:16.033 [2024-07-21 11:52:45.321963] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.033 [2024-07-21 11:52:45.322005] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.033 [2024-07-21 11:52:45.322021] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.033 [2024-07-21 11:52:45.322030] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.033 [2024-07-21 11:52:45.322039] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.033 [2024-07-21 11:52:45.332517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.033 qpair failed and we were unable to recover it. 00:29:16.033 [2024-07-21 11:52:45.342040] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.033 [2024-07-21 11:52:45.342075] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.033 [2024-07-21 11:52:45.342090] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.033 [2024-07-21 11:52:45.342103] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.033 [2024-07-21 11:52:45.342112] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.033 [2024-07-21 11:52:45.352291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.033 qpair failed and we were unable to recover it. 
00:29:16.033 [2024-07-21 11:52:45.362133] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.033 [2024-07-21 11:52:45.362174] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.033 [2024-07-21 11:52:45.362190] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.033 [2024-07-21 11:52:45.362199] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.033 [2024-07-21 11:52:45.362208] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.033 [2024-07-21 11:52:45.372431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.033 qpair failed and we were unable to recover it. 00:29:16.033 [2024-07-21 11:52:45.382174] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.033 [2024-07-21 11:52:45.382212] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.033 [2024-07-21 11:52:45.382228] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.033 [2024-07-21 11:52:45.382237] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.033 [2024-07-21 11:52:45.382246] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.033 [2024-07-21 11:52:45.392520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.033 qpair failed and we were unable to recover it. 00:29:16.033 [2024-07-21 11:52:45.402295] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.033 [2024-07-21 11:52:45.402336] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.033 [2024-07-21 11:52:45.402353] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.033 [2024-07-21 11:52:45.402362] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.033 [2024-07-21 11:52:45.402371] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.033 [2024-07-21 11:52:45.412677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.033 qpair failed and we were unable to recover it. 
00:29:16.033 [2024-07-21 11:52:45.422241] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.033 [2024-07-21 11:52:45.422274] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.033 [2024-07-21 11:52:45.422290] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.033 [2024-07-21 11:52:45.422300] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.033 [2024-07-21 11:52:45.422310] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.033 [2024-07-21 11:52:45.432727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.033 qpair failed and we were unable to recover it. 00:29:16.033 [2024-07-21 11:52:45.442400] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.033 [2024-07-21 11:52:45.442438] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.033 [2024-07-21 11:52:45.442455] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.033 [2024-07-21 11:52:45.442464] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.033 [2024-07-21 11:52:45.442473] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.033 [2024-07-21 11:52:45.452749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.033 qpair failed and we were unable to recover it. 00:29:16.290 [2024-07-21 11:52:45.462397] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.290 [2024-07-21 11:52:45.462434] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.290 [2024-07-21 11:52:45.462451] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.290 [2024-07-21 11:52:45.462461] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.290 [2024-07-21 11:52:45.462469] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.290 [2024-07-21 11:52:45.472947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.290 qpair failed and we were unable to recover it. 
00:29:16.290 [2024-07-21 11:52:45.482383] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.290 [2024-07-21 11:52:45.482419] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.290 [2024-07-21 11:52:45.482435] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.290 [2024-07-21 11:52:45.482444] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.290 [2024-07-21 11:52:45.482453] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.290 [2024-07-21 11:52:45.492867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.290 qpair failed and we were unable to recover it. 00:29:16.290 [2024-07-21 11:52:45.502500] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.290 [2024-07-21 11:52:45.502540] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.290 [2024-07-21 11:52:45.502556] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.290 [2024-07-21 11:52:45.502565] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.290 [2024-07-21 11:52:45.502573] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.290 [2024-07-21 11:52:45.512818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.290 qpair failed and we were unable to recover it. 00:29:16.290 [2024-07-21 11:52:45.522670] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.290 [2024-07-21 11:52:45.522709] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.290 [2024-07-21 11:52:45.522728] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.290 [2024-07-21 11:52:45.522737] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.290 [2024-07-21 11:52:45.522746] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.290 [2024-07-21 11:52:45.533057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.290 qpair failed and we were unable to recover it. 
00:29:16.291 [2024-07-21 11:52:45.542581] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.291 [2024-07-21 11:52:45.542622] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.291 [2024-07-21 11:52:45.542644] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.291 [2024-07-21 11:52:45.542653] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.291 [2024-07-21 11:52:45.542662] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.291 [2024-07-21 11:52:45.553043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.291 qpair failed and we were unable to recover it. 00:29:16.291 [2024-07-21 11:52:45.562668] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.291 [2024-07-21 11:52:45.562711] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.291 [2024-07-21 11:52:45.562728] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.291 [2024-07-21 11:52:45.562737] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.291 [2024-07-21 11:52:45.562745] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.291 [2024-07-21 11:52:45.573473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.291 qpair failed and we were unable to recover it. 00:29:16.291 [2024-07-21 11:52:45.582695] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.291 [2024-07-21 11:52:45.582736] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.291 [2024-07-21 11:52:45.582752] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.291 [2024-07-21 11:52:45.582761] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.291 [2024-07-21 11:52:45.582770] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.291 [2024-07-21 11:52:45.593154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.291 qpair failed and we were unable to recover it. 
00:29:16.291 [2024-07-21 11:52:45.602840] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.291 [2024-07-21 11:52:45.602879] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.291 [2024-07-21 11:52:45.602895] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.291 [2024-07-21 11:52:45.602904] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.291 [2024-07-21 11:52:45.602915] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.291 [2024-07-21 11:52:45.613212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.291 qpair failed and we were unable to recover it. 00:29:16.291 [2024-07-21 11:52:45.622903] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.291 [2024-07-21 11:52:45.622948] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.291 [2024-07-21 11:52:45.622964] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.291 [2024-07-21 11:52:45.622974] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.291 [2024-07-21 11:52:45.622982] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.291 [2024-07-21 11:52:45.633269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.291 qpair failed and we were unable to recover it. 00:29:16.291 [2024-07-21 11:52:45.642847] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.291 [2024-07-21 11:52:45.642889] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.291 [2024-07-21 11:52:45.642905] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.291 [2024-07-21 11:52:45.642914] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.291 [2024-07-21 11:52:45.642923] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.291 [2024-07-21 11:52:45.653291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.291 qpair failed and we were unable to recover it. 
00:29:16.291 [2024-07-21 11:52:45.663046] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.291 [2024-07-21 11:52:45.663082] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.291 [2024-07-21 11:52:45.663098] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.291 [2024-07-21 11:52:45.663108] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.291 [2024-07-21 11:52:45.663117] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.291 [2024-07-21 11:52:45.673384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.291 qpair failed and we were unable to recover it. 00:29:16.291 [2024-07-21 11:52:45.683018] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.291 [2024-07-21 11:52:45.683059] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.291 [2024-07-21 11:52:45.683077] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.291 [2024-07-21 11:52:45.683086] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.291 [2024-07-21 11:52:45.683095] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.291 [2024-07-21 11:52:45.693581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.291 qpair failed and we were unable to recover it. 00:29:16.291 [2024-07-21 11:52:45.703139] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.291 [2024-07-21 11:52:45.703189] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.291 [2024-07-21 11:52:45.703207] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.291 [2024-07-21 11:52:45.703216] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.291 [2024-07-21 11:52:45.703225] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.549 [2024-07-21 11:52:45.713534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.549 qpair failed and we were unable to recover it. 
00:29:16.549 [2024-07-21 11:52:45.723114] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.549 [2024-07-21 11:52:45.723154] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.549 [2024-07-21 11:52:45.723171] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.549 [2024-07-21 11:52:45.723180] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.549 [2024-07-21 11:52:45.723189] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.549 [2024-07-21 11:52:45.733798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.549 qpair failed and we were unable to recover it. 00:29:16.549 [2024-07-21 11:52:45.743249] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.549 [2024-07-21 11:52:45.743287] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.549 [2024-07-21 11:52:45.743303] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.549 [2024-07-21 11:52:45.743312] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.549 [2024-07-21 11:52:45.743321] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.549 [2024-07-21 11:52:45.753655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.549 qpair failed and we were unable to recover it. 00:29:16.549 [2024-07-21 11:52:45.763284] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.549 [2024-07-21 11:52:45.763323] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.549 [2024-07-21 11:52:45.763339] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.549 [2024-07-21 11:52:45.763349] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.549 [2024-07-21 11:52:45.763358] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.550 [2024-07-21 11:52:45.773793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.550 qpair failed and we were unable to recover it. 
00:29:16.550 [2024-07-21 11:52:45.783429] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.550 [2024-07-21 11:52:45.783474] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.550 [2024-07-21 11:52:45.783491] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.550 [2024-07-21 11:52:45.783503] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.550 [2024-07-21 11:52:45.783512] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.550 [2024-07-21 11:52:45.793719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.550 qpair failed and we were unable to recover it. 00:29:16.550 [2024-07-21 11:52:45.803321] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.550 [2024-07-21 11:52:45.803355] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.550 [2024-07-21 11:52:45.803371] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.550 [2024-07-21 11:52:45.803380] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.550 [2024-07-21 11:52:45.803389] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.550 [2024-07-21 11:52:45.813845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.550 qpair failed and we were unable to recover it. 00:29:16.550 [2024-07-21 11:52:45.823392] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.550 [2024-07-21 11:52:45.823427] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.550 [2024-07-21 11:52:45.823443] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.550 [2024-07-21 11:52:45.823452] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.550 [2024-07-21 11:52:45.823460] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.550 [2024-07-21 11:52:45.833849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.550 qpair failed and we were unable to recover it. 
00:29:16.550 [2024-07-21 11:52:45.843501] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.550 [2024-07-21 11:52:45.843548] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.550 [2024-07-21 11:52:45.843564] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.550 [2024-07-21 11:52:45.843573] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.550 [2024-07-21 11:52:45.843581] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.550 [2024-07-21 11:52:45.854037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.550 qpair failed and we were unable to recover it. 00:29:16.550 [2024-07-21 11:52:45.863513] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.550 [2024-07-21 11:52:45.863559] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.550 [2024-07-21 11:52:45.863575] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.550 [2024-07-21 11:52:45.863585] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.550 [2024-07-21 11:52:45.863594] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.550 [2024-07-21 11:52:45.874050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.550 qpair failed and we were unable to recover it. 00:29:16.550 [2024-07-21 11:52:45.883707] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.550 [2024-07-21 11:52:45.883742] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.550 [2024-07-21 11:52:45.883758] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.550 [2024-07-21 11:52:45.883767] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.550 [2024-07-21 11:52:45.883776] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.550 [2024-07-21 11:52:45.894161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.550 qpair failed and we were unable to recover it. 
00:29:16.550 [2024-07-21 11:52:45.903691] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.550 [2024-07-21 11:52:45.903733] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.550 [2024-07-21 11:52:45.903749] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.550 [2024-07-21 11:52:45.903759] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.550 [2024-07-21 11:52:45.903768] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.550 [2024-07-21 11:52:45.914022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.550 qpair failed and we were unable to recover it. 00:29:16.550 [2024-07-21 11:52:45.923683] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.550 [2024-07-21 11:52:45.923723] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.550 [2024-07-21 11:52:45.923741] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.550 [2024-07-21 11:52:45.923750] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.550 [2024-07-21 11:52:45.923759] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.550 [2024-07-21 11:52:45.934365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.550 qpair failed and we were unable to recover it. 00:29:16.550 [2024-07-21 11:52:45.943829] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.550 [2024-07-21 11:52:45.943870] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.550 [2024-07-21 11:52:45.943886] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.550 [2024-07-21 11:52:45.943895] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.550 [2024-07-21 11:52:45.943903] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.550 [2024-07-21 11:52:45.954110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.550 qpair failed and we were unable to recover it. 
00:29:16.550 [2024-07-21 11:52:45.963823] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.550 [2024-07-21 11:52:45.963864] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.550 [2024-07-21 11:52:45.963884] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.550 [2024-07-21 11:52:45.963893] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.550 [2024-07-21 11:52:45.963902] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.809 [2024-07-21 11:52:45.974189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.809 qpair failed and we were unable to recover it. 00:29:16.809 [2024-07-21 11:52:45.983946] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.809 [2024-07-21 11:52:45.983987] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.809 [2024-07-21 11:52:45.984003] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.809 [2024-07-21 11:52:45.984012] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.809 [2024-07-21 11:52:45.984021] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.809 [2024-07-21 11:52:45.994360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.809 qpair failed and we were unable to recover it. 00:29:16.809 [2024-07-21 11:52:46.004036] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.809 [2024-07-21 11:52:46.004075] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.809 [2024-07-21 11:52:46.004092] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.809 [2024-07-21 11:52:46.004101] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.809 [2024-07-21 11:52:46.004109] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.809 [2024-07-21 11:52:46.014325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.809 qpair failed and we were unable to recover it. 
00:29:16.809 [2024-07-21 11:52:46.023917] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.809 [2024-07-21 11:52:46.023965] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.809 [2024-07-21 11:52:46.023981] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.809 [2024-07-21 11:52:46.023990] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.809 [2024-07-21 11:52:46.023999] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.809 [2024-07-21 11:52:46.034519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.809 qpair failed and we were unable to recover it. 00:29:16.809 [2024-07-21 11:52:46.044063] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.809 [2024-07-21 11:52:46.044105] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.809 [2024-07-21 11:52:46.044121] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.809 [2024-07-21 11:52:46.044131] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.809 [2024-07-21 11:52:46.044139] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.809 [2024-07-21 11:52:46.054729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.809 qpair failed and we were unable to recover it. 00:29:16.809 [2024-07-21 11:52:46.064174] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.810 [2024-07-21 11:52:46.064216] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.810 [2024-07-21 11:52:46.064233] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.810 [2024-07-21 11:52:46.064242] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.810 [2024-07-21 11:52:46.064250] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.810 [2024-07-21 11:52:46.074479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.810 qpair failed and we were unable to recover it. 
00:29:16.810 [2024-07-21 11:52:46.084210] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.810 [2024-07-21 11:52:46.084246] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.810 [2024-07-21 11:52:46.084262] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.810 [2024-07-21 11:52:46.084271] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.810 [2024-07-21 11:52:46.084280] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.810 [2024-07-21 11:52:46.094724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.810 qpair failed and we were unable to recover it. 00:29:16.810 [2024-07-21 11:52:46.104323] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.810 [2024-07-21 11:52:46.104362] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.810 [2024-07-21 11:52:46.104377] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.810 [2024-07-21 11:52:46.104387] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.810 [2024-07-21 11:52:46.104395] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.810 [2024-07-21 11:52:46.114587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.810 qpair failed and we were unable to recover it. 00:29:16.810 [2024-07-21 11:52:46.124291] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.810 [2024-07-21 11:52:46.124330] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.810 [2024-07-21 11:52:46.124347] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.810 [2024-07-21 11:52:46.124356] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.810 [2024-07-21 11:52:46.124364] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.810 [2024-07-21 11:52:46.134758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.810 qpair failed and we were unable to recover it. 
00:29:16.810 [2024-07-21 11:52:46.144374] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.810 [2024-07-21 11:52:46.144414] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.810 [2024-07-21 11:52:46.144430] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.810 [2024-07-21 11:52:46.144439] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.810 [2024-07-21 11:52:46.144448] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.810 [2024-07-21 11:52:46.154782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.810 qpair failed and we were unable to recover it. 00:29:16.810 [2024-07-21 11:52:46.164527] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.810 [2024-07-21 11:52:46.164569] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.810 [2024-07-21 11:52:46.164585] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.810 [2024-07-21 11:52:46.164594] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.810 [2024-07-21 11:52:46.164603] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.810 [2024-07-21 11:52:46.174897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.810 qpair failed and we were unable to recover it. 00:29:16.810 [2024-07-21 11:52:46.184606] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.810 [2024-07-21 11:52:46.184646] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.810 [2024-07-21 11:52:46.184662] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.810 [2024-07-21 11:52:46.184672] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.810 [2024-07-21 11:52:46.184680] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.810 [2024-07-21 11:52:46.194874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.810 qpair failed and we were unable to recover it. 
00:29:16.810 [2024-07-21 11:52:46.204671] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.810 [2024-07-21 11:52:46.204706] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.810 [2024-07-21 11:52:46.204722] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.810 [2024-07-21 11:52:46.204732] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.810 [2024-07-21 11:52:46.204740] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.810 [2024-07-21 11:52:46.215341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.810 qpair failed and we were unable to recover it. 00:29:16.810 [2024-07-21 11:52:46.224702] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.810 [2024-07-21 11:52:46.224739] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.810 [2024-07-21 11:52:46.224756] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.810 [2024-07-21 11:52:46.224770] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.810 [2024-07-21 11:52:46.224779] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.069 [2024-07-21 11:52:46.234994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.069 qpair failed and we were unable to recover it. 00:29:17.069 [2024-07-21 11:52:46.244719] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.069 [2024-07-21 11:52:46.244762] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.069 [2024-07-21 11:52:46.244778] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.069 [2024-07-21 11:52:46.244787] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.069 [2024-07-21 11:52:46.244796] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.069 [2024-07-21 11:52:46.255199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.069 qpair failed and we were unable to recover it. 
00:29:17.069 [2024-07-21 11:52:46.264715] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.069 [2024-07-21 11:52:46.264759] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.069 [2024-07-21 11:52:46.264776] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.069 [2024-07-21 11:52:46.264786] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.069 [2024-07-21 11:52:46.264794] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.069 [2024-07-21 11:52:46.275009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.069 qpair failed and we were unable to recover it. 00:29:17.069 [2024-07-21 11:52:46.284908] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.069 [2024-07-21 11:52:46.284945] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.069 [2024-07-21 11:52:46.284962] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.069 [2024-07-21 11:52:46.284971] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.069 [2024-07-21 11:52:46.284979] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.069 [2024-07-21 11:52:46.295219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.069 qpair failed and we were unable to recover it. 00:29:17.069 [2024-07-21 11:52:46.304957] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.069 [2024-07-21 11:52:46.305003] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.069 [2024-07-21 11:52:46.305020] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.069 [2024-07-21 11:52:46.305029] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.069 [2024-07-21 11:52:46.305038] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.069 [2024-07-21 11:52:46.315111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.069 qpair failed and we were unable to recover it. 
00:29:17.069 [2024-07-21 11:52:46.324970] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.069 [2024-07-21 11:52:46.325011] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.069 [2024-07-21 11:52:46.325027] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.069 [2024-07-21 11:52:46.325036] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.069 [2024-07-21 11:52:46.325045] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.069 [2024-07-21 11:52:46.335425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.069 qpair failed and we were unable to recover it. 00:29:17.069 [2024-07-21 11:52:46.345032] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.069 [2024-07-21 11:52:46.345077] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.069 [2024-07-21 11:52:46.345093] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.069 [2024-07-21 11:52:46.345103] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.069 [2024-07-21 11:52:46.345112] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.069 [2024-07-21 11:52:46.355386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.069 qpair failed and we were unable to recover it. 00:29:17.069 [2024-07-21 11:52:46.365122] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.069 [2024-07-21 11:52:46.365164] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.069 [2024-07-21 11:52:46.365181] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.069 [2024-07-21 11:52:46.365190] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.069 [2024-07-21 11:52:46.365199] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.069 [2024-07-21 11:52:46.375475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.069 qpair failed and we were unable to recover it. 
00:29:17.069 [2024-07-21 11:52:46.385208] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.069 [2024-07-21 11:52:46.385250] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.069 [2024-07-21 11:52:46.385266] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.069 [2024-07-21 11:52:46.385276] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.069 [2024-07-21 11:52:46.385284] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.069 [2024-07-21 11:52:46.395496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.069 qpair failed and we were unable to recover it. 00:29:17.069 [2024-07-21 11:52:46.405223] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.069 [2024-07-21 11:52:46.405260] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.069 [2024-07-21 11:52:46.405280] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.069 [2024-07-21 11:52:46.405289] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.069 [2024-07-21 11:52:46.405298] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.069 [2024-07-21 11:52:46.415735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.069 qpair failed and we were unable to recover it. 00:29:17.069 [2024-07-21 11:52:46.425398] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.069 [2024-07-21 11:52:46.425444] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.069 [2024-07-21 11:52:46.425460] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.069 [2024-07-21 11:52:46.425470] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.069 [2024-07-21 11:52:46.425478] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.069 [2024-07-21 11:52:46.435847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.069 qpair failed and we were unable to recover it. 
00:29:17.069 [2024-07-21 11:52:46.445344] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.069 [2024-07-21 11:52:46.445378] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.069 [2024-07-21 11:52:46.445394] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.069 [2024-07-21 11:52:46.445403] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.069 [2024-07-21 11:52:46.445412] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.069 [2024-07-21 11:52:46.455819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.069 qpair failed and we were unable to recover it. 00:29:17.069 [2024-07-21 11:52:46.465377] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.069 [2024-07-21 11:52:46.465418] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.069 [2024-07-21 11:52:46.465435] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.069 [2024-07-21 11:52:46.465444] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.069 [2024-07-21 11:52:46.465453] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.069 [2024-07-21 11:52:46.475810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.069 qpair failed and we were unable to recover it. 00:29:17.069 [2024-07-21 11:52:46.485404] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.069 [2024-07-21 11:52:46.485440] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.069 [2024-07-21 11:52:46.485457] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.069 [2024-07-21 11:52:46.485467] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.069 [2024-07-21 11:52:46.485476] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.327 [2024-07-21 11:52:46.495716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.327 qpair failed and we were unable to recover it. 
00:29:17.327 [2024-07-21 11:52:46.505551] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.327 [2024-07-21 11:52:46.505590] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.327 [2024-07-21 11:52:46.505607] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.327 [2024-07-21 11:52:46.505616] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.327 [2024-07-21 11:52:46.505629] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.327 [2024-07-21 11:52:46.515969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.327 qpair failed and we were unable to recover it. 00:29:17.327 [2024-07-21 11:52:46.525588] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.327 [2024-07-21 11:52:46.525632] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.327 [2024-07-21 11:52:46.525650] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.327 [2024-07-21 11:52:46.525659] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.327 [2024-07-21 11:52:46.525667] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.327 [2024-07-21 11:52:46.535957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.327 qpair failed and we were unable to recover it. 00:29:17.327 [2024-07-21 11:52:46.545719] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.327 [2024-07-21 11:52:46.545761] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.327 [2024-07-21 11:52:46.545778] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.328 [2024-07-21 11:52:46.545787] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.328 [2024-07-21 11:52:46.545795] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.328 [2024-07-21 11:52:46.555812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.328 qpair failed and we were unable to recover it. 
00:29:17.328 [2024-07-21 11:52:46.565801] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.328 [2024-07-21 11:52:46.565840] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.328 [2024-07-21 11:52:46.565856] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.328 [2024-07-21 11:52:46.565865] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.328 [2024-07-21 11:52:46.565874] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.328 [2024-07-21 11:52:46.576138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.328 qpair failed and we were unable to recover it. 00:29:17.328 [2024-07-21 11:52:46.585806] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.328 [2024-07-21 11:52:46.585852] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.328 [2024-07-21 11:52:46.585869] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.328 [2024-07-21 11:52:46.585878] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.328 [2024-07-21 11:52:46.585886] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.328 [2024-07-21 11:52:46.596159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.328 qpair failed and we were unable to recover it. 00:29:17.328 [2024-07-21 11:52:46.605800] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.328 [2024-07-21 11:52:46.605841] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.328 [2024-07-21 11:52:46.605858] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.328 [2024-07-21 11:52:46.605867] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.328 [2024-07-21 11:52:46.605876] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.328 [2024-07-21 11:52:46.616225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.328 qpair failed and we were unable to recover it. 
00:29:17.328 [2024-07-21 11:52:46.625862] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.328 [2024-07-21 11:52:46.625903] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.328 [2024-07-21 11:52:46.625919] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.328 [2024-07-21 11:52:46.625928] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.328 [2024-07-21 11:52:46.625937] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.328 [2024-07-21 11:52:46.636285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.328 qpair failed and we were unable to recover it. 00:29:17.328 [2024-07-21 11:52:46.645973] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.328 [2024-07-21 11:52:46.646012] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.328 [2024-07-21 11:52:46.646027] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.328 [2024-07-21 11:52:46.646037] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.328 [2024-07-21 11:52:46.646045] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.328 [2024-07-21 11:52:46.656429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.328 qpair failed and we were unable to recover it. 00:29:17.328 [2024-07-21 11:52:46.665982] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.328 [2024-07-21 11:52:46.666025] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.328 [2024-07-21 11:52:46.666041] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.328 [2024-07-21 11:52:46.666051] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.328 [2024-07-21 11:52:46.666062] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.328 [2024-07-21 11:52:46.676355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.328 qpair failed and we were unable to recover it. 
00:29:17.328 [2024-07-21 11:52:46.686100] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.328 [2024-07-21 11:52:46.686142] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.328 [2024-07-21 11:52:46.686159] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.328 [2024-07-21 11:52:46.686168] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.328 [2024-07-21 11:52:46.686176] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.328 [2024-07-21 11:52:46.696421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.328 qpair failed and we were unable to recover it. 00:29:17.328 [2024-07-21 11:52:46.706068] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.328 [2024-07-21 11:52:46.706109] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.328 [2024-07-21 11:52:46.706125] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.328 [2024-07-21 11:52:46.706134] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.328 [2024-07-21 11:52:46.706142] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.328 [2024-07-21 11:52:46.716474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.328 qpair failed and we were unable to recover it. 00:29:17.328 [2024-07-21 11:52:46.726178] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.328 [2024-07-21 11:52:46.726217] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.328 [2024-07-21 11:52:46.726233] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.328 [2024-07-21 11:52:46.726242] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.328 [2024-07-21 11:52:46.726251] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.328 [2024-07-21 11:52:46.736618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.328 qpair failed and we were unable to recover it. 
00:29:17.328 [2024-07-21 11:52:46.746252] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.328 [2024-07-21 11:52:46.746290] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.328 [2024-07-21 11:52:46.746307] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.328 [2024-07-21 11:52:46.746316] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.328 [2024-07-21 11:52:46.746324] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.586 [2024-07-21 11:52:46.756468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.586 qpair failed and we were unable to recover it. 00:29:17.586 [2024-07-21 11:52:46.766257] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.586 [2024-07-21 11:52:46.766296] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.586 [2024-07-21 11:52:46.766313] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.586 [2024-07-21 11:52:46.766322] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.586 [2024-07-21 11:52:46.766331] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.586 [2024-07-21 11:52:46.776578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.586 qpair failed and we were unable to recover it. 00:29:17.586 [2024-07-21 11:52:46.786376] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.586 [2024-07-21 11:52:46.786412] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.586 [2024-07-21 11:52:46.786428] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.586 [2024-07-21 11:52:46.786437] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.586 [2024-07-21 11:52:46.786446] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.587 [2024-07-21 11:52:46.796636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.587 qpair failed and we were unable to recover it. 
00:29:17.587 [2024-07-21 11:52:46.806383] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.587 [2024-07-21 11:52:46.806423] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.587 [2024-07-21 11:52:46.806439] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.587 [2024-07-21 11:52:46.806448] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.587 [2024-07-21 11:52:46.806457] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.587 [2024-07-21 11:52:46.816785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.587 qpair failed and we were unable to recover it. 00:29:17.587 [2024-07-21 11:52:46.826329] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.587 [2024-07-21 11:52:46.826373] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.587 [2024-07-21 11:52:46.826389] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.587 [2024-07-21 11:52:46.826399] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.587 [2024-07-21 11:52:46.826407] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.587 [2024-07-21 11:52:46.836905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.587 qpair failed and we were unable to recover it. 00:29:17.587 [2024-07-21 11:52:46.846514] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.587 [2024-07-21 11:52:46.846548] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.587 [2024-07-21 11:52:46.846567] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.587 [2024-07-21 11:52:46.846576] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.587 [2024-07-21 11:52:46.846585] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.587 [2024-07-21 11:52:46.857221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.587 qpair failed and we were unable to recover it. 
00:29:17.587 [2024-07-21 11:52:46.866672] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.587 [2024-07-21 11:52:46.866709] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.587 [2024-07-21 11:52:46.866726] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.587 [2024-07-21 11:52:46.866736] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.587 [2024-07-21 11:52:46.866745] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.587 [2024-07-21 11:52:46.877025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.587 qpair failed and we were unable to recover it. 00:29:17.587 [2024-07-21 11:52:46.886668] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.587 [2024-07-21 11:52:46.886708] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.587 [2024-07-21 11:52:46.886726] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.587 [2024-07-21 11:52:46.886735] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.587 [2024-07-21 11:52:46.886744] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.587 [2024-07-21 11:52:46.897086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.587 qpair failed and we were unable to recover it. 00:29:17.587 [2024-07-21 11:52:46.906723] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.587 [2024-07-21 11:52:46.906763] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.587 [2024-07-21 11:52:46.906780] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.587 [2024-07-21 11:52:46.906789] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.587 [2024-07-21 11:52:46.906798] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.587 [2024-07-21 11:52:46.917094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.587 qpair failed and we were unable to recover it. 
00:29:17.587 [2024-07-21 11:52:46.926745] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.587 [2024-07-21 11:52:46.926785] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.587 [2024-07-21 11:52:46.926801] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.587 [2024-07-21 11:52:46.926812] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.587 [2024-07-21 11:52:46.926822] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.587 [2024-07-21 11:52:46.937086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.587 qpair failed and we were unable to recover it. 00:29:17.587 [2024-07-21 11:52:46.946795] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.587 [2024-07-21 11:52:46.946836] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.587 [2024-07-21 11:52:46.946852] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.587 [2024-07-21 11:52:46.946861] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.587 [2024-07-21 11:52:46.946870] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.587 [2024-07-21 11:52:46.957017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.587 qpair failed and we were unable to recover it. 00:29:17.587 [2024-07-21 11:52:46.966821] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.587 [2024-07-21 11:52:46.966861] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.587 [2024-07-21 11:52:46.966878] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.587 [2024-07-21 11:52:46.966888] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.587 [2024-07-21 11:52:46.966897] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.587 [2024-07-21 11:52:46.977228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.587 qpair failed and we were unable to recover it. 
00:29:17.587 [2024-07-21 11:52:46.986953] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.587 [2024-07-21 11:52:46.986991] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.587 [2024-07-21 11:52:46.987009] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.587 [2024-07-21 11:52:46.987018] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.587 [2024-07-21 11:52:46.987027] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.587 [2024-07-21 11:52:46.997252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.587 qpair failed and we were unable to recover it. 00:29:17.587 [2024-07-21 11:52:47.007008] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.587 [2024-07-21 11:52:47.007046] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.587 [2024-07-21 11:52:47.007063] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.587 [2024-07-21 11:52:47.007072] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.587 [2024-07-21 11:52:47.007081] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.846 [2024-07-21 11:52:47.017681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.846 qpair failed and we were unable to recover it. 00:29:17.846 [2024-07-21 11:52:47.027021] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.846 [2024-07-21 11:52:47.027058] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.846 [2024-07-21 11:52:47.027076] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.847 [2024-07-21 11:52:47.027085] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.847 [2024-07-21 11:52:47.027094] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.847 [2024-07-21 11:52:47.037397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.847 qpair failed and we were unable to recover it. 
00:29:17.847 [2024-07-21 11:52:47.047154] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.847 [2024-07-21 11:52:47.047195] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.847 [2024-07-21 11:52:47.047211] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.847 [2024-07-21 11:52:47.047220] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.847 [2024-07-21 11:52:47.047229] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.847 [2024-07-21 11:52:47.057588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.847 qpair failed and we were unable to recover it. 00:29:17.847 [2024-07-21 11:52:47.067163] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.847 [2024-07-21 11:52:47.067199] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.847 [2024-07-21 11:52:47.067216] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.847 [2024-07-21 11:52:47.067226] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.847 [2024-07-21 11:52:47.067235] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.847 [2024-07-21 11:52:47.077592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.847 qpair failed and we were unable to recover it. 00:29:17.847 [2024-07-21 11:52:47.087269] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.847 [2024-07-21 11:52:47.087304] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.847 [2024-07-21 11:52:47.087320] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.847 [2024-07-21 11:52:47.087329] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.847 [2024-07-21 11:52:47.087338] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.847 [2024-07-21 11:52:47.097672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.847 qpair failed and we were unable to recover it. 
00:29:17.847 [2024-07-21 11:52:47.107301] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.847 [2024-07-21 11:52:47.107339] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.847 [2024-07-21 11:52:47.107354] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.847 [2024-07-21 11:52:47.107364] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.847 [2024-07-21 11:52:47.107375] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.847 [2024-07-21 11:52:47.117826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.847 qpair failed and we were unable to recover it. 00:29:17.847 [2024-07-21 11:52:47.127377] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.847 [2024-07-21 11:52:47.127420] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.847 [2024-07-21 11:52:47.127436] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.847 [2024-07-21 11:52:47.127445] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.847 [2024-07-21 11:52:47.127454] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.847 [2024-07-21 11:52:47.137965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.847 qpair failed and we were unable to recover it. 00:29:17.847 [2024-07-21 11:52:47.147389] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.847 [2024-07-21 11:52:47.147426] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.847 [2024-07-21 11:52:47.147442] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.847 [2024-07-21 11:52:47.147451] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.847 [2024-07-21 11:52:47.147459] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.847 [2024-07-21 11:52:47.157851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.847 qpair failed and we were unable to recover it. 
00:29:17.847 [2024-07-21 11:52:47.167499] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.847 [2024-07-21 11:52:47.167536] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.847 [2024-07-21 11:52:47.167552] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.847 [2024-07-21 11:52:47.167562] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.847 [2024-07-21 11:52:47.167570] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.847 [2024-07-21 11:52:47.177686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.847 qpair failed and we were unable to recover it. 00:29:17.847 [2024-07-21 11:52:47.187485] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.847 [2024-07-21 11:52:47.187527] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.847 [2024-07-21 11:52:47.187543] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.847 [2024-07-21 11:52:47.187553] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.847 [2024-07-21 11:52:47.187561] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.847 [2024-07-21 11:52:47.197882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.847 qpair failed and we were unable to recover it. 00:29:17.847 [2024-07-21 11:52:47.207617] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.847 [2024-07-21 11:52:47.207660] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.847 [2024-07-21 11:52:47.207677] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.847 [2024-07-21 11:52:47.207686] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.847 [2024-07-21 11:52:47.207694] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.847 [2024-07-21 11:52:47.218197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.847 qpair failed and we were unable to recover it. 
00:29:17.847 [2024-07-21 11:52:47.227708] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.847 [2024-07-21 11:52:47.227749] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.847 [2024-07-21 11:52:47.227765] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.847 [2024-07-21 11:52:47.227774] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.847 [2024-07-21 11:52:47.227783] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.847 [2024-07-21 11:52:47.238235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.847 qpair failed and we were unable to recover it. 00:29:17.847 [2024-07-21 11:52:47.247669] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.847 [2024-07-21 11:52:47.247712] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.847 [2024-07-21 11:52:47.247728] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.847 [2024-07-21 11:52:47.247737] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.847 [2024-07-21 11:52:47.247746] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.847 [2024-07-21 11:52:47.258204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.847 qpair failed and we were unable to recover it. 00:29:18.105 [2024-07-21 11:52:47.267752] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.105 [2024-07-21 11:52:47.267787] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.105 [2024-07-21 11:52:47.267803] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.105 [2024-07-21 11:52:47.267812] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.105 [2024-07-21 11:52:47.267821] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.105 [2024-07-21 11:52:47.278253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.105 qpair failed and we were unable to recover it. 
00:29:18.105 [2024-07-21 11:52:47.287748] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.105 [2024-07-21 11:52:47.287787] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.105 [2024-07-21 11:52:47.287804] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.105 [2024-07-21 11:52:47.287817] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.105 [2024-07-21 11:52:47.287825] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.105 [2024-07-21 11:52:47.298213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.105 qpair failed and we were unable to recover it. 00:29:18.105 [2024-07-21 11:52:47.307950] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.105 [2024-07-21 11:52:47.307994] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.105 [2024-07-21 11:52:47.308010] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.105 [2024-07-21 11:52:47.308019] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.105 [2024-07-21 11:52:47.308028] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.105 [2024-07-21 11:52:47.318405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.105 qpair failed and we were unable to recover it. 00:29:18.105 [2024-07-21 11:52:47.327978] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.105 [2024-07-21 11:52:47.328019] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.105 [2024-07-21 11:52:47.328035] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.105 [2024-07-21 11:52:47.328044] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.105 [2024-07-21 11:52:47.328052] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.105 [2024-07-21 11:52:47.338456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.105 qpair failed and we were unable to recover it. 
00:29:18.105 [2024-07-21 11:52:47.349335] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.105 [2024-07-21 11:52:47.349370] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.105 [2024-07-21 11:52:47.349386] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.105 [2024-07-21 11:52:47.349396] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.105 [2024-07-21 11:52:47.349404] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.105 [2024-07-21 11:52:47.358306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.105 qpair failed and we were unable to recover it. 00:29:18.105 [2024-07-21 11:52:47.368064] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.105 [2024-07-21 11:52:47.368103] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.106 [2024-07-21 11:52:47.368120] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.106 [2024-07-21 11:52:47.368129] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.106 [2024-07-21 11:52:47.368138] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.106 [2024-07-21 11:52:47.378655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.106 qpair failed and we were unable to recover it. 00:29:18.106 [2024-07-21 11:52:47.388212] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.106 [2024-07-21 11:52:47.388254] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.106 [2024-07-21 11:52:47.388271] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.106 [2024-07-21 11:52:47.388280] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.106 [2024-07-21 11:52:47.388289] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.106 [2024-07-21 11:52:47.398465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.106 qpair failed and we were unable to recover it. 
00:29:18.106 [2024-07-21 11:52:47.408198] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.106 [2024-07-21 11:52:47.408241] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.106 [2024-07-21 11:52:47.408258] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.106 [2024-07-21 11:52:47.408267] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.106 [2024-07-21 11:52:47.408275] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.106 [2024-07-21 11:52:47.418801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.106 qpair failed and we were unable to recover it. 00:29:18.106 [2024-07-21 11:52:47.428255] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.106 [2024-07-21 11:52:47.428293] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.106 [2024-07-21 11:52:47.428309] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.106 [2024-07-21 11:52:47.428318] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.106 [2024-07-21 11:52:47.428326] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.106 [2024-07-21 11:52:47.438690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.106 qpair failed and we were unable to recover it. 00:29:18.106 [2024-07-21 11:52:47.448313] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.106 [2024-07-21 11:52:47.448355] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.106 [2024-07-21 11:52:47.448371] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.106 [2024-07-21 11:52:47.448380] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.106 [2024-07-21 11:52:47.448389] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.106 [2024-07-21 11:52:47.458871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.106 qpair failed and we were unable to recover it. 
00:29:18.106 [2024-07-21 11:52:47.468322] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.106 [2024-07-21 11:52:47.468361] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.106 [2024-07-21 11:52:47.468381] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.106 [2024-07-21 11:52:47.468390] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.106 [2024-07-21 11:52:47.468399] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.106 [2024-07-21 11:52:47.478611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.106 qpair failed and we were unable to recover it. 00:29:18.106 [2024-07-21 11:52:47.488435] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.106 [2024-07-21 11:52:47.488477] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.106 [2024-07-21 11:52:47.488493] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.106 [2024-07-21 11:52:47.488502] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.106 [2024-07-21 11:52:47.488510] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.106 [2024-07-21 11:52:47.499155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.106 qpair failed and we were unable to recover it. 00:29:18.106 [2024-07-21 11:52:47.508483] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.106 [2024-07-21 11:52:47.508526] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.106 [2024-07-21 11:52:47.508543] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.106 [2024-07-21 11:52:47.508552] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.106 [2024-07-21 11:52:47.508561] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.106 [2024-07-21 11:52:47.518901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.106 qpair failed and we were unable to recover it. 
00:29:18.364 [2024-07-21 11:52:47.528637] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.364 [2024-07-21 11:52:47.528677] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.364 [2024-07-21 11:52:47.528692] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.364 [2024-07-21 11:52:47.528702] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.364 [2024-07-21 11:52:47.528710] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.364 [2024-07-21 11:52:47.538832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.364 qpair failed and we were unable to recover it. 00:29:18.364 [2024-07-21 11:52:47.548565] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.364 [2024-07-21 11:52:47.548609] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.364 [2024-07-21 11:52:47.548635] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.364 [2024-07-21 11:52:47.548645] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.364 [2024-07-21 11:52:47.548657] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.364 [2024-07-21 11:52:47.559132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.364 qpair failed and we were unable to recover it. 00:29:18.364 [2024-07-21 11:52:47.568755] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.364 [2024-07-21 11:52:47.568796] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.365 [2024-07-21 11:52:47.568812] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.365 [2024-07-21 11:52:47.568821] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.365 [2024-07-21 11:52:47.568830] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.365 [2024-07-21 11:52:47.579286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.365 qpair failed and we were unable to recover it. 
00:29:18.365 [2024-07-21 11:52:47.588699] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.365 [2024-07-21 11:52:47.588736] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.365 [2024-07-21 11:52:47.588752] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.365 [2024-07-21 11:52:47.588761] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.365 [2024-07-21 11:52:47.588770] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.365 [2024-07-21 11:52:47.599033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.365 qpair failed and we were unable to recover it. 00:29:18.365 [2024-07-21 11:52:47.608727] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.365 [2024-07-21 11:52:47.608767] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.365 [2024-07-21 11:52:47.608783] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.365 [2024-07-21 11:52:47.608792] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.365 [2024-07-21 11:52:47.608801] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.365 [2024-07-21 11:52:47.619289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.365 qpair failed and we were unable to recover it. 00:29:18.365 [2024-07-21 11:52:47.628838] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.365 [2024-07-21 11:52:47.628877] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.365 [2024-07-21 11:52:47.628893] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.365 [2024-07-21 11:52:47.628903] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.365 [2024-07-21 11:52:47.628911] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.365 [2024-07-21 11:52:47.639307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.365 qpair failed and we were unable to recover it. 
00:29:18.365 [2024-07-21 11:52:47.648982] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.365 [2024-07-21 11:52:47.649019] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.365 [2024-07-21 11:52:47.649035] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.365 [2024-07-21 11:52:47.649044] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.365 [2024-07-21 11:52:47.649053] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.365 [2024-07-21 11:52:47.659549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.365 qpair failed and we were unable to recover it. 00:29:18.365 [2024-07-21 11:52:47.669086] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.365 [2024-07-21 11:52:47.669123] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.365 [2024-07-21 11:52:47.669139] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.365 [2024-07-21 11:52:47.669149] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.365 [2024-07-21 11:52:47.669157] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.365 [2024-07-21 11:52:47.679556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.365 qpair failed and we were unable to recover it. 00:29:18.365 [2024-07-21 11:52:47.689105] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.365 [2024-07-21 11:52:47.689142] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.365 [2024-07-21 11:52:47.689158] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.365 [2024-07-21 11:52:47.689167] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.365 [2024-07-21 11:52:47.689176] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.365 [2024-07-21 11:52:47.699576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.365 qpair failed and we were unable to recover it. 
00:29:18.365 [2024-07-21 11:52:47.709200] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.365 [2024-07-21 11:52:47.709242] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.365 [2024-07-21 11:52:47.709259] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.365 [2024-07-21 11:52:47.709268] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.365 [2024-07-21 11:52:47.709276] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.365 [2024-07-21 11:52:47.719525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.365 qpair failed and we were unable to recover it. 00:29:18.365 [2024-07-21 11:52:47.729191] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.365 [2024-07-21 11:52:47.729231] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.365 [2024-07-21 11:52:47.729247] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.365 [2024-07-21 11:52:47.729262] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.365 [2024-07-21 11:52:47.729270] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.365 [2024-07-21 11:52:47.739733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.365 qpair failed and we were unable to recover it. 00:29:18.365 [2024-07-21 11:52:47.749240] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.365 [2024-07-21 11:52:47.749276] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.365 [2024-07-21 11:52:47.749292] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.365 [2024-07-21 11:52:47.749301] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.365 [2024-07-21 11:52:47.749310] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.365 [2024-07-21 11:52:47.759438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.365 qpair failed and we were unable to recover it. 
00:29:18.365 [2024-07-21 11:52:47.769297] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.365 [2024-07-21 11:52:47.769338] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.365 [2024-07-21 11:52:47.769355] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.365 [2024-07-21 11:52:47.769364] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.365 [2024-07-21 11:52:47.769373] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.365 [2024-07-21 11:52:47.779872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.365 qpair failed and we were unable to recover it. 00:29:18.623 [2024-07-21 11:52:47.789372] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.623 [2024-07-21 11:52:47.789410] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.623 [2024-07-21 11:52:47.789426] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.623 [2024-07-21 11:52:47.789435] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.623 [2024-07-21 11:52:47.789444] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.623 [2024-07-21 11:52:47.799684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.623 qpair failed and we were unable to recover it. 00:29:18.623 [2024-07-21 11:52:47.809363] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.623 [2024-07-21 11:52:47.809404] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.623 [2024-07-21 11:52:47.809419] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.623 [2024-07-21 11:52:47.809428] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.623 [2024-07-21 11:52:47.809437] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.623 [2024-07-21 11:52:47.819726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.623 qpair failed and we were unable to recover it. 
00:29:18.623 [2024-07-21 11:52:47.829493] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.623 [2024-07-21 11:52:47.829532] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.623 [2024-07-21 11:52:47.829547] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.623 [2024-07-21 11:52:47.829557] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.623 [2024-07-21 11:52:47.829565] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.623 [2024-07-21 11:52:47.839841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.623 qpair failed and we were unable to recover it. 00:29:18.623 [2024-07-21 11:52:47.849585] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.623 [2024-07-21 11:52:47.849629] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.623 [2024-07-21 11:52:47.849645] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.623 [2024-07-21 11:52:47.849654] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.623 [2024-07-21 11:52:47.849663] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.623 [2024-07-21 11:52:47.860091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.623 qpair failed and we were unable to recover it. 00:29:18.623 [2024-07-21 11:52:47.869575] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.623 [2024-07-21 11:52:47.869612] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.623 [2024-07-21 11:52:47.869633] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.623 [2024-07-21 11:52:47.869642] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.623 [2024-07-21 11:52:47.869651] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.623 [2024-07-21 11:52:47.880007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.623 qpair failed and we were unable to recover it. 
00:29:18.623 [2024-07-21 11:52:47.889657] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.623 [2024-07-21 11:52:47.889697] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.623 [2024-07-21 11:52:47.889714] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.623 [2024-07-21 11:52:47.889723] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.623 [2024-07-21 11:52:47.889732] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.623 [2024-07-21 11:52:47.900231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.623 qpair failed and we were unable to recover it. 00:29:18.623 [2024-07-21 11:52:47.909771] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.623 [2024-07-21 11:52:47.909809] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.623 [2024-07-21 11:52:47.909828] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.623 [2024-07-21 11:52:47.909837] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.623 [2024-07-21 11:52:47.909846] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.623 [2024-07-21 11:52:47.920072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.623 qpair failed and we were unable to recover it. 00:29:18.623 [2024-07-21 11:52:47.929867] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.623 [2024-07-21 11:52:47.929904] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.623 [2024-07-21 11:52:47.929920] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.623 [2024-07-21 11:52:47.929929] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.623 [2024-07-21 11:52:47.929937] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.623 [2024-07-21 11:52:47.940243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.623 qpair failed and we were unable to recover it. 
00:29:18.623 [2024-07-21 11:52:47.949960] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.623 [2024-07-21 11:52:47.950003] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.623 [2024-07-21 11:52:47.950023] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.623 [2024-07-21 11:52:47.950032] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.623 [2024-07-21 11:52:47.950041] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.623 [2024-07-21 11:52:47.960256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.623 qpair failed and we were unable to recover it. 00:29:18.623 [2024-07-21 11:52:47.969982] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.623 [2024-07-21 11:52:47.970028] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.623 [2024-07-21 11:52:47.970045] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.623 [2024-07-21 11:52:47.970054] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.623 [2024-07-21 11:52:47.970062] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.623 [2024-07-21 11:52:47.980389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.623 qpair failed and we were unable to recover it. 00:29:18.623 [2024-07-21 11:52:47.989988] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.623 [2024-07-21 11:52:47.990028] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.623 [2024-07-21 11:52:47.990044] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.623 [2024-07-21 11:52:47.990053] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.623 [2024-07-21 11:52:47.990062] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.623 [2024-07-21 11:52:48.000276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.623 qpair failed and we were unable to recover it. 
00:29:18.623 [2024-07-21 11:52:48.010003] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.624 [2024-07-21 11:52:48.010042] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.624 [2024-07-21 11:52:48.010058] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.624 [2024-07-21 11:52:48.010067] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.624 [2024-07-21 11:52:48.010075] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.624 [2024-07-21 11:52:48.020368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.624 qpair failed and we were unable to recover it. 00:29:18.624 [2024-07-21 11:52:48.030113] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.624 [2024-07-21 11:52:48.030157] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.624 [2024-07-21 11:52:48.030173] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.624 [2024-07-21 11:52:48.030183] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.624 [2024-07-21 11:52:48.030193] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.624 [2024-07-21 11:52:48.040396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.624 qpair failed and we were unable to recover it. 00:29:18.882 [2024-07-21 11:52:48.050107] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.882 [2024-07-21 11:52:48.050146] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.882 [2024-07-21 11:52:48.050162] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.882 [2024-07-21 11:52:48.050171] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.882 [2024-07-21 11:52:48.050180] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.882 [2024-07-21 11:52:48.060672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.882 qpair failed and we were unable to recover it. 
00:29:18.882 [2024-07-21 11:52:48.070238] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.882 [2024-07-21 11:52:48.070278] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.882 [2024-07-21 11:52:48.070296] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.882 [2024-07-21 11:52:48.070305] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.882 [2024-07-21 11:52:48.070314] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.882 [2024-07-21 11:52:48.080614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.882 qpair failed and we were unable to recover it. 00:29:18.882 [2024-07-21 11:52:48.090181] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.882 [2024-07-21 11:52:48.090223] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.882 [2024-07-21 11:52:48.090240] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.882 [2024-07-21 11:52:48.090249] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.882 [2024-07-21 11:52:48.090258] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.882 [2024-07-21 11:52:48.100582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.882 qpair failed and we were unable to recover it. 00:29:18.882 [2024-07-21 11:52:48.110271] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.882 [2024-07-21 11:52:48.110309] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.882 [2024-07-21 11:52:48.110324] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.882 [2024-07-21 11:52:48.110334] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.882 [2024-07-21 11:52:48.110343] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.882 [2024-07-21 11:52:48.120559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.882 qpair failed and we were unable to recover it. 
00:29:18.882 [2024-07-21 11:52:48.130342] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.882 [2024-07-21 11:52:48.130377] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.882 [2024-07-21 11:52:48.130394] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.882 [2024-07-21 11:52:48.130403] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.882 [2024-07-21 11:52:48.130411] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.882 [2024-07-21 11:52:48.141069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.882 qpair failed and we were unable to recover it. 00:29:18.882 [2024-07-21 11:52:48.150449] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.882 [2024-07-21 11:52:48.150491] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.882 [2024-07-21 11:52:48.150507] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.882 [2024-07-21 11:52:48.150516] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.882 [2024-07-21 11:52:48.150525] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.882 [2024-07-21 11:52:48.160818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.882 qpair failed and we were unable to recover it. 00:29:18.882 [2024-07-21 11:52:48.170402] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.882 [2024-07-21 11:52:48.170441] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.882 [2024-07-21 11:52:48.170458] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.882 [2024-07-21 11:52:48.170470] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.882 [2024-07-21 11:52:48.170479] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.882 [2024-07-21 11:52:48.180955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.882 qpair failed and we were unable to recover it. 
00:29:18.882 [2024-07-21 11:52:48.190562] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.882 [2024-07-21 11:52:48.190609] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.882 [2024-07-21 11:52:48.190638] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.882 [2024-07-21 11:52:48.190648] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.882 [2024-07-21 11:52:48.190657] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.882 [2024-07-21 11:52:48.200941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.882 qpair failed and we were unable to recover it. 00:29:18.882 [2024-07-21 11:52:48.210590] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.882 [2024-07-21 11:52:48.210631] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.882 [2024-07-21 11:52:48.210647] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.882 [2024-07-21 11:52:48.210656] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.882 [2024-07-21 11:52:48.210665] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.882 [2024-07-21 11:52:48.220994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.882 qpair failed and we were unable to recover it. 00:29:18.882 [2024-07-21 11:52:48.230647] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.882 [2024-07-21 11:52:48.230682] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.882 [2024-07-21 11:52:48.230698] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.882 [2024-07-21 11:52:48.230707] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.882 [2024-07-21 11:52:48.230716] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.882 [2024-07-21 11:52:48.241034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.882 qpair failed and we were unable to recover it. 
00:29:18.882 [2024-07-21 11:52:48.250714] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.882 [2024-07-21 11:52:48.250756] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.882 [2024-07-21 11:52:48.250771] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.882 [2024-07-21 11:52:48.250780] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.882 [2024-07-21 11:52:48.250789] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.882 [2024-07-21 11:52:48.261015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.882 qpair failed and we were unable to recover it. 00:29:18.882 [2024-07-21 11:52:48.270701] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.882 [2024-07-21 11:52:48.270738] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.882 [2024-07-21 11:52:48.270755] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.882 [2024-07-21 11:52:48.270764] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.882 [2024-07-21 11:52:48.270772] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.882 [2024-07-21 11:52:48.281174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.882 qpair failed and we were unable to recover it. 00:29:18.882 [2024-07-21 11:52:48.290792] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.883 [2024-07-21 11:52:48.290833] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.883 [2024-07-21 11:52:48.290849] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.883 [2024-07-21 11:52:48.290858] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.883 [2024-07-21 11:52:48.290867] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.883 [2024-07-21 11:52:48.301100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.883 qpair failed and we were unable to recover it. 
00:29:19.141 [2024-07-21 11:52:48.310810] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.141 [2024-07-21 11:52:48.310850] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.141 [2024-07-21 11:52:48.310866] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.141 [2024-07-21 11:52:48.310875] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.141 [2024-07-21 11:52:48.310884] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.141 [2024-07-21 11:52:48.321085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.141 qpair failed and we were unable to recover it. 00:29:19.141 [2024-07-21 11:52:48.330957] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.141 [2024-07-21 11:52:48.330996] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.141 [2024-07-21 11:52:48.331012] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.141 [2024-07-21 11:52:48.331021] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.141 [2024-07-21 11:52:48.331030] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.141 [2024-07-21 11:52:48.341253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.141 qpair failed and we were unable to recover it. 00:29:19.141 [2024-07-21 11:52:48.350971] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.141 [2024-07-21 11:52:48.351013] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.141 [2024-07-21 11:52:48.351033] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.141 [2024-07-21 11:52:48.351042] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.141 [2024-07-21 11:52:48.351051] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.141 [2024-07-21 11:52:48.361549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.141 qpair failed and we were unable to recover it. 
00:29:19.141 [2024-07-21 11:52:48.370974] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.141 [2024-07-21 11:52:48.371012] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.141 [2024-07-21 11:52:48.371028] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.141 [2024-07-21 11:52:48.371038] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.141 [2024-07-21 11:52:48.371047] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.141 [2024-07-21 11:52:48.381438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.141 qpair failed and we were unable to recover it. 00:29:19.141 [2024-07-21 11:52:48.391055] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.141 [2024-07-21 11:52:48.391100] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.141 [2024-07-21 11:52:48.391117] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.141 [2024-07-21 11:52:48.391126] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.141 [2024-07-21 11:52:48.391135] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.141 [2024-07-21 11:52:48.401451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.141 qpair failed and we were unable to recover it. 00:29:19.141 [2024-07-21 11:52:48.411064] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.141 [2024-07-21 11:52:48.411103] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.141 [2024-07-21 11:52:48.411119] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.141 [2024-07-21 11:52:48.411129] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.141 [2024-07-21 11:52:48.411137] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.141 [2024-07-21 11:52:48.421586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.141 qpair failed and we were unable to recover it. 
00:29:19.141 [2024-07-21 11:52:48.431263] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.141 [2024-07-21 11:52:48.431301] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.141 [2024-07-21 11:52:48.431318] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.141 [2024-07-21 11:52:48.431327] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.141 [2024-07-21 11:52:48.431336] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.141 [2024-07-21 11:52:48.441573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.141 qpair failed and we were unable to recover it. 00:29:19.141 [2024-07-21 11:52:48.451213] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.141 [2024-07-21 11:52:48.451252] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.141 [2024-07-21 11:52:48.451268] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.141 [2024-07-21 11:52:48.451277] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.141 [2024-07-21 11:52:48.451286] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.141 [2024-07-21 11:52:48.461637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.141 qpair failed and we were unable to recover it. 00:29:19.141 [2024-07-21 11:52:48.471244] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.141 [2024-07-21 11:52:48.471278] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.141 [2024-07-21 11:52:48.471294] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.141 [2024-07-21 11:52:48.471304] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.141 [2024-07-21 11:52:48.471313] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.141 [2024-07-21 11:52:48.481617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.141 qpair failed and we were unable to recover it. 
00:29:19.141 [2024-07-21 11:52:48.491267] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.141 [2024-07-21 11:52:48.491304] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.141 [2024-07-21 11:52:48.491319] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.141 [2024-07-21 11:52:48.491329] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.141 [2024-07-21 11:52:48.491337] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.141 [2024-07-21 11:52:48.501698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.142 qpair failed and we were unable to recover it. 00:29:20.514 Read completed with error (sct=0, sc=8) 00:29:20.514 starting I/O failed 00:29:20.514 Write completed with error (sct=0, sc=8) 00:29:20.514 starting I/O failed 00:29:20.514 Write completed with error (sct=0, sc=8) 00:29:20.514 starting I/O failed 00:29:20.514 Write completed with error (sct=0, sc=8) 00:29:20.514 starting I/O failed 00:29:20.514 Write completed with error (sct=0, sc=8) 00:29:20.514 starting I/O failed 00:29:20.514 Read completed with error (sct=0, sc=8) 00:29:20.514 starting I/O failed 00:29:20.514 Read completed with error (sct=0, sc=8) 00:29:20.514 starting I/O failed 00:29:20.514 Read completed with error (sct=0, sc=8) 00:29:20.514 starting I/O failed 00:29:20.514 Write completed with error (sct=0, sc=8) 00:29:20.514 starting I/O failed 00:29:20.514 Read completed with error (sct=0, sc=8) 00:29:20.514 starting I/O failed 00:29:20.514 Write completed with error (sct=0, sc=8) 00:29:20.514 starting I/O failed 00:29:20.514 Read completed with error (sct=0, sc=8) 00:29:20.514 starting I/O failed 00:29:20.514 Write completed with error (sct=0, sc=8) 00:29:20.514 starting I/O failed 00:29:20.514 Read completed with error (sct=0, sc=8) 00:29:20.514 starting I/O failed 00:29:20.514 Read completed with error (sct=0, sc=8) 00:29:20.514 starting I/O failed 00:29:20.514 Read completed with error (sct=0, sc=8) 00:29:20.514 starting I/O failed 00:29:20.514 Write completed with error (sct=0, sc=8) 00:29:20.514 starting I/O failed 00:29:20.514 Read completed with error (sct=0, sc=8) 00:29:20.514 starting I/O failed 00:29:20.514 Read completed with error (sct=0, sc=8) 00:29:20.514 starting I/O failed 00:29:20.514 Read completed with error (sct=0, sc=8) 00:29:20.514 starting I/O failed 00:29:20.514 Read completed with error (sct=0, sc=8) 00:29:20.514 starting I/O failed 00:29:20.514 Write completed with error (sct=0, sc=8) 00:29:20.514 starting I/O failed 00:29:20.514 Read completed with error (sct=0, sc=8) 00:29:20.514 starting I/O failed 00:29:20.514 Write completed with error (sct=0, sc=8) 00:29:20.514 starting I/O failed 00:29:20.514 Write completed with error (sct=0, sc=8) 00:29:20.514 starting I/O failed 00:29:20.514 Read completed with error (sct=0, sc=8) 00:29:20.514 starting I/O failed 00:29:20.515 Read completed with error (sct=0, sc=8) 00:29:20.515 starting I/O failed 00:29:20.515 Read completed with error (sct=0, sc=8) 00:29:20.515 starting I/O failed 00:29:20.515 Read completed with error (sct=0, sc=8) 00:29:20.515 starting I/O failed 00:29:20.515 Read completed 
with error (sct=0, sc=8) 00:29:20.515 starting I/O failed 00:29:20.515 Write completed with error (sct=0, sc=8) 00:29:20.515 starting I/O failed 00:29:20.515 Write completed with error (sct=0, sc=8) 00:29:20.515 starting I/O failed 00:29:20.515 [2024-07-21 11:52:49.506885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:20.515 [2024-07-21 11:52:49.506908] nvme_ctrlr.c:4339:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:29:20.515 A controller has encountered a failure and is being reset. 00:29:20.515 [2024-07-21 11:52:49.514300] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.515 [2024-07-21 11:52:49.514352] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.515 [2024-07-21 11:52:49.514378] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.515 [2024-07-21 11:52:49.514392] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.515 [2024-07-21 11:52:49.514405] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:29:20.515 [2024-07-21 11:52:49.525070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.515 qpair failed and we were unable to recover it. 00:29:20.515 [2024-07-21 11:52:49.534581] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.515 [2024-07-21 11:52:49.534622] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.515 [2024-07-21 11:52:49.534643] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.515 [2024-07-21 11:52:49.534653] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.515 [2024-07-21 11:52:49.534662] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:29:20.515 [2024-07-21 11:52:49.545124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.515 qpair failed and we were unable to recover it. 
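The 32-entry burst of "completed with error (sct=0, sc=8)" above is the qpair being torn down under load: generic status 0x08 is "Command Aborted due to SQ Deletion" in the NVMe base spec, so every outstanding read and write is aborted as the queue dies, after which the failed keep-alive triggers the controller reset logged next. A hedged way to tally such aborts from a saved console log (the file name here is hypothetical):

    grep -c 'completed with error (sct=0, sc=8)' console.log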
00:29:20.515 [2024-07-21 11:52:49.554701] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.515 [2024-07-21 11:52:49.554747] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.515 [2024-07-21 11:52:49.554779] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.515 [2024-07-21 11:52:49.554798] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.515 [2024-07-21 11:52:49.554816] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d15c0 00:29:20.515 [2024-07-21 11:52:49.565103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:20.515 qpair failed and we were unable to recover it. 00:29:20.515 [2024-07-21 11:52:49.574751] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.515 [2024-07-21 11:52:49.574794] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.515 [2024-07-21 11:52:49.574816] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.515 [2024-07-21 11:52:49.574829] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.515 [2024-07-21 11:52:49.574842] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d15c0 00:29:20.515 [2024-07-21 11:52:49.585313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:20.515 qpair failed and we were unable to recover it. 00:29:20.515 [2024-07-21 11:52:49.594709] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.515 [2024-07-21 11:52:49.594756] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.515 [2024-07-21 11:52:49.594778] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.515 [2024-07-21 11:52:49.594789] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.515 [2024-07-21 11:52:49.594797] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:20.515 [2024-07-21 11:52:49.605272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.515 qpair failed and we were unable to recover it. 
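Note that the CONNECT failures cycle across distinct qpair IDs (1, 2 and 4 above) and distinct rqpair handles, one per I/O queue created for the 0xF core mask. A quick sketch for enumerating them from a saved log (hypothetical file name again):

    grep -o 'on qpair id [0-9]*' console.log | sort | uniq -c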
00:29:20.515 [2024-07-21 11:52:49.614851] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.515 [2024-07-21 11:52:49.614893] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.515 [2024-07-21 11:52:49.614910] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.515 [2024-07-21 11:52:49.614920] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.515 [2024-07-21 11:52:49.614929] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:20.515 [2024-07-21 11:52:49.625322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.515 qpair failed and we were unable to recover it. 00:29:20.515 [2024-07-21 11:52:49.625491] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:29:20.515 [2024-07-21 11:52:49.658753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:20.515 Controller properly reset. 00:29:20.515 Initializing NVMe Controllers 00:29:20.515 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:20.515 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:20.515 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:20.515 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:20.515 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:20.515 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:20.515 Initialization complete. Launching workers. 
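With the controller reset and re-attached, a hedged out-of-band check of the listener the tool reconnects to would be a discovery request via stock nvme-cli (not part of this script; address and port taken from the log above):

    nvme discover -t rdma -a 192.168.100.8 -s 4420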
00:29:20.515 Starting thread on core 1 00:29:20.515 Starting thread on core 2 00:29:20.515 Starting thread on core 3 00:29:20.515 Starting thread on core 0 00:29:20.515 11:52:49 -- host/target_disconnect.sh@59 -- # sync 00:29:20.515 00:29:20.515 real 0m12.576s 00:29:20.515 user 0m27.029s 00:29:20.515 sys 0m3.251s 00:29:20.515 11:52:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:20.515 11:52:49 -- common/autotest_common.sh@10 -- # set +x 00:29:20.515 ************************************ 00:29:20.515 END TEST nvmf_target_disconnect_tc2 00:29:20.515 ************************************ 00:29:20.515 11:52:49 -- host/target_disconnect.sh@80 -- # '[' -n 192.168.100.9 ']' 00:29:20.515 11:52:49 -- host/target_disconnect.sh@81 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:29:20.515 11:52:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:20.515 11:52:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:20.515 11:52:49 -- common/autotest_common.sh@10 -- # set +x 00:29:20.515 ************************************ 00:29:20.515 START TEST nvmf_target_disconnect_tc3 00:29:20.515 ************************************ 00:29:20.515 11:52:49 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc3 00:29:20.515 11:52:49 -- host/target_disconnect.sh@65 -- # reconnectpid=2523323 00:29:20.515 11:52:49 -- host/target_disconnect.sh@67 -- # sleep 2 00:29:20.515 11:52:49 -- host/target_disconnect.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:29:20.515 EAL: No free 2048 kB hugepages reported on node 1 00:29:22.413 11:52:51 -- host/target_disconnect.sh@68 -- # kill -9 2521972 00:29:22.413 11:52:51 -- host/target_disconnect.sh@70 -- # sleep 2 00:29:23.791 Read completed with error (sct=0, sc=8) 00:29:23.791 starting I/O failed 00:29:23.791 Read completed with error (sct=0, sc=8) 00:29:23.791 starting I/O failed 00:29:23.791 Read completed with error (sct=0, sc=8) 00:29:23.791 starting I/O failed 00:29:23.791 Read completed with error (sct=0, sc=8) 00:29:23.791 starting I/O failed 00:29:23.791 Read completed with error (sct=0, sc=8) 00:29:23.791 starting I/O failed 00:29:23.791 Write completed with error (sct=0, sc=8) 00:29:23.791 starting I/O failed 00:29:23.791 Write completed with error (sct=0, sc=8) 00:29:23.791 starting I/O failed 00:29:23.791 Write completed with error (sct=0, sc=8) 00:29:23.791 starting I/O failed 00:29:23.791 Read completed with error (sct=0, sc=8) 00:29:23.791 starting I/O failed 00:29:23.791 Write completed with error (sct=0, sc=8) 00:29:23.791 starting I/O failed 00:29:23.791 Read completed with error (sct=0, sc=8) 00:29:23.791 starting I/O failed 00:29:23.791 Write completed with error (sct=0, sc=8) 00:29:23.791 starting I/O failed 00:29:23.791 Read completed with error (sct=0, sc=8) 00:29:23.791 starting I/O failed 00:29:23.791 Read completed with error (sct=0, sc=8) 00:29:23.791 starting I/O failed 00:29:23.791 Read completed with error (sct=0, sc=8) 00:29:23.791 starting I/O failed 00:29:23.791 Write completed with error (sct=0, sc=8) 00:29:23.791 starting I/O failed 00:29:23.791 Read completed with error (sct=0, sc=8) 00:29:23.791 starting I/O failed 00:29:23.791 Read completed with error (sct=0, sc=8) 00:29:23.791 starting I/O failed 00:29:23.791 Read completed with error (sct=0, sc=8) 00:29:23.791 starting I/O failed 00:29:23.791 Write completed 
with error (sct=0, sc=8) 00:29:23.791 starting I/O failed 00:29:23.791 Read completed with error (sct=0, sc=8) 00:29:23.791 starting I/O failed 00:29:23.791 Read completed with error (sct=0, sc=8) 00:29:23.791 starting I/O failed 00:29:23.791 Read completed with error (sct=0, sc=8) 00:29:23.791 starting I/O failed 00:29:23.791 Read completed with error (sct=0, sc=8) 00:29:23.791 starting I/O failed 00:29:23.791 Write completed with error (sct=0, sc=8) 00:29:23.791 starting I/O failed 00:29:23.791 Read completed with error (sct=0, sc=8) 00:29:23.791 starting I/O failed 00:29:23.791 Read completed with error (sct=0, sc=8) 00:29:23.791 starting I/O failed 00:29:23.791 Read completed with error (sct=0, sc=8) 00:29:23.791 starting I/O failed 00:29:23.791 Read completed with error (sct=0, sc=8) 00:29:23.791 starting I/O failed 00:29:23.791 Read completed with error (sct=0, sc=8) 00:29:23.791 starting I/O failed 00:29:23.791 Write completed with error (sct=0, sc=8) 00:29:23.791 starting I/O failed 00:29:23.791 Read completed with error (sct=0, sc=8) 00:29:23.791 starting I/O failed 00:29:23.791 [2024-07-21 11:52:52.961604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:24.449 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 62: 2521972 Killed "${NVMF_APP[@]}" "$@" 00:29:24.449 11:52:53 -- host/target_disconnect.sh@71 -- # disconnect_init 192.168.100.9 00:29:24.449 11:52:53 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:24.449 11:52:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:24.449 11:52:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:24.449 11:52:53 -- common/autotest_common.sh@10 -- # set +x 00:29:24.449 11:52:53 -- nvmf/common.sh@469 -- # nvmfpid=2523968 00:29:24.449 11:52:53 -- nvmf/common.sh@470 -- # waitforlisten 2523968 00:29:24.449 11:52:53 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:24.449 11:52:53 -- common/autotest_common.sh@819 -- # '[' -z 2523968 ']' 00:29:24.449 11:52:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:24.449 11:52:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:24.449 11:52:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:24.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:24.449 11:52:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:24.449 11:52:53 -- common/autotest_common.sh@10 -- # set +x 00:29:24.449 [2024-07-21 11:52:53.826963] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:24.449 [2024-07-21 11:52:53.827022] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:24.707 EAL: No free 2048 kB hugepages reported on node 1 00:29:24.707 [2024-07-21 11:52:53.932606] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:24.707 Write completed with error (sct=0, sc=8) 00:29:24.707 starting I/O failed 00:29:24.707 Read completed with error (sct=0, sc=8) 00:29:24.707 starting I/O failed 00:29:24.707 Write completed with error (sct=0, sc=8) 00:29:24.707 starting I/O failed 00:29:24.707 Write completed with error (sct=0, sc=8) 00:29:24.707 starting I/O failed 00:29:24.707 Write completed with error (sct=0, sc=8) 00:29:24.707 starting I/O failed 00:29:24.707 Read completed with error (sct=0, sc=8) 00:29:24.707 starting I/O failed 00:29:24.707 Write completed with error (sct=0, sc=8) 00:29:24.707 starting I/O failed 00:29:24.707 Write completed with error (sct=0, sc=8) 00:29:24.707 starting I/O failed 00:29:24.707 Read completed with error (sct=0, sc=8) 00:29:24.707 starting I/O failed 00:29:24.707 Write completed with error (sct=0, sc=8) 00:29:24.707 starting I/O failed 00:29:24.707 Read completed with error (sct=0, sc=8) 00:29:24.707 starting I/O failed 00:29:24.707 Write completed with error (sct=0, sc=8) 00:29:24.707 starting I/O failed 00:29:24.707 Read completed with error (sct=0, sc=8) 00:29:24.707 starting I/O failed 00:29:24.707 Read completed with error (sct=0, sc=8) 00:29:24.707 starting I/O failed 00:29:24.707 Write completed with error (sct=0, sc=8) 00:29:24.707 starting I/O failed 00:29:24.707 Write completed with error (sct=0, sc=8) 00:29:24.707 starting I/O failed 00:29:24.707 Write completed with error (sct=0, sc=8) 00:29:24.707 starting I/O failed 00:29:24.707 Read completed with error (sct=0, sc=8) 00:29:24.707 starting I/O failed 00:29:24.707 Write completed with error (sct=0, sc=8) 00:29:24.707 starting I/O failed 00:29:24.707 Read completed with error (sct=0, sc=8) 00:29:24.707 starting I/O failed 00:29:24.707 Read completed with error (sct=0, sc=8) 00:29:24.707 starting I/O failed 00:29:24.707 Read completed with error (sct=0, sc=8) 00:29:24.707 starting I/O failed 00:29:24.707 Write completed with error (sct=0, sc=8) 00:29:24.707 starting I/O failed 00:29:24.707 Write completed with error (sct=0, sc=8) 00:29:24.707 starting I/O failed 00:29:24.707 Read completed with error (sct=0, sc=8) 00:29:24.707 starting I/O failed 00:29:24.707 Write completed with error (sct=0, sc=8) 00:29:24.707 starting I/O failed 00:29:24.707 Read completed with error (sct=0, sc=8) 00:29:24.707 starting I/O failed 00:29:24.707 Read completed with error (sct=0, sc=8) 00:29:24.707 starting I/O failed 00:29:24.707 Read completed with error (sct=0, sc=8) 00:29:24.707 starting I/O failed 00:29:24.707 Read completed with error (sct=0, sc=8) 00:29:24.707 starting I/O failed 00:29:24.707 Read completed with error (sct=0, sc=8) 00:29:24.707 starting I/O failed 00:29:24.707 Write completed with error (sct=0, sc=8) 00:29:24.707 starting I/O failed 00:29:24.707 [2024-07-21 11:52:53.966763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.707 [2024-07-21 11:52:53.968768] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:24.707 [2024-07-21 11:52:53.968872] app.c: 
488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:24.707 [2024-07-21 11:52:53.968883] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:24.707 [2024-07-21 11:52:53.968892] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:24.707 [2024-07-21 11:52:53.969010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:29:24.707 [2024-07-21 11:52:53.969120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:29:24.707 [2024-07-21 11:52:53.969230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:24.707 [2024-07-21 11:52:53.969231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:29:25.272 11:52:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:25.272 11:52:54 -- common/autotest_common.sh@852 -- # return 0 00:29:25.272 11:52:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:25.272 11:52:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:25.272 11:52:54 -- common/autotest_common.sh@10 -- # set +x 00:29:25.272 11:52:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:25.272 11:52:54 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:25.272 11:52:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:25.272 11:52:54 -- common/autotest_common.sh@10 -- # set +x 00:29:25.272 Malloc0 00:29:25.272 11:52:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:25.272 11:52:54 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:29:25.272 11:52:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:25.272 11:52:54 -- common/autotest_common.sh@10 -- # set +x 00:29:25.530 [2024-07-21 11:52:54.722532] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x25127d0/0x251eb40) succeed. 00:29:25.530 [2024-07-21 11:52:54.733206] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2513dc0/0x25bec40) succeed. 
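The rpc_cmd calls just before and after this point stand up the failover target for tc3. A hedged reconstruction of the same sequence as direct rpc.py invocations (script path assumed, defaults elided), ending with the listeners on the alternate address 192.168.100.9 that the reconnect tool will later fail over to:

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420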
00:29:25.530 11:52:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:25.530 11:52:54 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:25.530 11:52:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:25.530 11:52:54 -- common/autotest_common.sh@10 -- # set +x 00:29:25.530 11:52:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:25.530 11:52:54 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:25.530 11:52:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:25.530 11:52:54 -- common/autotest_common.sh@10 -- # set +x 00:29:25.530 11:52:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:25.530 11:52:54 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:29:25.530 11:52:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:25.530 11:52:54 -- common/autotest_common.sh@10 -- # set +x 00:29:25.530 [2024-07-21 11:52:54.881728] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:29:25.530 11:52:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:25.530 11:52:54 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:29:25.530 11:52:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:25.530 11:52:54 -- common/autotest_common.sh@10 -- # set +x 00:29:25.530 11:52:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:25.530 11:52:54 -- host/target_disconnect.sh@73 -- # wait 2523323 00:29:25.788 Write completed with error (sct=0, sc=8) 00:29:25.788 starting I/O failed 00:29:25.788 Write completed with error (sct=0, sc=8) 00:29:25.788 starting I/O failed 00:29:25.788 Write completed with error (sct=0, sc=8) 00:29:25.788 starting I/O failed 00:29:25.788 Write completed with error (sct=0, sc=8) 00:29:25.788 starting I/O failed 00:29:25.788 Read completed with error (sct=0, sc=8) 00:29:25.788 starting I/O failed 00:29:25.788 Write completed with error (sct=0, sc=8) 00:29:25.788 starting I/O failed 00:29:25.788 Read completed with error (sct=0, sc=8) 00:29:25.788 starting I/O failed 00:29:25.788 Write completed with error (sct=0, sc=8) 00:29:25.788 starting I/O failed 00:29:25.788 Write completed with error (sct=0, sc=8) 00:29:25.788 starting I/O failed 00:29:25.788 Read completed with error (sct=0, sc=8) 00:29:25.788 starting I/O failed 00:29:25.788 Read completed with error (sct=0, sc=8) 00:29:25.788 starting I/O failed 00:29:25.788 Write completed with error (sct=0, sc=8) 00:29:25.788 starting I/O failed 00:29:25.788 Write completed with error (sct=0, sc=8) 00:29:25.788 starting I/O failed 00:29:25.788 Read completed with error (sct=0, sc=8) 00:29:25.788 starting I/O failed 00:29:25.788 Write completed with error (sct=0, sc=8) 00:29:25.788 starting I/O failed 00:29:25.788 Write completed with error (sct=0, sc=8) 00:29:25.788 starting I/O failed 00:29:25.788 Read completed with error (sct=0, sc=8) 00:29:25.788 starting I/O failed 00:29:25.788 Read completed with error (sct=0, sc=8) 00:29:25.788 starting I/O failed 00:29:25.788 Read completed with error (sct=0, sc=8) 00:29:25.788 starting I/O failed 00:29:25.788 Write completed with error (sct=0, sc=8) 00:29:25.788 starting I/O failed 00:29:25.788 Read completed with error (sct=0, sc=8) 00:29:25.788 starting I/O failed 00:29:25.788 Write completed with 
error (sct=0, sc=8) 00:29:25.788 starting I/O failed 00:29:25.788 Write completed with error (sct=0, sc=8) 00:29:25.788 starting I/O failed 00:29:25.788 Write completed with error (sct=0, sc=8) 00:29:25.788 starting I/O failed 00:29:25.788 Read completed with error (sct=0, sc=8) 00:29:25.788 starting I/O failed 00:29:25.788 Write completed with error (sct=0, sc=8) 00:29:25.788 starting I/O failed 00:29:25.788 Write completed with error (sct=0, sc=8) 00:29:25.788 starting I/O failed 00:29:25.788 Read completed with error (sct=0, sc=8) 00:29:25.788 starting I/O failed 00:29:25.788 Write completed with error (sct=0, sc=8) 00:29:25.788 starting I/O failed 00:29:25.788 Read completed with error (sct=0, sc=8) 00:29:25.788 starting I/O failed 00:29:25.788 Read completed with error (sct=0, sc=8) 00:29:25.788 starting I/O failed 00:29:25.788 Write completed with error (sct=0, sc=8) 00:29:25.788 starting I/O failed 00:29:25.788 [2024-07-21 11:52:54.971880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.788 [2024-07-21 11:52:54.973375] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:25.788 [2024-07-21 11:52:54.973395] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:25.788 [2024-07-21 11:52:54.973403] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:26.720 [2024-07-21 11:52:55.977345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.720 qpair failed and we were unable to recover it. 00:29:26.720 [2024-07-21 11:52:55.978895] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:26.720 [2024-07-21 11:52:55.978912] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:26.720 [2024-07-21 11:52:55.978920] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:27.651 [2024-07-21 11:52:56.982769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.651 qpair failed and we were unable to recover it. 00:29:27.651 [2024-07-21 11:52:56.984328] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:27.651 [2024-07-21 11:52:56.984344] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:27.651 [2024-07-21 11:52:56.984352] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:28.584 [2024-07-21 11:52:57.988242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.584 qpair failed and we were unable to recover it. 
00:29:28.584 [2024-07-21 11:52:57.989730] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:28.584 [2024-07-21 11:52:57.989747] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:28.584 [2024-07-21 11:52:57.989755] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:29.956 [2024-07-21 11:52:58.993652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.956 qpair failed and we were unable to recover it. 00:29:29.956 [2024-07-21 11:52:58.995090] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:29.956 [2024-07-21 11:52:58.995106] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:29.956 [2024-07-21 11:52:58.995114] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:30.890 [2024-07-21 11:52:59.998929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.890 qpair failed and we were unable to recover it. 00:29:30.890 [2024-07-21 11:53:00.000357] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:30.890 [2024-07-21 11:53:00.000373] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:30.890 [2024-07-21 11:53:00.000381] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:31.822 [2024-07-21 11:53:01.004305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.822 qpair failed and we were unable to recover it. 00:29:31.822 [2024-07-21 11:53:01.005776] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:31.822 [2024-07-21 11:53:01.005793] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:31.822 [2024-07-21 11:53:01.005801] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:32.754 [2024-07-21 11:53:02.009734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.754 qpair failed and we were unable to recover it. 00:29:32.754 [2024-07-21 11:53:02.011378] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:32.754 [2024-07-21 11:53:02.011401] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:32.754 [2024-07-21 11:53:02.011410] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:33.686 [2024-07-21 11:53:03.015180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.686 qpair failed and we were unable to recover it. 
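Each retry above fails the same way: the CM channel delivers RDMA_CM_EVENT_REJECTED instead of the expected RDMA_CM_EVENT_ESTABLISHED, and the driver surfaces "RDMA connect error -74"; errno 74 on Linux is EBADMSG ("Bad message"). Reading the timestamps, the host re-issues CONNECT roughly once per second until the keep-alive finally fails. Confirming the errno mapping from a shell:

    python3 -c 'import errno, os; print(errno.errorcode[74], "-", os.strerror(74))'   # EBADMSG - Bad message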
00:29:33.686 [2024-07-21 11:53:03.016680] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:33.686 [2024-07-21 11:53:03.016696] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:33.686 [2024-07-21 11:53:03.016704] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:34.614 [2024-07-21 11:53:04.020584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.614 qpair failed and we were unable to recover it. 00:29:34.614 [2024-07-21 11:53:04.022283] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:34.614 [2024-07-21 11:53:04.022311] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:34.614 [2024-07-21 11:53:04.022323] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:29:35.982 [2024-07-21 11:53:05.026179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.982 qpair failed and we were unable to recover it. 00:29:35.982 [2024-07-21 11:53:05.027633] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:35.982 [2024-07-21 11:53:05.027650] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:35.982 [2024-07-21 11:53:05.027658] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:29:36.911 [2024-07-21 11:53:06.031635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.911 qpair failed and we were unable to recover it. 00:29:36.911 [2024-07-21 11:53:06.031759] nvme_ctrlr.c:4339:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:29:36.911 A controller has encountered a failure and is being reset. 
00:29:36.911 Resorting to new failover address 192.168.100.9 00:29:37.841 Write completed with error (sct=0, sc=8) 00:29:37.841 starting I/O failed 00:29:37.841 Read completed with error (sct=0, sc=8) 00:29:37.841 starting I/O failed 00:29:37.841 Write completed with error (sct=0, sc=8) 00:29:37.841 starting I/O failed 00:29:37.841 Read completed with error (sct=0, sc=8) 00:29:37.841 starting I/O failed 00:29:37.841 Write completed with error (sct=0, sc=8) 00:29:37.841 starting I/O failed 00:29:37.841 Read completed with error (sct=0, sc=8) 00:29:37.841 starting I/O failed 00:29:37.841 Read completed with error (sct=0, sc=8) 00:29:37.841 starting I/O failed 00:29:37.841 Write completed with error (sct=0, sc=8) 00:29:37.841 starting I/O failed 00:29:37.841 Write completed with error (sct=0, sc=8) 00:29:37.841 starting I/O failed 00:29:37.841 Read completed with error (sct=0, sc=8) 00:29:37.841 starting I/O failed 00:29:37.841 Write completed with error (sct=0, sc=8) 00:29:37.841 starting I/O failed 00:29:37.841 Write completed with error (sct=0, sc=8) 00:29:37.841 starting I/O failed 00:29:37.841 Read completed with error (sct=0, sc=8) 00:29:37.841 starting I/O failed 00:29:37.841 Read completed with error (sct=0, sc=8) 00:29:37.841 starting I/O failed 00:29:37.841 Read completed with error (sct=0, sc=8) 00:29:37.841 starting I/O failed 00:29:37.841 Read completed with error (sct=0, sc=8) 00:29:37.841 starting I/O failed 00:29:37.841 Read completed with error (sct=0, sc=8) 00:29:37.841 starting I/O failed 00:29:37.841 Write completed with error (sct=0, sc=8) 00:29:37.841 starting I/O failed 00:29:37.841 Read completed with error (sct=0, sc=8) 00:29:37.841 starting I/O failed 00:29:37.841 Write completed with error (sct=0, sc=8) 00:29:37.841 starting I/O failed 00:29:37.841 Read completed with error (sct=0, sc=8) 00:29:37.841 starting I/O failed 00:29:37.841 Write completed with error (sct=0, sc=8) 00:29:37.841 starting I/O failed 00:29:37.841 Read completed with error (sct=0, sc=8) 00:29:37.841 starting I/O failed 00:29:37.841 Read completed with error (sct=0, sc=8) 00:29:37.841 starting I/O failed 00:29:37.841 Read completed with error (sct=0, sc=8) 00:29:37.841 starting I/O failed 00:29:37.841 Read completed with error (sct=0, sc=8) 00:29:37.841 starting I/O failed 00:29:37.841 Write completed with error (sct=0, sc=8) 00:29:37.841 starting I/O failed 00:29:37.841 Read completed with error (sct=0, sc=8) 00:29:37.841 starting I/O failed 00:29:37.841 Read completed with error (sct=0, sc=8) 00:29:37.841 starting I/O failed 00:29:37.841 Write completed with error (sct=0, sc=8) 00:29:37.841 starting I/O failed 00:29:37.841 Read completed with error (sct=0, sc=8) 00:29:37.841 starting I/O failed 00:29:37.841 Read completed with error (sct=0, sc=8) 00:29:37.841 starting I/O failed 00:29:37.841 [2024-07-21 11:53:07.036855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:37.841 [2024-07-21 11:53:07.038390] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:37.841 [2024-07-21 11:53:07.038409] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:37.841 [2024-07-21 11:53:07.038421] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d03c0 00:29:38.769 [2024-07-21 11:53:08.042188] nvme_qpair.c: 
804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.769 qpair failed and we were unable to recover it. 00:29:38.769 [2024-07-21 11:53:08.043908] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:38.769 [2024-07-21 11:53:08.043927] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:38.769 [2024-07-21 11:53:08.043938] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d03c0 00:29:39.698 [2024-07-21 11:53:09.047841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:39.698 qpair failed and we were unable to recover it. 00:29:39.698 [2024-07-21 11:53:09.047918] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.698 [2024-07-21 11:53:09.048017] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:29:39.698 [2024-07-21 11:53:09.078573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:39.698 Controller properly reset. 00:29:39.698 Initializing NVMe Controllers 00:29:39.698 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:39.698 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:39.698 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:39.698 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:39.698 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:39.698 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:39.698 Initialization complete. Launching workers. 
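After the failover reset completes, a hedged sanity check from the target side would be dumping the subsystem state to confirm the listener on the failover address 192.168.100.9 is up (rpc.py path assumed, as in the sketch above):

    ./scripts/rpc.py nvmf_get_subsystems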
00:29:39.698 Starting thread on core 1 00:29:39.698 Starting thread on core 2 00:29:39.698 Starting thread on core 3 00:29:39.698 Starting thread on core 0 00:29:39.955 11:53:09 -- host/target_disconnect.sh@74 -- # sync 00:29:39.955 00:29:39.955 real 0m19.367s 00:29:39.955 user 1m2.746s 00:29:39.955 sys 0m6.334s 00:29:39.955 11:53:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:39.955 11:53:09 -- common/autotest_common.sh@10 -- # set +x 00:29:39.955 ************************************ 00:29:39.955 END TEST nvmf_target_disconnect_tc3 00:29:39.955 ************************************ 00:29:39.955 11:53:09 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:29:39.955 11:53:09 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:29:39.955 11:53:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:39.955 11:53:09 -- nvmf/common.sh@116 -- # sync 00:29:39.955 11:53:09 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:29:39.955 11:53:09 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:29:39.955 11:53:09 -- nvmf/common.sh@119 -- # set +e 00:29:39.955 11:53:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:39.955 11:53:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:29:39.955 rmmod nvme_rdma 00:29:39.955 rmmod nvme_fabrics 00:29:39.955 11:53:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:39.955 11:53:09 -- nvmf/common.sh@123 -- # set -e 00:29:39.955 11:53:09 -- nvmf/common.sh@124 -- # return 0 00:29:39.955 11:53:09 -- nvmf/common.sh@477 -- # '[' -n 2523968 ']' 00:29:39.955 11:53:09 -- nvmf/common.sh@478 -- # killprocess 2523968 00:29:39.955 11:53:09 -- common/autotest_common.sh@926 -- # '[' -z 2523968 ']' 00:29:39.955 11:53:09 -- common/autotest_common.sh@930 -- # kill -0 2523968 00:29:39.955 11:53:09 -- common/autotest_common.sh@931 -- # uname 00:29:39.955 11:53:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:39.955 11:53:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2523968 00:29:39.955 11:53:09 -- common/autotest_common.sh@932 -- # process_name=reactor_4 00:29:39.955 11:53:09 -- common/autotest_common.sh@936 -- # '[' reactor_4 = sudo ']' 00:29:39.955 11:53:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2523968' 00:29:39.955 killing process with pid 2523968 00:29:39.955 11:53:09 -- common/autotest_common.sh@945 -- # kill 2523968 00:29:39.955 11:53:09 -- common/autotest_common.sh@950 -- # wait 2523968 00:29:40.212 11:53:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:40.212 11:53:09 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:29:40.212 00:29:40.212 real 0m42.020s 00:29:40.212 user 2m34.755s 00:29:40.212 sys 0m16.634s 00:29:40.212 11:53:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:40.212 11:53:09 -- common/autotest_common.sh@10 -- # set +x 00:29:40.212 ************************************ 00:29:40.212 END TEST nvmf_target_disconnect 00:29:40.212 ************************************ 00:29:40.212 11:53:09 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:29:40.212 11:53:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:40.212 11:53:09 -- common/autotest_common.sh@10 -- # set +x 00:29:40.212 11:53:09 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:29:40.212 00:29:40.212 real 22m6.760s 00:29:40.212 user 68m1.182s 00:29:40.212 sys 5m43.720s 00:29:40.469 11:53:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:40.469 11:53:09 -- common/autotest_common.sh@10 -- # set +x 00:29:40.469 ************************************ 
00:29:40.469 END TEST nvmf_rdma 00:29:40.469 ************************************ 00:29:40.469 11:53:09 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:29:40.469 11:53:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:40.469 11:53:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:40.469 11:53:09 -- common/autotest_common.sh@10 -- # set +x 00:29:40.469 ************************************ 00:29:40.469 START TEST spdkcli_nvmf_rdma 00:29:40.469 ************************************ 00:29:40.469 11:53:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:29:40.469 * Looking for test storage... 00:29:40.469 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:29:40.469 11:53:09 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:29:40.469 11:53:09 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:29:40.469 11:53:09 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:29:40.469 11:53:09 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:40.469 11:53:09 -- nvmf/common.sh@7 -- # uname -s 00:29:40.469 11:53:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:40.469 11:53:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:40.469 11:53:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:40.469 11:53:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:40.469 11:53:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:40.469 11:53:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:40.469 11:53:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:40.469 11:53:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:40.469 11:53:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:40.469 11:53:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:40.469 11:53:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:29:40.469 11:53:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:29:40.469 11:53:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:40.469 11:53:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:40.469 11:53:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:40.469 11:53:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:40.469 11:53:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:40.469 11:53:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:40.469 11:53:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:40.469 11:53:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.469 11:53:09 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.469 11:53:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.469 11:53:09 -- paths/export.sh@5 -- # export PATH 00:29:40.469 11:53:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.469 11:53:09 -- nvmf/common.sh@46 -- # : 0 00:29:40.469 11:53:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:40.469 11:53:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:40.469 11:53:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:40.469 11:53:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:40.469 11:53:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:40.469 11:53:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:40.469 11:53:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:40.469 11:53:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:40.469 11:53:09 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:40.469 11:53:09 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:40.469 11:53:09 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:40.469 11:53:09 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:40.469 11:53:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:40.469 11:53:09 -- common/autotest_common.sh@10 -- # set +x 00:29:40.469 11:53:09 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:40.469 11:53:09 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2526709 00:29:40.469 11:53:09 -- spdkcli/common.sh@34 -- # waitforlisten 2526709 00:29:40.469 11:53:09 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:40.469 11:53:09 -- common/autotest_common.sh@819 -- # '[' -z 2526709 ']' 00:29:40.469 11:53:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:40.469 11:53:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:40.469 11:53:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:40.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:40.469 11:53:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:40.469 11:53:09 -- common/autotest_common.sh@10 -- # set +x 00:29:40.469 [2024-07-21 11:53:09.863156] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:40.469 [2024-07-21 11:53:09.863212] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2526709 ] 00:29:40.726 EAL: No free 2048 kB hugepages reported on node 1 00:29:40.726 [2024-07-21 11:53:09.947788] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:40.726 [2024-07-21 11:53:09.986945] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:40.726 [2024-07-21 11:53:09.987134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:40.726 [2024-07-21 11:53:09.987137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:41.291 11:53:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:41.291 11:53:10 -- common/autotest_common.sh@852 -- # return 0 00:29:41.291 11:53:10 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:41.291 11:53:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:41.291 11:53:10 -- common/autotest_common.sh@10 -- # set +x 00:29:41.291 11:53:10 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:41.291 11:53:10 -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:29:41.291 11:53:10 -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:29:41.291 11:53:10 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:29:41.291 11:53:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:41.291 11:53:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:41.291 11:53:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:41.291 11:53:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:41.291 11:53:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:41.548 11:53:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:41.548 11:53:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.548 11:53:10 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:41.548 11:53:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:41.548 11:53:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:41.548 11:53:10 -- common/autotest_common.sh@10 -- # set +x 00:29:49.656 11:53:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:49.656 11:53:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:49.656 11:53:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:49.656 11:53:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:49.656 11:53:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:49.656 11:53:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:49.656 11:53:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:49.656 11:53:18 -- nvmf/common.sh@294 -- # net_devs=() 00:29:49.656 11:53:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:49.656 11:53:18 -- nvmf/common.sh@295 -- # e810=() 00:29:49.656 11:53:18 -- nvmf/common.sh@295 -- # local -ga e810 00:29:49.656 11:53:18 -- nvmf/common.sh@296 -- # x722=() 00:29:49.656 11:53:18 -- nvmf/common.sh@296 -- # local -ga x722 00:29:49.656 11:53:18 -- nvmf/common.sh@297 -- # mlx=() 00:29:49.656 11:53:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:49.656 11:53:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:49.656 11:53:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:49.656 11:53:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:49.656 11:53:18 -- nvmf/common.sh@305 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:49.656 11:53:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:49.656 11:53:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:49.656 11:53:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:49.656 11:53:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:49.656 11:53:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:49.656 11:53:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:49.656 11:53:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:49.656 11:53:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:49.656 11:53:18 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:29:49.656 11:53:18 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:29:49.656 11:53:18 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:29:49.656 11:53:18 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:29:49.656 11:53:18 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:29:49.656 11:53:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:49.656 11:53:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:49.656 11:53:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:29:49.656 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:29:49.656 11:53:18 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:29:49.656 11:53:18 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:29:49.656 11:53:18 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:49.656 11:53:18 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:49.656 11:53:18 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:29:49.656 11:53:18 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:29:49.656 11:53:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:49.656 11:53:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:29:49.656 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:29:49.656 11:53:18 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:29:49.656 11:53:18 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:29:49.656 11:53:18 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:49.657 11:53:18 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:49.657 11:53:18 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:29:49.657 11:53:18 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:29:49.657 11:53:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:49.657 11:53:18 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:29:49.657 11:53:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:49.657 11:53:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:49.657 11:53:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:49.657 11:53:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:49.657 11:53:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:29:49.657 Found net devices under 0000:d9:00.0: mlx_0_0 00:29:49.657 11:53:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:49.657 11:53:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:49.657 11:53:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:49.657 11:53:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:49.657 11:53:18 -- nvmf/common.sh@387 -- # 
00:29:49.657 11:53:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:49.657 11:53:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:29:49.657 Found net devices under 0000:d9:00.1: mlx_0_1
00:29:49.657 11:53:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:29:49.657 11:53:18 -- nvmf/common.sh@392 -- # (( 2 == 0 ))
00:29:49.657 11:53:18 -- nvmf/common.sh@402 -- # is_hw=yes
00:29:49.657 11:53:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]]
00:29:49.657 11:53:18 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]]
00:29:49.657 11:53:18 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]]
00:29:49.657 11:53:18 -- nvmf/common.sh@408 -- # rdma_device_init
00:29:49.657 11:53:18 -- nvmf/common.sh@489 -- # load_ib_rdma_modules
00:29:49.657 11:53:18 -- nvmf/common.sh@57 -- # uname
00:29:49.657 11:53:18 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']'
00:29:49.657 11:53:18 -- nvmf/common.sh@61 -- # modprobe ib_cm
00:29:49.657 11:53:18 -- nvmf/common.sh@62 -- # modprobe ib_core
00:29:49.657 11:53:18 -- nvmf/common.sh@63 -- # modprobe ib_umad
00:29:49.657 11:53:18 -- nvmf/common.sh@64 -- # modprobe ib_uverbs
00:29:49.657 11:53:18 -- nvmf/common.sh@65 -- # modprobe iw_cm
00:29:49.657 11:53:18 -- nvmf/common.sh@66 -- # modprobe rdma_cm
00:29:49.657 11:53:19 -- nvmf/common.sh@67 -- # modprobe rdma_ucm
00:29:49.657 11:53:19 -- nvmf/common.sh@490 -- # allocate_nic_ips
00:29:49.657 11:53:19 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:29:49.657 11:53:19 -- nvmf/common.sh@72 -- # get_rdma_if_list
00:29:49.657 11:53:19 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs
00:29:49.657 11:53:19 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs
00:29:49.657 11:53:19 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net
00:29:49.657 11:53:19 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:29:49.657 11:53:19 -- nvmf/common.sh@95 -- # (( 2 == 0 ))
00:29:49.657 11:53:19 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}"
00:29:49.657 11:53:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:29:49.657 11:53:19 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:29:49.657 11:53:19 -- nvmf/common.sh@103 -- # echo mlx_0_0
00:29:49.657 11:53:19 -- nvmf/common.sh@104 -- # continue 2
00:29:49.657 11:53:19 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}"
00:29:49.657 11:53:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:29:49.657 11:53:19 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:29:49.657 11:53:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:29:49.657 11:53:19 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:29:49.657 11:53:19 -- nvmf/common.sh@103 -- # echo mlx_0_1
00:29:49.657 11:53:19 -- nvmf/common.sh@104 -- # continue 2
00:29:49.657 11:53:19 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list)
00:29:49.657 11:53:19 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0
00:29:49.657 11:53:19 -- nvmf/common.sh@111 -- # interface=mlx_0_0
00:29:49.657 11:53:19 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0
00:29:49.657 11:53:19 -- nvmf/common.sh@112 -- # awk '{print $4}'
00:29:49.657 11:53:19 -- nvmf/common.sh@112 -- # cut -d/ -f1
00:29:49.657 11:53:19 -- nvmf/common.sh@73 -- # ip=192.168.100.8
00:29:49.657 11:53:19 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]]
00:29:49.657 11:53:19 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0
00:29:49.657 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:29:49.657 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:29:49.657 altname enp217s0f0np0
00:29:49.657 altname ens818f0np0
00:29:49.657 inet 192.168.100.8/24 scope global mlx_0_0
00:29:49.657 valid_lft forever preferred_lft forever
00:29:49.657 11:53:19 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list)
00:29:49.657 11:53:19 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1
00:29:49.657 11:53:19 -- nvmf/common.sh@111 -- # interface=mlx_0_1
00:29:49.657 11:53:19 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1
00:29:49.914 11:53:19 -- nvmf/common.sh@112 -- # awk '{print $4}'
00:29:49.914 11:53:19 -- nvmf/common.sh@112 -- # cut -d/ -f1
00:29:49.914 11:53:19 -- nvmf/common.sh@73 -- # ip=192.168.100.9
00:29:49.914 11:53:19 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]]
00:29:49.914 11:53:19 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1
00:29:49.914 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:29:49.914 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:29:49.914 altname enp217s0f1np1
00:29:49.914 altname ens818f1np1
00:29:49.914 inet 192.168.100.9/24 scope global mlx_0_1
00:29:49.914 valid_lft forever preferred_lft forever
00:29:49.914 11:53:19 -- nvmf/common.sh@410 -- # return 0
00:29:49.914 11:53:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:29:49.914 11:53:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:29:49.914 11:53:19 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]]
00:29:49.914 11:53:19 -- nvmf/common.sh@444 -- # get_available_rdma_ips
00:29:49.914 11:53:19 -- nvmf/common.sh@85 -- # get_rdma_if_list
00:29:49.914 11:53:19 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs
00:29:49.914 11:53:19 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs
00:29:49.914 11:53:19 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net
00:29:49.914 11:53:19 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:29:49.914 11:53:19 -- nvmf/common.sh@95 -- # (( 2 == 0 ))
00:29:49.914 11:53:19 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}"
00:29:49.914 11:53:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:29:49.914 11:53:19 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:29:49.914 11:53:19 -- nvmf/common.sh@103 -- # echo mlx_0_0
00:29:49.914 11:53:19 -- nvmf/common.sh@104 -- # continue 2
00:29:49.914 11:53:19 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}"
00:29:49.914 11:53:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:29:49.914 11:53:19 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:29:49.914 11:53:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:29:49.914 11:53:19 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:29:49.914 11:53:19 -- nvmf/common.sh@103 -- # echo mlx_0_1
00:29:49.914 11:53:19 -- nvmf/common.sh@104 -- # continue 2
00:29:49.914 11:53:19 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list)
00:29:49.914 11:53:19 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0
00:29:49.914 11:53:19 -- nvmf/common.sh@111 -- # interface=mlx_0_0
00:29:49.914 11:53:19 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0
00:29:49.914 11:53:19 -- nvmf/common.sh@112 -- # awk '{print $4}'
00:29:49.914 11:53:19 -- nvmf/common.sh@112 -- # cut -d/ -f1
00:29:49.914 11:53:19 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list)
00:29:49.914 11:53:19 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1
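For reference, the get_ip_address helper traced above reduces to a single pipeline over `ip -o -4`; a minimal sketch of it (interface name taken from this run; any configured interface works):

    #!/usr/bin/env bash
    # Print an interface's first IPv4 address without the /prefix length, as in the trace.
    interface=mlx_0_0
    ip_addr=$(ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1)
    [[ -z $ip_addr ]] && { echo "$interface has no IPv4 address" >&2; exit 1; }
    echo "$ip_addr"    # prints 192.168.100.8 on the machine traced above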
00:29:49.914 11:53:19 -- nvmf/common.sh@111 -- # interface=mlx_0_1
00:29:49.914 11:53:19 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1
00:29:49.914 11:53:19 -- nvmf/common.sh@112 -- # awk '{print $4}'
00:29:49.914 11:53:19 -- nvmf/common.sh@112 -- # cut -d/ -f1
00:29:49.914 11:53:19 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8
00:29:49.914 192.168.100.9'
00:29:49.914 11:53:19 -- nvmf/common.sh@445 -- # echo '192.168.100.8
00:29:49.914 192.168.100.9'
00:29:49.914 11:53:19 -- nvmf/common.sh@445 -- # head -n 1
00:29:49.914 11:53:19 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:29:49.914 11:53:19 -- nvmf/common.sh@446 -- # echo '192.168.100.8
00:29:49.914 192.168.100.9'
00:29:49.914 11:53:19 -- nvmf/common.sh@446 -- # tail -n +2
00:29:49.914 11:53:19 -- nvmf/common.sh@446 -- # head -n 1
00:29:49.914 11:53:19 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:29:49.914 11:53:19 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']'
00:29:49.914 11:53:19 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:29:49.914 11:53:19 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']'
00:29:49.914 11:53:19 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']'
00:29:49.914 11:53:19 -- nvmf/common.sh@462 -- # modprobe nvme-rdma
00:29:49.914 11:53:19 -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8
00:29:49.914 11:53:19 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config
00:29:49.914 11:53:19 -- common/autotest_common.sh@712 -- # xtrace_disable
00:29:49.914 11:53:19 -- common/autotest_common.sh@10 -- # set +x
00:29:49.914 11:53:19 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True
00:29:49.914 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True
00:29:49.914 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True
00:29:49.914 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True
00:29:49.914 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True
00:29:49.914 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True
00:29:49.914 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True
00:29:49.914 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:29:49.914 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True
00:29:49.914 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True
00:29:49.914 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True
00:29:49.914 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:29:49.914 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True
00:29:49.914 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True
00:29:49.914 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:29:49.914 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True
00:29:49.914 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True
00:29:49.914 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True
00:29:49.914 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:29:49.914 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:29:49.914 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\''
00:29:49.914 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True
00:29:49.914 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True
00:29:49.915 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True
00:29:49.915 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:29:49.915 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True
00:29:49.915 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True
00:29:49.915 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\''
00:29:49.915 '
00:29:50.171 [2024-07-21 11:53:19.542605] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05
00:29:52.694 [2024-07-21 11:53:21.606810] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1934d40/0x1a657c0) succeed.
00:29:52.694 [2024-07-21 11:53:21.616839] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1936420/0x1945640) succeed.
00:29:53.627 [2024-07-21 11:53:22.847650] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 ***
00:29:56.155 [2024-07-21 11:53:25.014578] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 ***
00:29:57.530 [2024-07-21 11:53:26.872863] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 ***
00:29:58.901 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True]
00:29:58.901 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True]
00:29:58.901 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True]
00:29:58.901 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True]
00:29:58.901 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True]
00:29:58.901 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True]
00:29:58.901 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True]
00:29:58.901 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True]
00:29:58.901 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True]
00:29:58.901 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True]
00:29:58.901 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True]
00:29:58.901 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:29:58.901 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True]
00:29:58.901 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True]
00:29:58.901 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:29:58.901 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True]
00:29:58.901 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True]
00:29:58.901 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True]
00:29:58.901 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True]
00:29:58.901 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:29:58.901 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False]
00:29:58.901 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True]
00:29:58.901 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True]
00:29:58.901 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True]
00:29:58.901 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:29:58.901 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True]
00:29:58.901 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True]
00:29:58.901 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False]
00:29:59.158 11:53:28 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config
00:29:59.158 11:53:28 -- common/autotest_common.sh@718 -- # xtrace_disable
00:29:59.158 11:53:28 -- common/autotest_common.sh@10 -- # set +x
00:29:59.158 11:53:28 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match
00:29:59.158 11:53:28 -- common/autotest_common.sh@712 -- # xtrace_disable
00:29:59.158 11:53:28 -- common/autotest_common.sh@10 -- # set +x
00:29:59.158 11:53:28 -- spdkcli/nvmf.sh@69 -- # check_match
00:29:59.158 11:53:28 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf
00:29:59.414 11:53:28 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match
00:29:59.672 11:53:28 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test
00:29:59.672 11:53:28 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match
00:29:59.672 11:53:28 -- common/autotest_common.sh@718 -- # xtrace_disable
00:29:59.672 11:53:28 -- common/autotest_common.sh@10 -- # set +x
00:29:59.672 11:53:28 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config
00:29:59.672 11:53:28 -- common/autotest_common.sh@712 -- # xtrace_disable
00:29:59.672 11:53:28 -- common/autotest_common.sh@10 -- # set +x
00:29:59.672 11:53:28 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\''
00:29:59.672 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\''
00:29:59.672 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:29:59.672 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\''
00:29:59.672 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\''
00:29:59.672 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\''
00:29:59.672 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\''
00:29:59.672 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:29:59.672 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\''
00:29:59.672 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\''
00:29:59.672 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\''
00:29:59.672 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\''
00:29:59.672 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\''
00:29:59.672 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\''
00:29:59.672 '
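The check_match step traced above follows a capture-and-compare pattern: spdkcli output is written to a .test file and the match tool diffs it against the stored .match pattern. A sketch of that shape in Bash (the redirect into the .test file is implied by the trace rather than visible in it, so treat the exact plumbing as an assumption):

    # Capture the live /nvmf tree and compare it against the expected pattern file.
    rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk     # as in the trace
    match_dir=$rootdir/test/spdkcli/match_files
    "$rootdir/scripts/spdkcli.py" ll /nvmf > "$match_dir/spdkcli_nvmf.test"
    "$rootdir/test/app/match/match" "$match_dir/spdkcli_nvmf.test.match"
    rm -f "$match_dir/spdkcli_nvmf.test"                      # remove the capture on success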
00:30:05.024 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False]
00:30:05.024 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False]
00:30:05.024 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False]
00:30:05.024 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False]
00:30:05.024 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False]
00:30:05.024 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False]
00:30:05.024 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False]
00:30:05.024 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False]
00:30:05.024 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False]
00:30:05.024 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False]
00:30:05.024 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False]
00:30:05.024 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False]
00:30:05.024 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False]
00:30:05.024 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False]
00:30:05.024 11:53:33 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config
00:30:05.024 11:53:33 -- common/autotest_common.sh@718 -- # xtrace_disable
00:30:05.024 11:53:33 -- common/autotest_common.sh@10 -- # set +x
00:30:05.024 11:53:33 -- spdkcli/nvmf.sh@90 -- # killprocess 2526709
00:30:05.024 11:53:33 -- common/autotest_common.sh@926 -- # '[' -z 2526709 ']'
00:30:05.024 11:53:33 -- common/autotest_common.sh@930 -- # kill -0 2526709
00:30:05.025 11:53:33 -- common/autotest_common.sh@931 -- # uname
00:30:05.025 11:53:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:30:05.025 11:53:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2526709
00:30:05.025 11:53:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:30:05.025 11:53:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:30:05.025 11:53:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2526709'
00:30:05.025 killing process with pid 2526709
00:30:05.025 11:53:33 -- common/autotest_common.sh@945 -- # kill 2526709
00:30:05.025 [2024-07-21 11:53:33.980269] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times
00:30:05.025 11:53:33 -- common/autotest_common.sh@950 -- # wait 2526709
00:30:05.025 11:53:34 -- spdkcli/nvmf.sh@1 -- # nvmftestfini
00:30:05.025 11:53:34 -- nvmf/common.sh@476 -- # nvmfcleanup
00:30:05.025 11:53:34 -- nvmf/common.sh@116 -- # sync
00:30:05.025 11:53:34 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']'
00:30:05.025 11:53:34 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']'
00:30:05.025 11:53:34 -- nvmf/common.sh@119 -- # set +e
00:30:05.025 11:53:34 -- nvmf/common.sh@120 -- # for i in {1..20}
00:30:05.025 11:53:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma
00:30:05.025 rmmod nvme_rdma
00:30:05.025 rmmod nvme_fabrics
00:30:05.025 11:53:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
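nvmfcleanup above unloads the fabrics modules under set +e with a bounded retry loop, since the modules can stay busy briefly while queues drain. The shape of that idiom, as a sketch (the sleep between attempts is illustrative, not necessarily what the traced script does):

    # Retry module unload up to 20 times, tolerating transient 'module in use' errors.
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1    # give in-flight I/O a moment to drain before the next attempt
    done
    set -e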
00:30:05.025 11:53:34 -- nvmf/common.sh@123 -- # set -e
00:30:05.025 11:53:34 -- nvmf/common.sh@124 -- # return 0
00:30:05.025 11:53:34 -- nvmf/common.sh@477 -- # '[' -n '' ']'
00:30:05.025 11:53:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:30:05.025 11:53:34 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]]
00:30:05.025
00:30:05.025 real 0m24.580s
00:30:05.025 user 0m52.201s
00:30:05.025 sys 0m7.434s
00:30:05.025 11:53:34 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:30:05.025 11:53:34 -- common/autotest_common.sh@10 -- # set +x
00:30:05.025 ************************************
00:30:05.025 END TEST spdkcli_nvmf_rdma
00:30:05.025 ************************************
00:30:05.025 11:53:34 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:30:05.025 11:53:34 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:30:05.025 11:53:34 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:30:05.025 11:53:34 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:30:05.025 11:53:34 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:30:05.025 11:53:34 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:30:05.025 11:53:34 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:30:05.025 11:53:34 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:30:05.025 11:53:34 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:30:05.025 11:53:34 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:30:05.025 11:53:34 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:30:05.025 11:53:34 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:30:05.025 11:53:34 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:30:05.025 11:53:34 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:30:05.025 11:53:34 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]]
00:30:05.025 11:53:34 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT
00:30:05.025 11:53:34 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup
00:30:05.025 11:53:34 -- common/autotest_common.sh@712 -- # xtrace_disable
00:30:05.025 11:53:34 -- common/autotest_common.sh@10 -- # set +x
00:30:05.025 11:53:34 -- spdk/autotest.sh@386 -- # autotest_cleanup
00:30:05.025 11:53:34 -- common/autotest_common.sh@1371 -- # local autotest_es=0
00:30:05.025 11:53:34 -- common/autotest_common.sh@1372 -- # xtrace_disable
00:30:05.025 11:53:34 -- common/autotest_common.sh@10 -- # set +x
00:30:11.575 INFO: APP EXITING
00:30:11.575 INFO: killing all VMs
00:30:11.575 INFO: killing vhost app
00:30:11.575 WARN: no vhost pid file found
00:30:11.575 INFO: EXIT DONE
00:30:14.857 Waiting for block devices as requested
00:30:14.857 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:30:14.857 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:30:14.857 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:30:15.115 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:30:15.115 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:30:15.115 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:30:15.115 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:30:15.401 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:30:15.401 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:30:15.401 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:30:15.659 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:30:15.659 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:30:15.659 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:30:15.917 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:30:15.917 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:30:15.917 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:30:16.175 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme
00:30:20.358 Cleaning
Removing: /var/run/dpdk/spdk0/config
00:30:20.358 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:30:20.358 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:30:20.358 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:30:20.358 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:30:20.358 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:30:20.358 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:30:20.358 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:30:20.358 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:30:20.358 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:30:20.358 Removing: /var/run/dpdk/spdk0/hugepage_info
00:30:20.358 Removing: /var/run/dpdk/spdk1/config
00:30:20.358 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:30:20.358 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:30:20.358 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:30:20.358 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:30:20.358 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:30:20.358 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:30:20.358 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:30:20.358 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:30:20.358 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:30:20.358 Removing: /var/run/dpdk/spdk1/hugepage_info
00:30:20.358 Removing: /var/run/dpdk/spdk1/mp_socket
00:30:20.358 Removing: /var/run/dpdk/spdk2/config
00:30:20.358 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:30:20.358 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:30:20.358 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:30:20.358 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:30:20.358 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:30:20.358 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:30:20.358 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:30:20.358 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:30:20.358 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:30:20.358 Removing: /var/run/dpdk/spdk2/hugepage_info
00:30:20.358 Removing: /var/run/dpdk/spdk3/config
00:30:20.358 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:30:20.358 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:30:20.358 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:30:20.358 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:30:20.358 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:30:20.358 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:30:20.358 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:30:20.358 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:30:20.616 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:30:20.616 Removing: /var/run/dpdk/spdk3/hugepage_info
00:30:20.616 Removing: /var/run/dpdk/spdk4/config
00:30:20.616 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:30:20.616 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:30:20.616 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:30:20.616 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:30:20.616 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:30:20.616 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:30:20.616 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:30:20.616 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:30:20.616 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:30:20.616 Removing: /var/run/dpdk/spdk4/hugepage_info
00:30:20.616 Removing: /dev/shm/bdevperf_trace.pid2340663
00:30:20.616 Removing: /dev/shm/bdevperf_trace.pid2444025
00:30:20.616 Removing: /dev/shm/bdev_svc_trace.1
00:30:20.616 Removing: /dev/shm/nvmf_trace.0
00:30:20.616 Removing: /dev/shm/spdk_tgt_trace.pid2161058
00:30:20.616 Removing: /var/run/dpdk/spdk0
00:30:20.616 Removing: /var/run/dpdk/spdk1
00:30:20.616 Removing: /var/run/dpdk/spdk2
00:30:20.616 Removing: /var/run/dpdk/spdk3
00:30:20.616 Removing: /var/run/dpdk/spdk4
00:30:20.616 Removing: /var/run/dpdk/spdk_pid2154861
00:30:20.616 Removing: /var/run/dpdk/spdk_pid2157020
00:30:20.616 Removing: /var/run/dpdk/spdk_pid2161058
00:30:20.616 Removing: /var/run/dpdk/spdk_pid2162677
00:30:20.616 Removing: /var/run/dpdk/spdk_pid2171585
00:30:20.616 Removing: /var/run/dpdk/spdk_pid2173060
00:30:20.616 Removing: /var/run/dpdk/spdk_pid2173376
00:30:20.616 Removing: /var/run/dpdk/spdk_pid2173705
00:30:20.616 Removing: /var/run/dpdk/spdk_pid2174039
00:30:20.616 Removing: /var/run/dpdk/spdk_pid2174363
00:30:20.616 Removing: /var/run/dpdk/spdk_pid2174652
00:30:20.616 Removing: /var/run/dpdk/spdk_pid2174865
00:30:20.616 Removing: /var/run/dpdk/spdk_pid2175105
00:30:20.616 Removing: /var/run/dpdk/spdk_pid2176116
00:30:20.616 Removing: /var/run/dpdk/spdk_pid2179689
00:30:20.616 Removing: /var/run/dpdk/spdk_pid2180050
00:30:20.616 Removing: /var/run/dpdk/spdk_pid2180486
00:30:20.616 Removing: /var/run/dpdk/spdk_pid2180559
00:30:20.616 Removing: /var/run/dpdk/spdk_pid2181125
00:30:20.616 Removing: /var/run/dpdk/spdk_pid2181235
00:30:20.616 Removing: /var/run/dpdk/spdk_pid2181717
00:30:20.616 Removing: /var/run/dpdk/spdk_pid2181979
00:30:20.616 Removing: /var/run/dpdk/spdk_pid2182279
00:30:20.616 Removing: /var/run/dpdk/spdk_pid2182296
00:30:20.616 Removing: /var/run/dpdk/spdk_pid2182584
00:30:20.616 Removing: /var/run/dpdk/spdk_pid2182763
00:30:20.616 Removing: /var/run/dpdk/spdk_pid2183260
00:30:20.616 Removing: /var/run/dpdk/spdk_pid2183623
00:30:20.616 Removing: /var/run/dpdk/spdk_pid2183955
00:30:20.616 Removing: /var/run/dpdk/spdk_pid2184303
00:30:20.616 Removing: /var/run/dpdk/spdk_pid2184473
00:30:20.616 Removing: /var/run/dpdk/spdk_pid2184646
00:30:20.616 Removing: /var/run/dpdk/spdk_pid2185036
00:30:20.616 Removing: /var/run/dpdk/spdk_pid2185321
00:30:20.616 Removing: /var/run/dpdk/spdk_pid2185596
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2185804
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2185958
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2186191
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2186457
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2186742
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2187017
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2187298
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2187516
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2187719
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2187886
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2188159
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2188430
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2188718
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2188986
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2189267
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2189430
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2189632
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2189846
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2190135
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2190401
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2190684
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2190950
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2191161
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2191313
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2191546
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2191820
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2192101
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2192383
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2192672
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2192913
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2193102
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2193273
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2193544
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2193812
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2194101
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2194371
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2194655
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2194721
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2195065
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2199904
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2302095
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2306926
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2318606
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2324720
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2329193
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2330012
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2340663
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2340985
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2345998
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2352606
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2355190
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2367465
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2395714
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2399974
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2405174
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2441829
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2442822
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2444025
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2448890
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2457694
00:30:20.873 Removing: /var/run/dpdk/spdk_pid2458959
00:30:21.130 Removing: /var/run/dpdk/spdk_pid2460027
00:30:21.130 Removing: /var/run/dpdk/spdk_pid2460847
00:30:21.130 Removing: /var/run/dpdk/spdk_pid2461373
00:30:21.130 Removing: /var/run/dpdk/spdk_pid2466433
00:30:21.130 Removing: /var/run/dpdk/spdk_pid2466501
00:30:21.130 Removing: /var/run/dpdk/spdk_pid2471732
00:30:21.130 Removing: /var/run/dpdk/spdk_pid2472272
00:30:21.130 Removing: /var/run/dpdk/spdk_pid2472872
00:30:21.130 Removing: /var/run/dpdk/spdk_pid2473612
00:30:21.130 Removing: /var/run/dpdk/spdk_pid2473647
00:30:21.130 Removing: /var/run/dpdk/spdk_pid2476063
00:30:21.130 Removing: /var/run/dpdk/spdk_pid2477949
00:30:21.130 Removing: /var/run/dpdk/spdk_pid2479831
00:30:21.130 Removing: /var/run/dpdk/spdk_pid2481749
00:30:21.130 Removing: /var/run/dpdk/spdk_pid2483628
00:30:21.130 Removing: /var/run/dpdk/spdk_pid2485570
00:30:21.130 Removing: /var/run/dpdk/spdk_pid2492653
00:30:21.130 Removing: /var/run/dpdk/spdk_pid2493118
00:30:21.130 Removing: /var/run/dpdk/spdk_pid2495441
00:30:21.130 Removing: /var/run/dpdk/spdk_pid2496738
00:30:21.130 Removing: /var/run/dpdk/spdk_pid2504985
00:30:21.130 Removing: /var/run/dpdk/spdk_pid2507736
00:30:21.130 Removing: /var/run/dpdk/spdk_pid2513915
00:30:21.130 Removing: /var/run/dpdk/spdk_pid2514183
00:30:21.130 Removing: /var/run/dpdk/spdk_pid2520925
00:30:21.130 Removing: /var/run/dpdk/spdk_pid2521395
00:30:21.130 Removing: /var/run/dpdk/spdk_pid2523323
00:30:21.130 Removing: /var/run/dpdk/spdk_pid2526709
Clean 00:30:21.130 killing process with pid 2098043 00:30:39.244 killing process with pid 2098040 00:30:39.244 killing process with pid 2098042 00:30:39.244 killing process with pid 2098041 00:30:39.244 11:54:07 -- common/autotest_common.sh@1436 -- # return 0 00:30:39.244 11:54:07 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:30:39.244 11:54:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:39.244 11:54:07 -- common/autotest_common.sh@10 -- # set +x 00:30:39.244 11:54:07 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:30:39.244 11:54:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:39.244 11:54:07 -- common/autotest_common.sh@10 -- # set +x 00:30:39.244 11:54:07 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:30:39.244 11:54:07 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:30:39.244 11:54:07 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:30:39.244 11:54:07 -- spdk/autotest.sh@394 -- # hash lcov 00:30:39.244 11:54:07 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:30:39.244 11:54:07 -- spdk/autotest.sh@396 -- # hostname 00:30:39.244 11:54:07 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-21 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:30:39.244 geninfo: WARNING: invalid characters removed from testname! 00:30:57.324 11:54:25 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:59.225 11:54:28 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:31:00.601 11:54:29 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:31:01.988 11:54:31 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:31:03.890 11:54:32 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
00:31:03.890 11:54:32 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:31:05.265 11:54:34 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:31:06.639 11:54:35 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:31:06.639 11:54:36 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:31:06.639 11:54:36 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:31:06.639 11:54:36 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:31:06.639 11:54:36 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:31:06.639 11:54:36 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:06.639 11:54:36 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:06.639 11:54:36 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:06.639 11:54:36 -- paths/export.sh@5 -- $ export PATH
00:31:06.639 11:54:36 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:06.639 11:54:36 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
00:31:06.639 11:54:36 -- common/autobuild_common.sh@435 -- $ date +%s
00:31:06.639 11:54:36 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1721555676.XXXXXX
00:31:06.639 11:54:36 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1721555676.Km0aOz
00:31:06.639 11:54:36 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:31:06.639 11:54:36 -- common/autobuild_common.sh@441 -- $ '[' -n v23.11 ']'
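The coverage steps traced above are lcov's standard capture, merge, and filter sequence; a condensed sketch in Bash (the --rc flags mirror this run, paths are shortened, and the working directory is assumed writable):

    rc='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    # Capture counters from the instrumented tree, tagging the run with the hostname.
    lcov $rc -q -c -d ./spdk -t "$(hostname)" -o cov_test.info
    # Merge the pre-test baseline with the test capture.
    lcov $rc -q -a cov_base.info -a cov_test.info -o cov_total.info
    # Strip out-of-tree and system sources, as the filter passes above do.
    lcov $rc -q -r cov_total.info '*/dpdk/*' -o cov_total.info
    lcov $rc -q -r cov_total.info '/usr/*' -o cov_total.info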
00:31:06.639 11:54:36 -- common/autobuild_common.sh@442 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
00:31:06.639 11:54:36 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk'
00:31:06.639 11:54:36 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp'
00:31:06.639 11:54:36 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:31:06.639 11:54:36 -- common/autobuild_common.sh@451 -- $ get_config_params
00:31:06.639 11:54:36 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:31:06.639 11:54:36 -- common/autotest_common.sh@10 -- $ set +x
00:31:06.904 11:54:36 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build'
00:31:06.904 11:54:36 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112
00:31:06.904 11:54:36 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:31:06.904 11:54:36 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:31:06.904 11:54:36 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:31:06.904 11:54:36 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:31:06.904 11:54:36 -- spdk/autopackage.sh@19 -- $ timing_finish
00:31:06.904 11:54:36 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:31:06.904 11:54:36 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:31:06.904 11:54:36 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt
00:31:06.904 11:54:36 -- spdk/autopackage.sh@20 -- $ exit 0
00:31:06.914 + [[ -n 2044042 ]]
00:31:06.914 + sudo kill 2044042
00:31:06.923 [Pipeline] }
00:31:06.933 [Pipeline] // stage
00:31:06.939 [Pipeline] }
00:31:06.956 [Pipeline] // timeout
00:31:06.961 [Pipeline] }
00:31:06.978 [Pipeline] // catchError
00:31:06.983 [Pipeline] }
00:31:07.001 [Pipeline] // wrap
00:31:07.007 [Pipeline] }
00:31:07.017 [Pipeline] // catchError
00:31:07.025 [Pipeline] stage
00:31:07.026 [Pipeline] { (Epilogue)
00:31:07.039 [Pipeline] catchError
00:31:07.040 [Pipeline] {
00:31:07.055 [Pipeline] echo
00:31:07.057 Cleanup processes
00:31:07.063 [Pipeline] sh
00:31:07.392 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:31:07.392 2550179 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:31:07.410 [Pipeline] sh
00:31:07.696 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:31:07.696 ++ grep -v 'sudo pgrep'
00:31:07.696 ++ awk '{print $1}'
00:31:07.696 + sudo kill -9
00:31:07.696 + true
00:31:07.708 [Pipeline] sh
00:31:07.989 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:31:07.989 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:31:14.547 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:31:17.842 [Pipeline] sh
00:31:18.127 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:31:18.127 Artifacts sizes are good
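The leftover-process sweep in the epilogue above is a three-stage pipeline plus a deliberate fall-through; as a standalone sketch (workspace path taken from this job):

    # Kill anything still running out of the workspace; succeed even if nothing matches.
    ws=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    pids=$(sudo pgrep -af "$ws" | grep -v 'sudo pgrep' | awk '{print $1}')
    sudo kill -9 $pids || true   # mirrors the '+ true' after the kill in the trace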
00:31:18.143 [Pipeline] archiveArtifacts
00:31:18.151 Archiving artifacts
00:31:18.377 [Pipeline] sh
00:31:18.667 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-phy-autotest
00:31:18.682 [Pipeline] cleanWs
00:31:18.692 [WS-CLEANUP] Deleting project workspace...
00:31:18.692 [WS-CLEANUP] Deferred wipeout is used...
00:31:18.700 [WS-CLEANUP] done
00:31:18.702 [Pipeline] }
00:31:18.730 [Pipeline] // catchError
00:31:18.746 [Pipeline] sh
00:31:19.025 + logger -p user.info -t JENKINS-CI
00:31:19.035 [Pipeline] }
00:31:19.054 [Pipeline] // stage
00:31:19.060 [Pipeline] }
00:31:19.079 [Pipeline] // node
00:31:19.086 [Pipeline] End of Pipeline
00:31:19.122 Finished: SUCCESS